The Fast atmOspheric traCe gAs retrievaL (FOCAL) algorithm was originally developed to derive XCO2 from OCO-2 measurements (Reuter et al., 2017a,b). The FOCAL method has since also been successfully applied to measurements of the Greenhouse gases Observing SATellites GOSAT and GOSAT-2 (Noël et al., 2021).
FOCAL has proven to be a fast and accurate retrieval method, well suited to the challenges of forthcoming greenhouse gas missions producing large amounts of data, and it is one of the foreseen operational algorithms for the forthcoming CO2M mission. The FOCAL retrieval results delivered by the University of Bremen are the baseline for the new GOSAT XCO2 products of the Copernicus Atmosphere Monitoring Service (CAMS).
In this presentation we will show recent results from GOSAT and GOSAT-2 FOCAL retrievals for XCO2 and other gases, e.g. methane (XCH4, both full-physics (FP) and proxy products), water vapour (XH2O) and HDO (δD). For GOSAT-2, we will also present results for carbon monoxide (XCO) and nitrous oxide (XN2O). This will include comparisons with independent data sets.
References:
Noël, S., M. Reuter, M. Buchwitz, J. Borchardt, M. Hilker, H. Bovensmann, J. P. Burrows, A. Di Noia, H. Suto, Y. Yoshida, M. Buschmann, N. M. Deutscher, D. G. Feist, D. W. T. Griffith, F. Hase, R. Kivi, I. Morino, J. Notholt, H. Ohyama, C. Petri, J. R. Podolske, D. F. Pollard, M. K. Sha, K. Shiomi, R. Sussmann, Y. Té, V. A. Velazco and T. Warneke, XCO2 retrieval for GOSAT and GOSAT-2 based on the FOCAL algorithm, Atmos. Meas. Tech., 14(5), 3837-3869, 2021, doi:10.5194/amt-14-3837-2021. URL https://amt.copernicus.org/articles/14/3837/2021/
Reuter, M., M. Buchwitz, O. Schneising, S. Noël, V. Rozanov, H. Bovensmann and J. P. Burrows, A fast atmospheric trace gas retrieval for hyperspectral instruments approximating multiple scattering - part 1: Radiative transfer and a potential OCO-2 XCO2 retrieval setup, Rem. Sens., 9(11), 1159, 2017a, ISSN 2072-4292, doi:10.3390/rs9111159. URL http://www.mdpi.com/2072-4292/9/11/1159
Reuter, M., M. Buchwitz, O. Schneising, S. Noël, H. Bovensmann and J. P. Burrows, A fast atmospheric trace gas retrieval for hyperspectral instruments approximating multiple scattering - part 2: Application to XCO2 retrievals from OCO-2, Rem. Sens., 9(11), 1102, 2017b, ISSN 2072-4292, doi:10.3390/rs9111102. URL http://www.mdpi.com/2072-4292/9/11/1102
Anthropogenic emissions from cities and power plants contribute significantly to air pollution and climate change. Their emission plumes are visible in satellite images of atmospheric trace gases (e.g. CO₂, CH₄, NO₂, CO and SO₂) and data-driven approaches are increasingly being used for quantifying the sources.
We present an open-source software library written in Python for detecting and quantifying emissions in satellite images. The library provides all processing steps from the pre-processing of the satellite images, the detection of the plumes and the quantification of emissions, to the extrapolation of individual estimates to annual emissions. The plume detection algorithm identifies regions in satellite images that are significantly enhanced above the background and assigns them to a list of potential sources such as cities, power plants or other facilities. Overlapping plumes are automatically detected and segmented. The plume shape is described by a set of polygons and a centerline along the plume ridge. Functions are available for converting geographic coordinates (longitude and latitude) to along- and across-plume coordinates. The emissions can be quantified using various data-driven methods such as computing cross-sectional fluxes or fitting a Gaussian plume model. The models can account for the decay of, for example, NO₂ downstream of the source. Furthermore, it is possible to fit two species simultaneously (e.g. CO₂ and NO₂) to constrain the shape of the CO₂ plume using NO₂ observations, which typically have better accuracy. Annual emissions can be obtained by fitting a periodic C-spline to a time series of individual estimates.
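The following minimal sketch illustrates the cross-sectional flux principle used by such data-driven methods; it is a generic example in plain Python/NumPy (function and variable names are ours, not the ddeq API): the emission rate is the across-plume integral of the column enhancement multiplied by the effective wind speed.

import numpy as np

def cross_sectional_flux(enhancement, across_coords, wind_speed):
    # enhancement   : column enhancement above background along one cross
    #                 section of the plume (kg m-2), assumed pre-computed
    # across_coords : across-plume coordinate of each sample (m)
    # wind_speed    : effective wind speed at the cross section (m s-1)
    # returns the emission estimate in kg s-1
    line_density = np.sum(0.5 * (enhancement[1:] + enhancement[:-1])
                          * np.diff(across_coords))      # kg m-1 (trapezoid rule)
    return wind_speed * line_density                     # kg s-1

# Example with synthetic numbers: a Gaussian-shaped CO2 enhancement
y = np.linspace(-4000.0, 4000.0, 161)                    # m
delta = 2e-4 * np.exp(-0.5 * (y / 800.0) ** 2)           # kg m-2 (about 0.5 ppm of XCO2)
print(cross_sectional_flux(delta, y, wind_speed=5.0))    # roughly 2 kg s-1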
A tutorial is available using Jupyter Notebooks to introduce the features of the library. Examples are demonstrated for Sentinel-5P NO₂ observations and for synthetic CO₂ and NO₂ satellite observations available for the CO2M satellite constellation. The library and its tutorial are available on Gitlab (https://gitlab.com/empa503/remote-sensing/ddeq) and can conveniently be installed using Python's package installer:
python -m pip install ddeq
The library is licensed under the "GNU Lesser General Public License" and can therefore be used in both open-source and proprietary software. Interested users are encouraged to contribute to the development of the library by reporting bugs, requesting or implementing new features and applying the library for detecting and quantifying emission plumes. If you are interested in contributing to the development of the software library, please contact the developers.
Methane is one of the most powerful greenhouse gases (GHG), with about 84 times the warming potential of carbon dioxide over a 20-year time horizon. According to the latest IPCC AR6 report, a strong, rapid and sustained reduction of GHG emissions would limit the warming effect and improve air quality. About 20% of global methane emissions come from the fossil fuel industry; these emissions contribute roughly 0.1 °C of the 0.5 °C of global warming attributed to methane.
TROPOMI, the TROpospheric Monitoring Instrument on board Sentinel-5P, can play a key role in tackling methane emissions in the largest oil and gas producing region of the United States, the Permian basin.
During the COVID-19 lockdown in 2020, TROPOMI was able to capture the reduction of the maximum tropospheric methane concentrations in the two most productive sub-basins (Delaware and Midland) and the increase of the minimum and average values in both. With the latest changes in the algorithm, the methane retrievals from TROPOMI have improved, not only spatially but also temporally, increasing the daily spatial coverage of the Permian basin.
In order to illustrate the implications of the new algorithm for the use of TROPOMI for methane emissions, different cases showing plumes in the Permian basin have been studied with the support of Sentinel-2. Using the ratio between bands 12 and 11 on the day of the detected plume and the median of the scene from one month before and one month after, it has been possible to compare the new methane retrievals obtained with TROPOMI against the retrievals obtained with Sentinel-2. The use of Sentinel-2 in the Permian basin, a difficult area for source identification because of the density of O&G facilities, has also returned a diverse list of false methane detections obtained with the Sentinel-2 band ratios.
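As an illustration of the band-ratio comparison described above, the sketch below computes the fractional change of the Sentinel-2 B12/B11 ratio on the plume day relative to the median of reference scenes acquired about one month before and after; the inputs are assumed to be co-registered reflectance arrays, and the threshold is purely illustrative.

import numpy as np

def methane_ratio_anomaly(b12_day, b11_day, b12_ref_stack, b11_ref_stack):
    # Fractional change of the B12/B11 ratio relative to the reference median;
    # negative values indicate extra absorption in band 12 (~2200 nm), the
    # signature used to flag candidate methane plumes.
    ratio_day = b12_day / b11_day
    ratio_ref = np.nanmedian(b12_ref_stack / b11_ref_stack, axis=0)
    return ratio_day / ratio_ref - 1.0

# anomaly = methane_ratio_anomaly(b12_day, b11_day, b12_refs, b11_refs)
# candidates = anomaly < -0.05   # illustrative threshold, not taken from this study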
The comparison of the different possible sources detected with Sentinel-2 with the TROPOMI retrievals shows a spatial relationship with sources identified as flare stacks that were unlit on the day of the detected plume but are usually lit. Other identified sources, e.g. flare stacks lit on the day of the identified plume, were also located in the same area or its surroundings, but in smaller numbers.
The improvement of the TROPOMI methane algorithm will play a crucial role in the development of cost-efficient LDAR (Leak Detection And Repair) activities, narrowing down source locations, shortening the response time and reducing the methane released to the atmosphere.
Using satellite data to estimate carbon dioxide (CO2) emissions from anthropogenic sources has become increasingly important since the Paris Agreement was adopted in 2015, owing to the global coverage such data provide. The first study estimating CO2 emissions from individual power plants using satellite data was published in 2017 (Nassar et al., 2017). In recent years, the literature has been rapidly expanding with several new approaches and case studies. Many of the proposed techniques for estimating CO2 emissions from local sources are based on single satellite overpasses (e.g., Varon et al., 2018). To estimate nitrogen oxide (NOx) emissions from averaged NO2 columns, statistical methods (i.e., based on multiple spatially co-located observations) are often applied. In Europe, one of the key activities responding to the Paris Agreement's goal of monitoring anthropogenic CO2 is the Copernicus Carbon Dioxide Monitoring mission (CO2M).
In this work, we discuss the use of statistical methods for estimating CO2 emissions. The advantage of statistical methods is that they do not require complex atmospheric modeling, and they generally provide more robust emission estimates than individual satellite overpasses. In addition, these methods have been successfully applied to instruments and locations where individual plumes are not detectable but the emission signal becomes visible when multiple scenes are averaged. In particular, we use the divergence method, developed originally for NO2 by Beirle et al. (2019), to estimate CO2 emissions from the synthetic SMARTCARB dataset (Kuhlmann et al., 2020), which was created in preparation for the upcoming CO2M mission. We analyze the effect of different denoising techniques on the CO2 emission estimates. In addition, we estimate source-specific NOx-to-CO2 emission ratios and discuss converting the estimated NOx emissions to CO2 emissions.
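For reference, the core of the divergence method can be written in a few lines: the horizontal flux is the (background-subtracted) column times the effective wind, and its divergence, averaged over many overpasses, maps the sources and sinks. The sketch below is a hedged, generic NumPy illustration, not the actual processing code used in this work.

import numpy as np

def flux_divergence(vcd, u, v, dx, dy):
    # vcd    : 2-D background-subtracted column field (e.g. mol m-2)
    # u, v   : effective wind components on the same grid (m s-1)
    # dx, dy : grid spacing in metres
    # returns the emission density field (e.g. mol m-2 s-1)
    fx, fy = vcd * u, vcd * v
    return np.gradient(fx, dx, axis=1) + np.gradient(fy, dy, axis=0)

# Temporal averaging of flux_divergence(...) over many scenes suppresses random
# noise; integrating the averaged field over a small area around a point source
# then yields its mean emission rate.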
Methane is the world's second most important anthropogenic greenhouse gas. Its concentration is rising as a result of a variety of factors, including agriculture (e.g., livestock and rice production) and energy generation (mining and use of fuels). Some natural processes, such as the release of methane from natural wetlands, have also changed as a result of human intervention and climate change.
An important uncertainty in the modelling of methane emissions from natural wetlands is the wetland area. It is difficult to model because of several factors, including its spatial heterogeneity on a large range of scales. As we demonstrate using simulations spanning a large range in resolution, getting the spatiotemporal covariance between the variables that drive methane emissions right is critical for accurate emission quantification. This is done using a high-resolution wetland map (100 × 100 m²) and soil carbon map (250 × 250 m²) of the Fenno-Scandinavian Peninsula, in combination with a highly simplified CH₄ emission model that is coarsened in six steps from 0.005° to 1°.
We find a strong relation between wetland emissions and resolution (up to 12 times higher CH₄ emissions for high resolution compared to low resolution), which is sensitive, however, to the sub-grid treatment of the wetland fraction.
As soil moisture is likely to exert a strong control on the temporal and spatial variability of CH₄ emissions from wetlands, we try to improve CH₄ emission estimates using high-resolution remote sensing soil moisture datasets, in comparison to modelled soil moisture obtained from the global hydrological model PCR-GLOBWB (PCRG). FluxNet CH₄ observations for 9 selected sites spread over the northern hemisphere were used to validate our simplified model results over the period 2015-2019. As we will show, realistic estimates can be obtained using a highly simplified representation of CH₄ emissions at high resolution, which is a promising step towards reducing the significant uncertainties in the modelling of CH₄ emissions at local and regional scales.
The global growth rate of methane in the atmosphere shows large fluctuations, the explanation of which has been a major source of controversy in the scientific literature. The renewed methane increase after 2007 has been attributed to either natural or anthropogenic sources, with the latter dominated either by agricultural or by fossil emissions. Interannual variability in the hydroxyl radical (OH), the main atmospheric sink of methane, has also been proposed as the dominant driver of the temporary pause in the methane increase prior to 2007. The average atmospheric methane abundance over the past 5 years is the highest since atmospheric measurements started in the mid-1980s, with record-high growth in 2020 despite the pandemic. As a result, methane is by far the largest contributor to the departure from the path towards the 2 °C target. Again, the exact causes of this record-high growth are still under discussion, and it is clear that the role of OH needs to be considered as well.
This shows that atmospheric monitoring of methane is needed, but also that current capabilities are still insufficient to provide conclusive answers about its global drivers. One way to better address this is to try to better resolve the 3D distribution of methane in the atmosphere, realising that the sinks and sources have different vertical distributions.
Methane has been measured successfully from space using both SWIR and TIR observations. Recently the TROPOMI instrument has made a huge step forward in SWIR observations from space, and IASI sensors have been providing TIR observations for over a decade and will continue to do so. Inverse modelling studies aiming to resolve global sources (and sometimes also sinks) using satellite measurements have been performed, mostly using SWIR data for methane. However, SWIR and TIR have very different height sensitivities for methane in the atmosphere, which, when combined, should in principle provide us with a better-resolved 3D distribution of methane and thereby a better handle on OH as well.
The ESA METHANE+ project aims at using both TROPOMI SWIR and IASI TIR measurements to better disentangle the sources and sinks of methane. The project puts effort, on the one hand, into improving the respective satellite data products, while on the other hand it focuses on using both datasets in an inverse modelling framework. We will present an overview of the project.
FengYun-3D (FY-3D) is a Chinese polar-orbiting meteorological satellite, launched in November 2017, which carries a new payload known as the Greenhouse-gases Absorption Spectrometer (GAS) for monitoring CO2, CH4, CO, and N2O. The primary purpose of GAS/FY-3D is to estimate emissions and absorptions of the greenhouse gases on a sub-continental scale (several thousand square kilometres) more accurately and to assist environmental administrations in evaluating the carbon balance of the land ecosystem and in making assessments of regional emissions and absorptions. GAS is an instrument that utilizes optical interference to achieve a high spectral resolution of 0.2 cm-1. The basic characteristics of GAS, namely the signal-to-noise ratio (SNR), the spectral response and the instrumental line shape (ILS) function, were tested on orbit for nearly 8 months in 2018. They all meet the requirements except the SNR in the 0.76 µm band, which is affected by micro-vibration on orbit.
The column-averaged dry-air mole fraction XCO2 is given by

XCO2 = 0.2095 × CO2 / O2,   (1)

where CO2 is the retrieved absolute CO2 column (in molecules/cm²), O2 is the retrieved absolute O2 column (in molecules/cm²), and 0.2095 is the assumed (column-averaged) mole fraction of O2, which is used to convert the O2 column into a corresponding dry-air column.
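A quick numerical check of Eq. (1) with illustrative (assumed) column values:

o2_column  = 4.45e24   # retrieved absolute O2 column, molecules/cm2 (assumed)
co2_column = 8.5e21    # retrieved absolute CO2 column, molecules/cm2 (assumed)
xco2 = 0.2095 * co2_column / o2_column
print(xco2 * 1e6, "ppm")   # about 400 ppm for these example columns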
As is well known, oxygen is an accurate proxy for the dry-air column because its mole fraction shows negligibly small variations. To remove the influence of the degraded GAS O2-A band on the XCO2 retrieval, the absolute O2 column retrieved by GOSAT is used in place of that from GAS. To analyze the uncertainty introduced by this substitution, the spatiotemporally interpolated GOSAT absolute O2 columns are compared with OCO-2. Finally, the XCO2 retrieved from GAS is compared with TCCON.
We present our latest results towards the retrieval of methane (CH4) and carbon dioxide (CO2) concentrations on small (local) and large scales using short-wave infrared (SWIR) observations from airborne and spaceborne sensors.
The code developments are based on Py4CAtS (Python for Computational Atmospheric Spectroscopy), a Python reimplementation of GARLIC, the Generic Atmospheric Radiative Transfer Line-by-line Infrared Code coupled to BIRRA (Beer InfraRed Retrieval Algorithm). BIRRA-GARLIC has recently been validated with TCCON (Total Carbon Column Observing Network) and NDACC (Network for the Detection of Atmospheric Composition Change) ground based measurements.
The software suite BIRRA-Py4CAtS utilizes line data from latest spectroscopic databases such as the SEOM–IAS (Scientific Exploitation of Operational Missions–Improved Atmospheric Spectroscopy) and includes parameterization for Rayleigh and aerosol extinction. Moreover, the latest Py4CAtS version accounts for continuum absorption by means of collision induced absorption (CIA) and facilitates a wide variety of analytical and tabulated instrument spectral response functions. Current developments of the inverse solver BIRRA are directed towards the physical approximation of atmospheric scattering and co-retrieval of effective scattering parameters in order to account for light path modifications when estimating small scale CO2 or CH4 variations.
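At the heart of any such SWIR forward model is Beer-Lambert extinction; the following monochromatic toy example (not the actual Py4CAtS/BIRRA code) shows how modelled transmission follows from absorber columns and cross sections, with column scaling factors being the quantities adjusted by the least-squares fit in BIRRA.

import numpy as np

def transmission(cross_sections, columns):
    # cross_sections : dict gas -> absorption cross-section spectrum (cm2/molecule)
    # columns        : dict gas -> vertical column (molecules/cm2)
    # returns the transmittance spectrum exp(-total optical depth)
    tau = sum(cross_sections[g] * columns[g] for g in columns)
    return np.exp(-tau)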
Methane retrieval results are shown for SWIR observations acquired on a local scale by an airborne HySpex sensor during the CoMet (CO2 and Methane, see Atm. Meas. Tech. special issue) campaign. The retrieval of carbon dioxide is assessed with GOSAT (Greenhouse Gases Observing Satellite) observations. Synthetic/simulated spectra are examined to study the sensitivity of various retrieval setups.
An increasing fleet of space-based, Earth-observing instruments provides coverage of the distribution of the greenhouse gas methane across a range of spatio-temporal scales. In this work, we focus on the synergy between TROPOMI and Sentinel-2 over active oil and gas production regions in Algeria. TROPOMI provides daily global coverage of methane at 7 × 5.5 km² resolution, and one of its primary applications is to constrain the global methane distribution. Sentinel-2 provides global coverage every few days at 30 m resolution, but with only a few broad spectral bands it is only able to inform on the largest methane point-source signals.
In the TROPOMI data over eastern Algeria, large methane plume signals have been detected, most likely originating from large point sources. It is difficult to trace the source location of the plumes based on TROPOMI alone, due to its comparatively coarse resolution. Instead, we employ the high-resolution Sentinel-2 data to trace the source locations of these super-emitters to facility level. The point-source locations are then combined with a generic bottom-up emission inventory and used as input for a TROPOMI inversion with the Weather Research and Forecasting (WRF) Model to estimate 2020 emissions from Algerian oil and gas fields. Thus, the Sentinel-2 data allow us to track down point-source locations, while the TROPOMI data provide the best integrated emission quantification of the entire region. With this novel approach, we show that we can optimize both point sources and more diffuse emissions in one systematic framework. In this way, we generate a full emission characterization of the region, in which we estimate the individual contributions of super-emitters and of diffuse emissions to the methane emission total. This unique information is highly valuable for developing efficient mitigation measures that target oil and gas methane emissions and, by extension, their impact on the global climate.
CO2 (carbon dioxide) is the most important anthropogenic greenhouse gas driving global climate change. Despite this, there are still large uncertainties in our understanding of anthropogenic and natural carbon fluxes to the atmosphere. Satellite observations of the Essential Climate Variable CO2 have the potential to significantly improve this situation. Therefore, a key objective of ESA's GHG-CCI+ project is to further develop the satellite retrieval algorithms needed to generate new high-quality satellite-derived XCO2 (column-averaged dry-air mole fraction of atmospheric CO2) data products. One of these algorithms is the fast atmospheric trace gas retrieval FOCAL for OCO-2. FOCAL has also been applied to other satellite instruments (e.g., GOSAT and GOSAT-2) and its development is co-funded by EUMETSAT, as it is a candidate to become one of the CO2M retrieval algorithms operated in EUMETSAT's ground segment. In our presentation, we will discuss the most recent retrieval developments incorporated in FOCAL-OCO2 v10 and present the corresponding improved XCO2 data product, which is part of ESA's GHG-CCI+ climate research data package 7 (CRDP7). The retrieval developments comprise a new cloud filtering technique by means of a random forest classifier, usage of a new CO2 a priori climatology, a new bias correction scheme using a random forest regressor, modifications of the radiative transfer, and others. The improved global data product exhibits about three times higher data density and spans a time period of eight years (2014-2021). The results of a validation study using TCCON data will also be presented.
M. Reuter, M. Buchwitz, O. Schneising, S. Noël, V. Rozanov, H. Bovensmann and J. P. Burrows: A Fast Atmospheric Trace Gas Retrieval for Hyperspectral Instruments Approximating Multiple Scattering - Part 1: Radiative Transfer and a Potential OCO-2 XCO2 Retrieval Setup, Remote Sensing, 9(11), 1159, doi:10.3390/rs9111159, 2017a
M. Reuter, M. Buchwitz, O. Schneising, S. Noël, H. Bovensmann and J. P. Burrows: A Fast Atmospheric Trace Gas Retrieval for Hyperspectral Instruments Approximating Multiple Scattering - Part 2: Application to XCO2 Retrievals from OCO-2, Remote Sensing, 9(11), 1102, doi:10.3390/rs9111102, 2017b
Noël, S., Reuter, M., Buchwitz, M., Borchardt, J., Hilker, M., Bovensmann, H., Burrows, J. P., Di Noia, A., Suto, H., Yoshida, Y., Buschmann, M., Deutscher, N. M., Feist, D. G., Griffith, D. W. T., Hase, F., Kivi, R., Morino, I., Notholt, J., Ohyama, H., Petri, C., Podolske, J. R., Pollard, D. F., Sha, M. K., Shiomi, K., Sussmann, R., Té, Y., Velazco, V. A., and Warneke, T.: XCO2 retrieval for GOSAT and GOSAT-2 based on the FOCAL algorithm, Atmospheric Measurement Techniques, 14, 3837-3869, doi:10.5194/amt-14-3837-2021, 2021
To support the ambition of national and EU legislators to substantially lower greenhouse gas (GHG) emissions as ratified in the Paris Agreement on Climate Change, an observation-based "top-down" GHG monitoring system is needed to complement and support the legally binding "bottom-up" reporting in national inventories. For this purpose, the European Commission is establishing an operational anthropogenic GHG emissions Monitoring and Verification Support (MVS) capacity as part of its Copernicus Earth observation programme. A constellation of three CO2, NO2, and CH4 monitoring satellites (CO2M) will be at the core of this MVS system. The satellites, to be launched from 2026, will provide images of CO2, NO2, and CH4 at a resolution of about 2 km × 2 km along a 250-km wide swath. This will not only allow observing the large-scale distribution of the two most important GHGs (CO2 and CH4), but also capturing the plumes of individual large point sources and cities.
Emissions of point sources can be quantified from individual images using a plume detection algorithm followed by data-driven methods that compute cross-sectional fluxes or fit Gaussian plume models. To estimate annual emissions, a sufficiently large number of estimates is required to limit the uncertainty due to the temporal variability of emissions. However, the number of detectable plumes is limited, either because the signal-to-noise ratio of individual plumes is too low or because neighboring plumes overlap. We present methods for increasing the number of plumes available for emission quantification using computer vision techniques, together with improved data-driven methods that can estimate emissions from overlapping plumes.
Using synthetic data generated in the SMARTCARB project (Kuhlmann et al., 2020), we show that a joint denoising of coincident CO2 and NO2 images can result in significantly improved signal-to-noise ratios for the individual images (notably, improving the peak signal-to-noise ratio of the CO2 images by +13 dB). Furthermore, using a generative adversarial neural network approach, we show that it is possible to fill in missing data due to, e.g., cloud cover, with wind direction information as an additional input to steer the interpolation of the missing data. This 'inpainting' method helps the segmentation step, as it becomes possible to connect otherwise disjoint parts of a plume. Finally, we show how plume detection may be made particularly receptive to plume-like features in satellite images (e.g., stretched-out and narrow enhancements over the background) using a ridge-detection method referred to as the Meijering filter, as sketched below.
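The sketch below applies the Meijering filter as implemented in scikit-image to a denoised, background-subtracted image; the parameter values are assumptions for illustration only.

import numpy as np
from skimage.filters import meijering

def enhance_plume_ridges(image, sigmas=(1, 2, 4)):
    # Returns a ridge-enhanced image in which elongated, narrow enhancements
    # (plume-like features) stand out against the background.
    img = np.nan_to_num(image, nan=0.0)   # assumes gaps were already inpainted
    return meijering(img, sigmas=sigmas, black_ridges=False)

# The ridge response can then be thresholded and segmented into plume masks.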
A remaining challenge is to quantify the emissions from overlapping plumes, e.g., when one point source lies downwind of another plume, or when two diffuse plumes are positioned close to each other. We developed a data-driven approach using a multi-plume model that alleviates this problem. First, the approach obtains a best-fitting center line for each of the individual plume sources, using effective wind information and the multimodal distribution in the CO2 and NO2 images as inputs. Once such center lines are available, a cross-sectional flux method assuming a Gaussian cross-sectional structure can be applied to the multiple plume sources simultaneously. The upstream part of each plume (prior to overlapping) can be used to constrain the estimated fluxes. An alternative solution is to find best-fitting parameters for two or more Gaussian plume models simultaneously to estimate the emissions of each point source.
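A minimal sketch of the multi-plume idea, assuming plume coordinates (x along each centre line, y across it) are already available: overlapping plumes are modelled as the sum of one column-density Gaussian plume per source, with one emission rate Q fitted per source; the dispersion parameters below are illustrative assumptions.

import numpy as np

def gaussian_plume_column(x, y, Q, u, sigma_0=200.0, growth=0.8):
    # Column enhancement (kg m-2) of a single plume.
    # Q : emission rate (kg s-1), the fit parameter of interest
    # u : effective wind speed (m s-1)
    # sigma_y(x) = sigma_0 * (x / 1 km)**growth is an assumed dispersion law
    x = np.asarray(x, dtype=float)
    sigma_y = sigma_0 * (np.maximum(x, 1.0) / 1000.0) ** growth
    col = Q / (np.sqrt(2.0 * np.pi) * sigma_y * u) * np.exp(-0.5 * (y / sigma_y) ** 2)
    return np.where(x > 0.0, col, 0.0)

# Fitting the sum of gaussian_plume_column(x_i, y_i, Q_i, u) over all sources i
# to the observed image yields one emission estimate per overlapping plume.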
The improvements in the plume detection algorithm and the multi-plume models for estimating emissions of overlapping plumes increase the number of satellite images from which emissions can be quantified. The larger number of emission estimates reduces the uncertainties in the estimated annual emissions of point sources.
Methane (CH₄) is an important anthropogenic greenhouse gas and its rising concentration in the atmosphere contributes significantly to global warming. Satellite measurements of the column-averaged dry-air mole fraction of atmospheric methane, denoted as XCH₄, can be used to detect and quantify the emissions of methane sources. This is important since emissions from many methane sources have a high uncertainty and some emission sources are unknown. In addition, sufficiently accurate long-term satellite measurements provide information on emission trends and other characteristics of the sources, which can help to improve emission inventories and review policies to mitigate climate change.
The Sentinel-5 Precursor (S5P) satellite with the TROPOspheric Monitoring Instrument (TROPOMI) onboard was launched in October 2017 into a sun-synchronous orbit with an equator crossing time of 13:30. TROPOMI measures reflected solar radiation in different wavelength bands to generate various data products and combines daily global coverage with high spatial resolution. TROPOMI's observations in the shortwave infrared (SWIR) spectral range yield methane columns with a horizontal resolution of typically 7 × 7 km².
We use a monthly XCH₄ data set (2018-2020) generated with the WFM-DOAS retrieval algorithm, developed at the University of Bremen, to detect locally enhanced methane concentrations originating from emission sources.
Our detection algorithm consists of several steps. First, we apply a spatial high-pass filter to the data set to remove large-scale methane fluctuations. The resulting anomaly (∆XCH₄) maps show the difference between the local XCH₄ values and their surroundings. We then use these monthly maps to identify regions with local methane enhancements by applying different filter criteria, such as the number of months in which the local methane anomaly ∆XCH₄ of a possible hot spot region must exceed a certain threshold value. In the last step, we calculate properties of the detected hot spot regions, such as the monthly averaged methane enhancement, and attribute the hot spots to potential emission sources by comparing them with inventories of anthropogenic methane emissions.
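The anomaly and persistence steps described above can be sketched as follows, assuming gap-free monthly XCH₄ grids; the window size and thresholds are placeholders, not the values used in the actual algorithm.

import numpy as np
from scipy.ndimage import median_filter

def xch4_anomaly(monthly_map, window=15):
    # Local XCH4 enhancement (ppb) relative to the surrounding background,
    # i.e. a spatial high-pass obtained by subtracting a median-filtered field.
    return monthly_map - median_filter(monthly_map, size=window)

def detect_hotspots(monthly_maps, threshold=10.0, min_months=6):
    # Flag grid cells whose anomaly exceeds `threshold` ppb in at least
    # `min_months` of the monthly maps.
    anomalies = np.stack([xch4_anomaly(m) for m in monthly_maps])
    return np.sum(anomalies > threshold, axis=0) >= min_months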
In this presentation, the algorithm and initial results concerning the detection of local methane enhancements by spatially localized methane sources (e.g. wetlands, coal mining areas, oil and gas fields) are presented.
Anthropogenic greenhouse gas (GHG) emissions in the Eastern Mediterranean and Middle East (EMME) have increased fivefold over the last five decades. Emission rates in this region were ~3.4 GtCO2eq/yr during the 2010s, accounting for ~7% of the global anthropogenic GHG emissions. Among the various GHGs emitted, methane (CH4) is of particular interest, given its stronger global warming potential relative to CO2 and the role of EMME as a key oil and gas producing region. Bottom-up inventories have reported that anthropogenic CH4 emissions in EMME were ~22.0 Tg/yr in the 2010s, of which ~70% were contributed by the oil and gas sectors. As inventory-based estimates often suffer from uncertainties in emission factors and activity statistics, independent budget estimates based on atmospheric observations, preferably at regional or national scales, are required to verify inventories and evaluate the effectiveness of climate mitigation measures. Meanwhile, the availability of satellite CH4 observations in the recent decade (notably GOSAT XCH4 and TROPOMI XCH4) provides new opportunities to constrain CH4 emissions in this region, previously underrepresented by ground-based observational networks. Here, we present a study of CH4 inverse modeling over EMME, using the Bayesian variational inversion system PYVAR-LMDz-SACS developed by LSCE, France, with satellite XCH4 observations. The inversion system takes advantage of the dense XCH4 observations from space and the zooming capability of the atmospheric transport model LMDz to resolve CH4 emissions in EMME at a spatial resolution of ~50 km. Instead of the default model settings for global CH4 inversions, we adapt the definition of the error structure in the inversion system wherever necessary to address issues with ultra-emitters (which are common in the study region) at the high spatial resolution. The inversion results are evaluated against independent observations within and outside the study region from various platforms, and compared with emission inventories and other global or regional inversion products. With these datasets and modeling tools, we aim to assess the variations in CH4 emissions in EMME at the scales where decision-making and climate actions take place.
Globally, the oil, gas and coal sectors are the main emitters of anthropogenic methane (CH4) from fossil fuel sources. Together, these sectors represent one third of total global anthropogenic CH4 emissions. Despite a reduction in some basins due to the COVID-19 pandemic, global emissions from the oil and gas sector have rapidly increased over the last decades. This study presents results of global atmospheric inversions and compares them to national reports and other emission inventories. Results show that inversions tend to estimate higher CH4 emissions compared to the national reports of oil-and-gas-producing countries like Russia, Kazakhstan, Turkmenistan and those located on the Arabian Peninsula. This difference might be partially explained by ultra-emitting events, consisting of large and sporadic emissions (greater than ≈ 20 tCH4 per hour), which are not considered in emission inventories. Ultra-emitters are especially important in some countries, such as Kazakhstan and Turkmenistan, where estimated ultra-emitter emissions are comparable (1.4 Tg yr-1) to the total fossil fuel emissions reported in their national inventories (1.5 Tg yr-1) and to half (on average) of the values reported in the other inventories that we analyzed. This study also considers emissions derived from regional inversions using S5P-TROPOMI atmospheric measurements at the scale of regional extraction basins for oil, gas and coal. Here, we assumed that those basins are already counted as part of the national CH4 budgets from in-situ-driven and GOSAT-driven inversions. Two coal basins, one in the USA and one in Australia, were considered. Also, six major oil and gas basins (3 in the USA, 2 in the Arabian Peninsula, and 1 in Iran) were considered as specific areas where many individual wells and storage facilities are concentrated. Averaged emissions (2019-2020) from the Bowen basin in Australia are greater than the 2017 emissions estimated by inversions. For the USA, emissions from all basins analyzed account for ~60% of the total USA fossil fuel emissions estimated by inversions. For oil and gas, a basin encompassing four of the highest oil-producing fields in the world (comprising Iraq and Kuwait) represents ~38% of the total fossil emissions estimated by inversions for the Arabian Peninsula. Lastly, the basin estimate for Iran (2.5 TgCH4) represents ~68% of the fossil fuel emissions from inversions and ~59% of independent inventories. Given the important role of the oil, gas and coal sectors in global anthropogenic emissions of CH4, our synthesis allows interpreting the main apparent differences between a large suite of recent emission estimates for these sectors.
The RAL Remote Sensing Group has developed an optimal estimation scheme to retrieve global height-resolved information on methane from IASI using the 7.9 µm band. This scheme uses pre-retrieved temperature, water vapour and surface spectral emissivity from the RAL Infrared Microwave Sounder (IMS) retrieval scheme, based on collocated data from IASI, MHS and AMSU. The IASI methane retrieval scheme has been used to reprocess the IASI MetOp-A record, producing a global 10-year v2.0 dataset (2007-17) (http://dx.doi.org/10.5285/f717a8ea622f495397f4e76f777349d1) and has also been applied to IASI on MetOp-B to extend the record to 2021.
While providing information on two independent vertical layers in the troposphere, sensitivity in the 7.9 µm band decreases towards the ground, due to decreasing thermal contrast between the atmosphere and the surface. A combined scheme exploiting the high signal-to-noise information from Sentinel-5P (SWIR/column) with that from IASI MetOp-B (TIR/height-resolved) would enable lower tropospheric distributions of methane to be resolved. Lower tropospheric concentrations are more closely related to emission sources than are column measurements, and inverse modelling of surface fluxes should be less sensitive to errors in the representation of transport at higher altitudes, which is a limiting factor for current schemes.
Here we present findings from the IASI methane v2.0 dataset and introduce the RAL SWIR-TIR scheme, which combines Level 2 products from Sentinel 5P and IASI/CrIS to resolve lower tropospheric methane and carbon monoxide.
The Arctic and boreal regions have unique and poorly understood natural carbon cycles as well as increasing anthropogenic activities, e.g. from the oil and gas industry sector. The evolution of the high-latitude carbon sources and sinks would be most comprehensively observed by satellites, in particular the planned Copernicus Anthropogenic CO2 Monitoring mission (CO2M). However, high latitudes pose significant challenges to reliable space-based observations of greenhouse gases. In addition to large solar zenith angles and frequent cloud cover, snow-covered surfaces absorb strongly in the near-infrared wavelengths. Because of the resulting low reflected radiances measured by the satellite in nadir geometry, retrievals over snow may be less reliable and are, for existing missions, typically filtered out or flagged for potentially poor quality.
Snow surfaces are strongly forward-scattering, and therefore the traditional nadir-viewing geometries over land might not be optimal; instead, the strongest signal could be attainable in glint-like geometries. In addition, the contributions from the 1.6 µm and 2.0 µm CO2 absorption bands need to be evaluated over snow. In this work, we examine the effects of a realistic, non-Lambertian surface reflection model of snow, based on snow reflectance measurements, on simulated top-of-atmosphere radiances in the wavelength bands of interest. The radiance simulations were carried out for various viewing geometries, solar angles and snow surfaces. The effect of off-glint pointing was also investigated.
There are three main findings of the simulation study. Firstly, snow reflectivity varies greatly by snow type, but the forward reflection peak is present in all examined types. Secondly, the glint observation mode was found to receive higher radiances than the nadir observation mode over snow surfaces across all the examined wavelength bands and geometries. Thirdly, the weak CO2 band had systematically greater radiances than the strong CO2 band, which could indicate its greater significance for retrievals over snow.
ESA SNOWITE is a feasibility study funded by the European Space Agency to examine how to improve satellite-based remote sensing of CO2 over snow-covered surfaces. It is a cooperative project between the Finnish Meteorological Institute, the Finnish Geospatial Research Institute and the University of Leicester. The primary aim of the project is to support the development of the planned CO2M mission.
Satellite observations of greenhouse gases (GHG) are greatly enhanced when used in conjunction with ground-based sensor networks. By using clusters of spectroscopic instruments measuring GHG column abundances at locations along the satellite overpass, critical validation data for satellite GHG measurements can be provided.
In particular, missions such as NASA OCO-3 and the upcoming UKSA-CNES MicroCarb – which will provide measurements of CO2 over cities – would be aided by the presence of such ground based networks around and within urban areas. These would act as both validation sites for satellite GHG measurements, and as a long term measurement network of GHG column abundances, improving the understanding of carbon dynamics within urban areas.
However, there has previously been a gap in the provision of such ground-based networks, owing to expense, infrastructure concerns, and the difficulty of providing autonomously acquired, high-resolution data. This issue is exacerbated in areas of restricted or minimal site infrastructure, such as city centres or remote sites of interest (e.g. peatlands, tropical forests). To fill this gap, the NERC Field Spectroscopy Facility (FSF) has developed the Spectral Atmospheric Suite (SAS), a suite of high-resolution, portable and autonomous spectroscopic instruments which can be deployed by FSF as a cluster network, available to research communities in the UK and internationally.
The SAS consists of three discrete instrument “nodes”, which can be deployed individually or as part of a network cluster. Each instrument node consists of a Fourier Transform Infrared (FTIR) spectrometer (the EM27/SUN (Bruker GmbH, Germany), spectral range: 5,000 – 14,500 cm-1), measuring the column abundances of CO2, CH4 and CO; a 2D MAX-DOAS (the 2D SkySpec (AirYX GmbH, Germany), spectral range: 300 – 565 nm), measuring the slant column densities of a range of trace gases including NO2 and SO2; an automatic weather station (Vaisala, Finland) measuring meteorological parameters required for the retrievals of GHGs and trace gases; and a sun-sky-lunar sunphotometer (CIMEL Electronic, France), measuring aerosol optical thickness. Combined, each node represents an autonomous “miniature supersite”, capable of providing long term measurements as a ground based validation site for satellite measurements of GHGs and other trace gases. Each node is portable and has a low spatial footprint, allowing for easy deployment in areas of minimal or restricted infrastructure, such as city centres or remote wetland regions.
We present here an overview of the NERC FSF Spectral Atmospheric Suite and how, as part of its current deployment until 2022 with the University of Leicester’s London Carbon Emissions Experiment, it will provide a ground based validation site for upcoming missions such as UKSA-CNES MicroCarb.
The emissions of halocarbons have profoundly modified the chemical and radiative equilibrium of our atmosphere. These halogenated compounds are known to be powerful greenhouse gases and contribute, for chlorinated and fluorinated compounds, to the depletion of stratospheric ozone and to the development of the ozone hole. Their monitoring is therefore essential. The aim of this work is to assess the potential of infrared satellite sounders operating in the nadir geometry, to contribute to this monitoring and thereby to complement existing surface measurement networks.
This work is centered on the exploitation of the measurements from the infrared satellite sounder IASI. The instrument stability and the consistency between the different instruments on the successive Metop platforms (A, B and C) are remarkable and make IASI a reference for climate monitoring. Among other things, IASI offers the potential to investigate trends in the atmospheric abundance of various species better than any other hyperspectral IR sounder. The low noise of the IASI radiances is also such that even weakly absorbing halocarbons can be identified. Recently, we managed to detect the spectral signatures of eight halocarbons: CFC-11, CFC-12, HCFC-22, HCFC-142b, HFC-134a, CF4, SF6 and CCl4. In this work we exploit the 15-year record of continuous IASI measurements to give a first assessment of the trend evolution of these species. This is done by targeting various geographical areas on the globe and examining remote oceanic and continental source regions separately. The trend evolution of the different chemical species, either negative or positive, is validated against what is observed with ground-based measurement networks and other remote sensors. We conclude by assessing the usefulness of IASI and follow-on missions to contribute to the global monitoring of halocarbons.
This paper presents the results of the CarbonCGI study, proposed by ESA and carried out by Thales Alenia Space and partners, for the observation of emissions from faint GHG sources with a high-resolution Compact Gas Imager (CGI).
Atmospheric remote sensing with CGI allows observation of atmospheric features ranging from the large scales of meteorology down to the finest scales, enabling direct observation of biogenic and anthropogenic interactions with the atmosphere. For this, multi-mission deployment is foreseen, from geostationary to low-orbit satellites as well as on airborne platforms and on the ground for mobile applications. CGI has the potential to acquire high-resolution images of gases in the spectral regions of solar emission from the UV to the SWIR, and also to image atmospheric temperature and humidity profiles in TIR spectral bands.
This paper focuses on the detection and characterisation of carbon dioxide and methane concentrations from a low-orbit satellite, for climate applications. CarbonCGI development includes simulation and experimental validation of level 0 (instrument design and acquisition chain), level 1 (data correction), level 2 (radiative transfer model), and level 4 (transport model). CarbonCGI is developed by an integrated team of scientists and engineers, in which knowledge of atmospheric physics from laboratories and scientific engineering institutes is applied to design the most efficient atmospheric remote sensor. The described CGI principle optimises the retrieval of atmospheric states from the spectral variability by acquiring specific Partially Scanned Interferograms (PSI), resulting from a joint optimisation of both the spectral bands and the Optical Path Difference range. The optical concept works at a low aperture number and provides a very long dwell time, reaching unprecedented radiometric resolution and hence high sounding precision and accuracy in a very high spatial resolution image.
The paper presents the results obtained by applying the Performance Simulation Platform, developed in the framework of the scientific chair TRACE (https://trace.lsce.ipsl.fr), to the CarbonCGI imaging and sounding performance. The obtained results highlight the capacity to carry out early mission trade-offs from acquisition chain parameters.
The sounding performance obtained by coupling the level 0-1 and level 2 models is described. After the presentation of the level 0-2 models, the paper presents the sounding performance achieved during the optimisation of the acquisition chain design. The CGI instrument design delivers an inherent solution to correct for the presence of atmospheric aerosols up to aerosol optical depths of 1. An optimised aerosol bias measurement concept and the associated models and performance are presented.
Level 0-1 activities are then summarised with the presentation of the payload design, the optical, thermal and mechanical design, an introduction to the CarbonCGI stray light model, and the CarbonCGI Line of Sight Stabilisation system derived from the ISABELA LS3 design, developed in the frame of the ISABELA TRP for ESA.
The paper concludes with a proposal for an incremental implementation plan of weak-source measurement missions based on high-resolution CarbonCGI imagers. The first step is a CarbonCGI instrument to complement the CO2M mission with observations at higher spatial resolution and smaller swath; the second step is a self-standing high-resolution observing system.
Greenhouse gas measurements by a Fourier Transform Spectrometer (FTS) were established at Sodankylä (67.4° N, 26.6° E) in early 2009 (Kivi and Heikkinen, 2016). The instrument records high-resolution solar spectra in the near-infrared spectral region. From the spectra we derive column-averaged, dry-air mole fractions of methane (XCH4), carbon dioxide (XCO2) and other gases. The instrument participates in the Total Carbon Column Observing Network (TCCON). Sodankylä is currently the only TCCON site in the Fennoscandia region. Our measurements have contributed to the validation of several satellite-based instruments. The relevant satellite missions include Sentinel-5 Precursor by ESA, the Orbiting Carbon Observatory-2 (OCO-2) by NASA (e.g., Wunch et al., 2017), the Greenhouse Gases Observing Satellite (GOSAT/GOSAT-2) mission by JAXA and the Chinese Carbon Dioxide Observation Satellite Mission (TanSat).
Comparisons with the GOSAT observations of XCH4 and XCO2 taken during the years 2009-2020 show good agreement. The mean relative difference in XCH4 has been 0.04 ± 0.02 % and the mean relative difference in XCO2 has been 0.04 ± 0.01 %. We also performed a series of AirCore flights during each season in order to compare FTS retrieval results with the AirCore measurements. The AirCore sampling system is directly traceable to the World Meteorological Organization in situ trace gas measurement scales; thus, the AirCore data can be used to calibrate remote sensing instruments. Our AirCore is a 100 m long coiled sampling tube with a volume of approximately 1400 ml. The sampler is lifted by a meteorological balloon, typically up to about 30-35 km altitude, and is filled during the descent of the instrument from the stratosphere down to the Earth's surface. Shortly after landing, we analyze the sample using a cavity ring-down spectrometer. In addition to the balloon-borne AirCore flights, we also took measurements of methane and carbon dioxide at a 50-metre tower and with a drone-borne AirCore instrument in the vicinity of the FTS site.
Kivi, R. and Heikkinen, P.: Fourier transform spectrometer measurements of column CO2 at Sodankylä, Finland, Geosci. Instrum. Method. Data Syst., 5, 271–279, https://doi.org/10.5194/gi-5-271-2016, 2016.
Wunch, D., et al., Comparisons of the Orbiting Carbon Observatory-2 (OCO-2) XCO2 measurements with TCCON, Atmos. Meas. Tech., 10, 2209-2238, https://doi.org/10.5194/amt-10-2209-2017, 2017.
The Arctic Observing Mission (AOM) is a satellite mission concept that would use a highly elliptical orbit (HEO) to enable frequent observations of greenhouse gases (GHGs), air quality, meteorological variables and space weather to address the current sparsity in spatial and temporal coverage north of the usable viewing range of geostationary (GEO) satellites. AOM evolved from the Atmospheric Imaging Mission for Northern Regions (AIM-North), which was expanded in scope. AOM would use an Imaging Fourier Transform Spectrometer (IFTS) with 4 near infrared/shortwave infrared (NIR/SWIR) bands to observe hourly CO2, CH4, CO and Solar Induced Fluorescence spanning cloud-free land from ~40-80°N. The rapid revisit is only possible due to cloud avoidance using ‘intelligent pointing’, which is facilitated by the availability of real-time cloud data from the meteorological imager and the IFTS scanning approach. Simulations suggest that these observations would improve our ability to detect and monitor changes in the Arctic and boreal carbon cycle, including CO2 and CH4 emissions from permafrost thaw, or changes to northern vegetation carbon fluxes under a changing climate. AOM is envisioned as a Canadian-led mission to be implemented with international partners. AOM is currently undergoing a pre-formulation study to refine options for the mission architecture and advance other technical and design aspects, investigate socio-economic benefits of the mission and better establish the roles and contributions of partners. This presentation will give an overview of the AOM GHG instrument, its expected capabilities and its potential for carbon cycle science and monitoring.
Hyperspectral remote sensing enables a rapid, synoptic view of volcanic plumes, allowing the detection and measurement of volatile components evolving from craters, provided their absorption bands lie within the sensor's spectral range. In the present study, an algorithm developed to calculate the CO2 columnar abundance in tropospheric volcanic plumes is presented. The algorithm is based on a modified Continuum Interpolated Band Ratio (CIBR) technique, a differential absorption approach initially developed to calculate water vapour columnar abundance. The retrieval exploits spectroscopic measurements by analysing gas absorption features in the SWIR (Short Wave InfraRed) spectral range, in particular the carbon dioxide absorption around 2 µm. Specifically, PRISMA (PRecursore IperSpettrale della Missione Applicativa) acquisitions are used for the gas retrieval. The PRISMA mission was launched by the Italian Space Agency (ASI) on March 22, 2019; the on-board spectrometers measure in two spectral ranges, the VNIR (0.4-1.0 µm) and the SWIR (0.9-2.5 µm), with a ground spatial resolution of 30 m. In this study, the inversion technique is applied to PRISMA data in order to assess the PRISMA performance for CO2 detection and retrieval. Simulations of the top-of-atmosphere (TOA) radiance have been performed using real input data to reproduce the scenes acquired by PRISMA over volcanic point sources: the actual atmospheric background of CO2 (~400 ppm) and vertical atmospheric profiles of pressure, temperature and humidity obtained from probe balloons have been used in the radiative transfer model. The results will be shown for the considered test sites of the Campi Flegrei caldera in the Campania region (southern Italy) and the Lusi volcano (on the island of Java, Indonesia), both characterized by a persistent degassing plume even though they show very different emission mechanisms: the first based on a hydrothermal system and the second on a cold mud volcanism mechanism releasing gases into the troposphere.
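For orientation, the CIBR itself can be written compactly: the radiance in the absorption channel is divided by the continuum linearly interpolated from two reference channels, and the resulting ratio is mapped to a CO2 column through a calibration curve from radiative transfer simulations. The sketch below is a generic illustration, not the actual PRISMA processing code.

import numpy as np

def cibr(l_abs, l_ref1, l_ref2, w_abs, w_ref1, w_ref2):
    # l_abs, l_ref1, l_ref2 : per-pixel radiances in the absorption channel and
    #                         in two continuum channels (w_ref1 < w_abs < w_ref2)
    a = (w_ref2 - w_abs) / (w_ref2 - w_ref1)
    b = (w_abs - w_ref1) / (w_ref2 - w_ref1)
    continuum = a * l_ref1 + b * l_ref2
    return l_abs / continuum

# For the ~2 um CO2 feature the absorption channel lies inside the band and the
# reference channels on the adjacent continuum; the columnar abundance is then
# read from a simulated CIBR-versus-column look-up table.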
Satellite observations of carbon dioxide have recently matured to the level where they can be used to estimate anthropogenic CO2 emissions of large power plants and other point sources. The true added value of these observations is gained specifically over regions that are otherwise not measured or where the reported emission inventories may be defective. However, satellite observations of CO2 are sensitive to other atmospheric pollutants, specifically aerosol particles that affect the path length of radiation through scattering and absorption. To complicate matters further, these particles are often co-emitted with anthropogenic CO2. The impact of aerosols on CO2 retrievals can be considered to some extent in the retrieval process and post-processing bias correction. Still, little attention has been dedicated to the evaluation of CO2 retrievals under the high aerosol loadings that are characteristic of megacity environments and other regions with persistently poor air quality and high aerosol optical depth (AOD).
In this work we present two approaches to investigate potential aerosol effects on OCO-2 XCO2 observations. To obtain global statistics, a co-located database of OCO-2 XCO2 (OCO-2 v10r) and MODIS Aqua AOD (L2, 10 km Dark Target) is created. For each OCO-2 pixel, the corresponding MODIS AOD value was taken from the nearest good-quality MODIS observation found within 0.2° in latitude and longitude of the XCO2 observation. The dataset consists of 5 years of observations between 2015 and 2019. This unique global dataset enables the investigation of large-scale variation patterns and regional dependencies, and also allows the identification of potentially interesting areas for more detailed study. In the local-scale approach, the aerosol effects are studied in the vicinity of urban TCCON stations that also have an operating AERONET or other sun photometer station close by. To investigate the spatial patterns, L2 MODIS Aqua 3 km Dark Target AOD is analysed together with L2 OCO-2 XCO2. Hence, in this approach a ground-based reference measurement is available for both XCO2 and AOD, in addition to the satellite observations. Aerosol vertical profiles from CALIPSO will also be analysed when an overpass over the study area is available. With this combination of observations, the potential risks of aerosol-induced biases at city scale can be assessed in detail.
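The global co-location rule described above can be sketched as a nearest-neighbour search; the snippet below is illustrative (variable names are ours) and uses a Chebyshev distance of 0.2° in latitude/longitude as the matching criterion.

import numpy as np
from scipy.spatial import cKDTree

def colocate_aod(oco2_lat, oco2_lon, modis_lat, modis_lon, modis_aod):
    # Returns the nearest good-quality MODIS AOD for each OCO-2 sounding,
    # or NaN if none lies within 0.2 degrees in both latitude and longitude.
    tree = cKDTree(np.column_stack([modis_lat, modis_lon]))
    dist, idx = tree.query(np.column_stack([oco2_lat, oco2_lon]),
                           p=np.inf, distance_upper_bound=0.2)
    aod = np.full(len(oco2_lat), np.nan)
    found = np.isfinite(dist)
    aod[found] = modis_aod[idx[found]]
    return aod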
This research will lay important groundwork for the planned Copernicus Anthropogenic CO2 Monitoring Mission, whose ultimate purpose is to support the goals of the Paris Agreement with independent emission estimates derived from satellite observations. For this purpose, it is crucial to investigate the validation of CO2 observations in urban, high-AOD environments and to establish the current state of the art and the gaps in both retrievals and validation.
Methane is the most important anthropogenic greenhouse gas after carbon dioxide. In fact, it is responsible for about one quarter of the climate warming experienced since preindustrial times. A considerable amount of these emissions comes from methane point sources, typically linked to fuel production installations. Thus, detection and elimination of these emissions represent a key means to reduce the concentration of greenhouse gases in the atmosphere.
Functional global monitoring of methane emissions is possible thanks to satellites, which capture the upwelling radiance at the top of the atmosphere in different spectral bands. One example of this technology is the Sentinel-5P TROPOMI mission, which monitors methane at a global scale with daily revisit. However, its relatively low spatial resolution cannot pinpoint methane point-source emissions with high accuracy. In contrast, the Italian PRISMA mission offers a lower temporal revisit but a finer spatial resolution of 30 m and measures the top-of-atmosphere radiance in the 400-2500 nm spectral range, where significant methane absorption features are well characterized. Therefore, the PRISMA mission can largely complement the capabilities of TROPOMI for the detection and quantification of methane at a global scale.
In this study, different methodologies for methane point-source detection and quantification using PRISMA data have been reviewed in order to determine the most accurate procedure. The review ranges from multitemporal methods, which compare data from days with a methane emission to days with no emission, to target detection algorithms such as the simple matched-filter-based algorithm applied to the ~2300 nm methane absorption window in the shortwave infrared spectral region. The accuracy of the different methodologies has been assessed under different scenarios that consider the most relevant error sources in the retrieval, such as surface brightness and homogeneity. This assessment has flagged the main areas of potential improvement of the retrieval methodologies and, consequently, several techniques have been developed that include the detection of false positives (e.g. the identification of plastics and hydrocarbons) and the minimisation of the impact of surface heterogeneity.
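For context, the classical matched filter mentioned above can be summarised in a few lines: each pixel spectrum is projected onto a target signature (the radiance change expected per unit methane enhancement) after whitening with the background statistics estimated from the scene itself. This is a generic sketch, not one of the reviewed implementations.

import numpy as np

def matched_filter(spectra, target):
    # spectra : (n_pixels, n_bands) radiances from a (sub-)scene
    # target  : (n_bands,) methane absorption signature
    # returns a per-pixel enhancement estimate (units follow the target scaling)
    mu = spectra.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(spectra, rowvar=False))
    centred = spectra - mu
    return (centred @ cov_inv @ target) / (target @ cov_inv @ target)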
In recent years at ECMWF, a series of projects was carried out focusing on developments towards the direct assimilation and monitoring of space-borne cloud radar and lidar data in Numerical Weather Prediction (NWP) models. Although active observations from such profiling instruments contain a wealth of information on the structure of clouds and precipitation, they have never been assimilated directly in any global NWP model.
To prepare the data assimilation system for the new observations of cloud radar reflectivity and lidar backscatter, several important developments were required. These included the specification of sufficiently accurate observation operators (i.e. models providing model equivalents of the observations), as well as the definition of flow-dependent observation errors, an appropriate quality control strategy and a bias correction scheme. The feasibility of assimilating CloudSat and CALIPSO data, currently the only available data from space-borne radar and lidar with global coverage, into the Four-Dimensional Variational (4D-Var) data assimilation system used at ECMWF has been investigated. Including cloud radar reflectivity and lidar backscatter in the assimilation system had a positive impact on both the analysis and the subsequent short-term forecast. Running experiments for different seasons and combining them to increase statistical significance led to promising results; improvements to the zonal-mean forecast skill scores in the short and medium ranges for large-scale variables were found almost everywhere, with the largest impact on the storm tracks and in the tropics.
The studies performed using CloudSat and CALIPSO observations prepared the ground for the assimilation of such observation types from the future EarthCARE mission. Additionally, the system developments will facilitate the monitoring of observations, both operationally and for model evaluation, as soon as observations become available after the mission launch. By using a monitoring system that combines information from observations and the model, a statistically significant drift in the measurements can be detected faster than by monitoring observations alone. The monitoring system also allows validation of the observations along the whole EarthCARE track.
Daytime Polarization Calibration Using Solar Background Signal Scattered from Dense Cirrus Clouds in the Visible and Ultraviolet Wavelength Regime
Zhaoyan Liu, Pengwang Zhai, Shan Zeng, Mark Vaughan, Sharon Rodier, Xiaomei Lu, Yongxiang Hu, Charles Trepte, and David Winker
In this presentation we describe the application of a previously developed technique that is now being used to correct the daytime polarization calibration of the CALIPSO lidar [1]. The technique leverages the fact that the CALIOP solar radiation background signals measured above dense cirrus clouds are largely unpolarized [2] due to the internal multiple reflections within the non-spherical ice particles and the multiple scattering that occurs among these particles. Therefore, the ratio of the polarization components of the cirrus background signals provides a good estimate of the polarization gain ratio (PGR) of the lidar. Using airborne backscatter lidar measurements, this technique was demonstrated to work well in the infrared regime, where the contribution from molecular scattering above dense clouds is negligible. However, in the visible and ultraviolet regime, the molecular contribution is too large to be ignored, and thus corrections must be applied to account for the highly polarizing characteristics of the molecular scattering. Ignoring these molecular scattering contributions can cause PGR errors of 2-3% at 532 nm, where the CALIPSO lidar makes its depolarization measurement. Because of the λ⁻⁴ wavelength dependence of molecular scattering, the PGR error can be even larger at the 355 nm wavelength that will be used by ESA’s EarthCARE lidar. To estimate the molecular scattering contributions to the lidar received solar background signal, a look-up table has been created using a polarization-sensitive radiative transfer model [3]. This presentation describes the theory and implementation of the molecular scattering correction, demonstrates the application of the calibration technique, and compares the results to CALIOP daytime PGR estimates derived using an onboard pseudo-depolarizer [4]. Simulation results at 355 nm will also be presented at the symposium.
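A schematic illustration of the calibration idea (hypothetical interface; the operational correction relies on the radiative-transfer look-up table mentioned above):

```python
# Schematic sketch only: for an unpolarized cloud background, the ratio of the
# two polarization channels, after removing the modelled (polarized) molecular
# contribution, approximates the polarization gain ratio (PGR).
def polarization_gain_ratio(bkg_perp, bkg_par, mol_perp, mol_par):
    """bkg_*: measured solar background above dense cirrus in the perpendicular
    and parallel channels; mol_*: modelled molecular contributions (from the
    look-up table). All inputs are hypothetical placeholders."""
    return (bkg_perp - mol_perp) / (bkg_par - mol_par)
```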
References:
1. Z. Liu, M. McGill, Y. Hu, C. Hostetler, M. Vaughan, and D. Winker, “Validating lidar depolarization calibration using solar radiation scattered by ice clouds”, IEEE Geos. and Remote Sensing Lett., 1, 157-161, 2004.
2. K. N. Liou, Y. Takano, and P. Yang et al., “Light scattering and radiative transfer in ice crystal clouds: applications to climate research,” in Light Scattering by Nonspherical Particles, M. Mishchenko et al., Eds. San Diego, CA: Academic, 2000, pp. 417–449.
3. P. Zhai, Y. Hu, J. Chowdhary, C. R. Trepte, P. L. Lucker, D. B. Josset, “A vector radiative transfer model for coupled atmosphere and ocean systems with a rough interface”, Journal of Quantitative Spectroscopy and Radiative Transfer, 111, 1025-1040, 2010.
4. J. P. McGuire and R. A. Chapman, “Analysis of spatial pseudo depolarizers in imaging systems,” Opt. Eng., vol. 29, pp. 1478–1484, 1990.
The Earth Cloud, Aerosols and Radiation Explorer (EarthCARE) is a joint mission of the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA). The mission objectives are to improve the understanding of cloud-aerosol-radiation interactions by acquiring vertical profiles of clouds and aerosols simultaneously with radiance and flux observations, for their better representation in numerical atmospheric models.
The operational EarthCARE L2 product on top-of-atmosphere (TOA) radiative fluxes is based on a radiance-to-flux conversion algorithm fed mainly by unfiltered broad-band radiances from the BBR instrument, and auxiliary data from EarthCARE L2 cloud products and modelled geophysical databases. The conversion algorithm models the angular distribution of the reflected solar radiation and thermal radiation emitted by the Earth-Atmosphere system, and returns flux estimates to be used for the radiative closure assessment of the Mission.
Different methods are employed for the solar and thermal BBR flux retrieval models. Models for SW radiances are created for different scene types and constructed from Clouds and the Earth’s Radiant Energy System (CERES) data using a feed-forward back-propagation artificial neural network (ANN) technique. LW models are based on correlations between the BBR radiance field anisotropy and the spectral information provided by the narrow-band radiances of the imager instrument on board. Both retrieval algorithms exploit the multi-viewing capability of the BBR (forward, nadir and backward observations of the same target), co-registering the radiances, providing flux estimates for every view and checking their integrity before combining them into the optimal flux of the observed target. The reference height at which the three BBR measurements are co-registered corresponds to the height where most reflection or emission takes place and depends on the spectral regime. LW observations are co-registered at the cloud top height, whereas for SW radiances the most radiatively significant height level depends strongly on the cloud. This reference height is instead selected by minimizing the flux differences between the nadir, fore and aft fluxes.
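The selection of the SW co-registration height can be pictured with the following sketch (hypothetical interfaces; flux_per_view stands in for the co-registration and radiance-to-flux conversion at a given height):

```python
# Sketch of the SW reference-height selection: for each candidate height, the
# fore/nadir/aft radiances are co-registered and converted to fluxes, and the
# height minimizing the spread between the three flux estimates is retained.
import numpy as np

def select_reference_height(candidate_heights, flux_per_view):
    """candidate_heights: list of heights (m); flux_per_view: assumed callable
    returning (F_fore, F_nadir, F_aft) for a given co-registration height."""
    spreads = []
    for h in candidate_heights:
        fluxes = np.asarray(flux_per_view(h))
        spreads.append(fluxes.max() - fluxes.min())
    return candidate_heights[int(np.argmin(spreads))]
```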
The study presented here shows an evaluation of the BBR radiance-to-flux conversion algorithms using scenes from the Environment and Climate Change Canada Global Environmental Multiscale (GEM) model. The EarthCARE L2 team has simulated three EarthCARE frames (1/8 of orbit) by running a radiative transfer code optimized for the EarthCARE instrument models over the GEM scenes. The resulting test scenes include synthetic L1 EarthCARE data that have been used by the different L2 teams to test and develop L2 products and to test the end-to-end processor chaining. The test scenes collect data for a ~6200 x 150 km swath, with 1 km along-track sampling, of a simulated EarthCARE orbit. The “Halifax” scene corresponds to an orbit crossing the Atlantic Ocean and Canada on 7 December 2015. This case includes the Sun just below the horizon over Greenland, cold air over Labrador, a cold front near Halifax, dense overcast south of Halifax, and scattered shallow convection south of Bermuda. The “Baja” scene corresponds to an orbit crossing Canada and the USA on 2 April 2015. This case includes clear and cold conditions at the northern extremity, scattered cloud through the Canadian Prairies, overcast over the Rocky Mountains, clear skies through Utah, and cirrus in Arizona and Mexico. The “Hawaii” scene corresponds to an orbit over the Pacific Ocean passing near the Hawaiian Islands on 23 June 2014.
The BBR solar and thermal flux retrieval algorithms were successfully employed to retrieve radiative fluxes over the test scenes. The approach followed to evaluate the flux retrieval algorithms includes both testing the model performance with L2 products directly derived from the geophysical properties included in the GEM simulations (ideal case, no dependence on L2 retrievals analysed) and testing the model performance with L2 products derived from the EarthCARE L2 cloud and radiance processors (operational case, dependence on the L2 lidar and imager cloud algorithms and the L2 radiance unfiltering algorithm analysed). These two exercises allow discrepancies between retrieved and simulated fluxes to be evaluated, and the sensitivity of the flux retrieval models to uncertainties in the cloud and radiance retrievals to be assessed over a wide variety of realistic samples in three different scenes.
Clouds warm the surface in the longwave (LW) and this warming effect can be quantified through the surface LW cloud radiative effect (CRE). The global surface LW CRE has been estimated using long-term observations from space-based radiometers (2000–2021), but these estimates have some bias over continents and icy surfaces. It has also been estimated globally using the combination of radar, lidar and space-based radiometers over the 5-year period ending in 2011. To develop a more reliable long time series of surface LW CRE over continental and icy surfaces, we propose new estimates of the global surface LW CRE from space-based lidar observations. We show from 1D atmospheric column radiative transfer calculations that the surface LW CRE decreases linearly with increasing cloud altitude. These computations allow us to establish simple relationships between the surface LW CRE and five cloud properties that are well observed by the CALIPSO space-based lidar: opaque cloud cover and altitude, and thin cloud cover, altitude and emissivity. We use these relationships to retrieve the surface LW CRE at global scale over the 2008–2020 time period (27 W m-2). We evaluate this new surface LW CRE product by comparing it to existing satellite-derived products globally, on instantaneous collocated data at footprint scale and on global averages, as well as to ground-based observations at specific locations. Our estimate appears to be an improvement over others as it appropriately captures the surface LW CRE annual variability over bright polar surfaces and provides a dataset more than 13 years long.
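A minimal sketch of the retrieval principle is given below; the linear coefficients and the weighting of the two cloud classes are illustrative placeholders, not the published regression values:

```python
# Sketch: surface LW CRE assumed to decrease linearly with cloud altitude z (km);
# opaque and thin clouds contribute according to their cover, thin clouds
# additionally weighted by emissivity. Coefficients a, b are placeholders.
def surface_lw_cre(cov_opaque, z_opaque, cov_thin, z_thin, emis_thin,
                   a=80.0, b=-3.5):
    cre_opaque = cov_opaque * (a + b * z_opaque)
    cre_thin = cov_thin * emis_thin * (a + b * z_thin)
    return cre_opaque + cre_thin      # W m-2, illustrative only
```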
After presenting the principle of the algorithm used to retrieve the surface LW CRE from lidar observations only, and the validation of the retrieval, we will 1) describe the modifications needed for this CALIPSO algorithm to run on ATLID/EarthCARE Level 1 data, 2) explain the complementarity of this lidar-only surface LW CRE estimate with the ATLID L2 ESA radiative product, and 3) show examples of science applications using the product built from CALIPSO data and describe the scientific benefit of extending this record by applying the algorithm to ATLID Level 1 data.
The ESA cloud, aerosol and radiation mission EarthCARE will provide active profiling and passive imaging measurements from a single satellite platform. This will make it possible to extend the products obtained from the combined active/passive observations along the ground track into the swath by means of active/passive sensor synergy, to estimate the 3D fields of clouds and to assess radiative closure. The backscatter lidar (ATLID) and cloud profiling radar (CPR) will provide vertical profiles of cloud and aerosol parameters with high spatial resolution. Complementing these active measurements, the passive multi-spectral imager (MSI) delivers visible and infrared images for a swath width of 150 km and a pixel size of 500 m. MSI observations will be used to extend the spatially limited along-track coverage of products obtained from the active sensors into the across-track direction. In order to support algorithm development and to quantify the effect of different instrument configurations on the mission performance, an instrument simulator (ECSIM) has been developed for the EarthCARE mission. ECSIM is an end-to-end simulator capable of simulating all four instruments for complex realistic scenes. Specific ECSIM test scenes have been created from weather forecast model output data. The 6000 km long frames include clouds over the Greenland ice sheet, followed by optically thick high clouds, a high ice cloud regime, and low-level cumulus clouds embedded in a marine aerosol layer below an elevated intense dust layer. These synthetic scenes make it possible to evaluate and intercompare the different cloud properties from active and passive sensors, such as cloud liquid water path or cloud effective radius. Furthermore, the input of the synthetic scenes offers the opportunity to extract the extinction profiles for each MSI pixel and to contrast them with the retrieved cloud properties and types. This approach can be used to better understand and quantify the differences between the retrieved cloud properties based on the different measurement principles (passive and active). For example, the cloud top height retrieved from MSI is an effective height of infrared emission located within the cloud, and it is important to quantify differences with respect to the geometric cloud top height to constrain the longwave cloud radiative effect. Another quantity of interest is the cloud effective radius from CPR, which is most sensitive to large particles in the cloud, while MSI is only sensitive to very small particles at the top of the cloud. The goal is to understand the differences between the cloud products from CPR, ATLID and MSI by comparison to the reference fields, to enable a consistent comparison.
The EarthCARE (Earth Clouds, Aerosol and Radiation Explorer) mission will be equipped with four co-located instruments (three from ESA and one provided by JAXA) to derive information related to aerosols, clouds, radiation and their interactions through the processing of single-instrument data as well as synergistic products.
The Payload Data Ground Segment (PDGS) is the component of the overall EarthCARE Ground Segment in charge of receiving the housekeeping telemetry and instrument source packets from the satellite via X-band, processing the packets in order to generate the different product levels and disseminating them to users within a few hours of sensing. The main products include level 0 (corrected and time-sorted packets), level 1B (calibrated and geolocated instrument science data), level 2A (geophysical parameters derived from a single instrument) and level 2B or synergistic products (geophysical parameters derived by merging information from several EarthCARE instruments). The PDGS is also in charge of the routine calibration and monitoring of the three ESA instruments, of product quality control and of the planning of payload operations. The EarthCARE PDGS consists of several components called facilities. Although the general architecture is similar to other PDGSs developed by ESA for Earth Observation missions, some evolutions were required to take into account EarthCARE-specific aspects. In particular, the synergistic nature of the mission results in a complex processing model which involves about 30 different processors. In order to streamline the integration of this large number of processors, and in anticipation of initially frequent updates, a formal modelling of the processing chain has been introduced to support automatic configuration of the processing facility. In addition, a new facility called the Level 2 TestBed has been included in the PDGS to allow processor developers to test their code in quasi-operational conditions and in an autonomous way, including the possibility to upload new processor versions without assistance from PDGS operators. The presence of a Japanese instrument on board also imposes tight dependencies between the ESA and JAXA components in terms of processing as well as payload planning.
This poster presents the functional and architectural breakdown of the PDGS and its external interfaces, including the FOS (Flight Operation Segment), ECMWF and JAXA. It details the main design drivers, including data latency, production model, data volumes, network bandwidth and interfaces with end users. The current integration status of the PDGS and its underlying facilities is also presented.
The Hybrid End-To-End Aerosol Classification model (HETEAC) [1] has been developed for the upcoming EarthCARE mission [2]. This aerosol classification model is based on a combined experimental and theoretical (hybrid) approach and allows the simulation of aerosol properties, from microphysical to optical and radiative parameters of predefined aerosol types (end-to-end). In order to validate HETEAC, an aerosol typing scheme applicable to both ground-based and spaceborne lidar systems has been developed.
This novel aerosol typing scheme, based on HETEAC, applies the optimal estimation method (OEM) to a combination of lidar-derived intensive (i.e., concentration-independent) aerosol properties to determine the statistically most likely contribution of each aerosol component to the observed aerosol mixture, weighted against a priori knowledge of the system. Four aerosol components are considered to contribute to an aerosol mixture: fine, spherical, absorbing (FSA); fine, spherical, non-absorbing (FSNA); coarse, spherical (CS); and coarse, non-spherical (CNS). These four components have been selected from a lidar-based experimental data set at 355, 532 and 1064 nm. Their optical and microphysical properties serve as a priori for the retrieval scheme and are in accordance with the ones used in the original HETEAC model, in order to ensure meaningful comparisons. In contrast to HETEAC, which is limited to observations at 355 nm only, the novel typing scheme is flexible in terms of input parameters and can be extended to other wavelengths to exploit the full potential of ground-based multiwavelength Raman polarization lidars and thus reduce the ambiguity in aerosol typing. The algorithm can therefore be applied not only to EarthCARE but also to other lidar systems providing additional optical products.
The initial guess of the aerosol component contributions needed to kick off the retrieval scheme is the outcome of a decision tree. From this initial guess, the lidar ratio (355 and 532 nm), the particle linear depolarization ratio (355 and 532 nm), the extinction-related Ångström exponent and the backscatter-related colour ratio (at the 532/1064 nm wavelength pair) are calculated (forward model). The final product is the contribution of the four aforementioned aerosol components to the aerosol mixture in terms of relative volume. Once this product passes certain quality assurance flags, it can be used to provide additional products: (a) aerosol-component-separated backscatter and extinction profiles, (b) aerosol optical depth per aerosol component, (c) volume concentration per component, (d) number concentration per component, (e) effective radius of the observed mixture and (f) refractive index of the mixture.
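For illustration, one optimal-estimation (Gauss-Newton) update for the four component contributions could look as follows; a linear forward-model Jacobian K is assumed purely for the sketch, whereas the actual forward model of the scheme is non-linear:

```python
# Hedged sketch of a single Gauss-Newton/OEM update for the state vector
# x = (FSA, FSNA, CS, CNS) relative contributions.
import numpy as np

def oem_step(x, y_obs, K, S_e, x_a, S_a):
    """x: current state (4,); y_obs: observed intensive properties (lidar
    ratios, depolarization ratios, Angstrom exponent, colour ratio);
    K: assumed-linear Jacobian (n_obs x 4); S_e, S_a: observation and a-priori
    covariances; x_a: a-priori state. All inputs are illustrative."""
    y_mod = K @ x                                    # forward model (linear assumption)
    S_e_inv, S_a_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)   # posterior covariance
    x_new = x + S_hat @ (K.T @ S_e_inv @ (y_obs - y_mod) - S_a_inv @ (x - x_a))
    return x_new, S_hat
```

Iterating such an update until convergence yields the statistically most likely contributions together with their posterior covariance.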
In this presentation, the aerosol typing scheme will be discussed in detail and applied to several case studies. The application of the algorithm to different atmospheric load scenarios will demonstrate its strengths and limitations. In addition, first results of the comparison between HETEAC and the OEM scheme will be presented.
References
[1] Wandinger, Ulla, et al., 2016: "HETEAC: The Aerosol Classification Model for EarthCARE." EPJ Web of Conferences. Vol. 119. EDP Sciences.
[2] Illingworth, A., et al., 2014: The EarthCARE satellite: The next step forward in global measurements of clouds, aerosols, precipitation and radiation. Bull. Am. Met. Soc., doi:10.1175/BAMS-D-12-00227.1.
The Broad-Band Radiometer (BBR) instrument on the future EarthCARE satellite (to be launched in 2023) will provide accurate outgoing solar and thermal radiances at the Top of the Atmosphere (TOA) obtained in an along-track configuration in three fixed viewing directions (fore, nadir and aft).
The BBR will measure radiances filtered by the spectral response of the instrument in two broad-band spectral channels: SW (0.25 to 4 µm) and TW (0.25 to > 50 µm). These radiances need to be corrected in the unfiltering process in order to reduce the effect of the limited and non-uniform spectral response of the instrument.
The unfiltering parametrization is based on a large simulated database of fine spectral resolution SW and LW radiances convolved with the spectral responses of the BBR channels. In practice, the SW and TW measurements of the BBR must be converted into solar and thermal (unfiltered) radiances. First, the LW radiance is estimated from the SW and TW measurements. Secondly, the inter-channel contaminations, i.e., the parts of the LW signal due to reflected solar radiation and of the SW signal due to planetary radiation, are accounted for. Finally, multiplicative factors are computed in order to estimate the unfiltered solar and thermal radiances from the SW and LW channels, respectively.
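The three unfiltering steps can be summarised schematically as follows; all coefficients are hypothetical placeholders standing in for the regressions derived from the simulated spectral database:

```python
# Schematic of the three unfiltering steps (coefficients are placeholders).
def unfilter(L_sw_filt, L_tw_filt, a=(0.02, -0.98, 1.0), k_sw=1.05, k_lw=1.02,
             c_solar_in_lw=0.01, c_thermal_in_sw=0.005):
    # Step 1: estimate the filtered LW radiance from the SW and TW channels.
    L_lw_filt = a[0] + a[1] * L_sw_filt + a[2] * L_tw_filt
    # Step 2: remove inter-channel contamination (reflected solar in the LW,
    # planetary radiation in the SW).
    L_sw_corr = L_sw_filt - c_thermal_in_sw * L_lw_filt
    L_lw_corr = L_lw_filt - c_solar_in_lw * L_sw_filt
    # Step 3: apply multiplicative unfiltering factors to obtain the unfiltered
    # solar and thermal radiances.
    return k_sw * L_sw_corr, k_lw * L_lw_corr
```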
Regarding the algorithm, two unfiltering algorithms have been developed for the SW, a stand-alone and an MSI-based one, and one stand-alone algorithm for the LW. The stand-alone algorithms aim to enable the unfiltering of the BBR if the MSI measurements are unavailable or degraded; unfiltering is then performed using the measured broadband radiances and a land use classification. The MSI-based algorithm makes use of the MSI cloud mask and cloud phase in the unfiltering process.
The study presented here shows an evaluation of the BBR unfiltered radiance estimation using the three synthetic test scenes (Halifax, Baja and Hawaii) created by the EarthCARE team from the Environment and Climate Change Canada Global Environmental Multiscale (GEM) model and radiative transfer data derived from them.
It is worth noting that the unfiltering is a crucial part of the BBR processing, as errors in the unfiltering will propagate to the flux (BMA-FLX product). For this reason, the unfiltering performance has been confirmed not only using the test scenes (RMSE ~ 0.5 W m-2 sr-1 for SW and LW) but also using an independent validation database for both SW and LW (RMSE < 1 W m-2 sr-1 for the SW and < 0.2 W m-2 sr-1 for the LW).
With the combination of two active instruments, a cloud radar and a high spectral resolution lidar, and a set of passive instruments, the ESA/JAXA EarthCARE mission will be the most complex satellite for aerosol, cloud and radiation measurements from space. With its so-called NARVAL payload, the German high-altitude and long-range aircraft HALO is equipped with instruments similar to those of the upcoming satellite experiment. Having the same or similar payload on an aircraft provides the opportunity to apply and test algorithms, to investigate constraints of the future satellite mission, and to develop strategies for and perform validation studies.
Since 2013, the EarthCARE-like payload (HSRL at 532 nm with polarization-sensitive channels at 532 nm and 1064 nm, Ka-band radar with 30 kW peak power, hyperspectral radiometer, and microwave radiometer) on HALO has been deployed during six flight experiments and has thus collected a large number of measurements that are currently used to prepare for the upcoming satellite mission. The measurements were performed at different locations, ranging from the tropical and subtropical North Atlantic to the extratropical North Atlantic and the European mid-latitudes. We used these measurements for comparison studies of current satellite measurements, airborne measurements and simulations, and for process studies, with the advantage of a much higher spatial resolution and/or sensitivity compared to the future spaceborne measurements. In this context, we investigated the benefits and constraints of the upcoming satellite mission and studied the effect of instrument resolution and sensitivity on the derived properties. With the combination of remote sensing and airborne in-situ measurements, we validated satellite retrievals by directly comparing retrieval output with measured properties. Looking ahead, we furthermore developed an elaborated proposal for an upcoming validation study addressing different locations and aspects of validation.
In this presentation we will give an overview of our EarthCARE preparation studies and their main results. We address different stages and aspects of satellite preparation: from the development of new strategies and methods, to sensitivity tests, and finally to the investigation of retrievals. By summarizing our lessons learned we will consolidate the insights that helped to shape ideas for a future validation campaign.
The synergy of radar and lidar from ground-based networks such as ARM and CloudNet/ACTRIS and the A-Train constellation of satellites has revolutionised our understanding of the global and vertical distribution of clouds and precipitation. However, while the complementary sensitivities of lidar to small ice crystals and radar to larger snowflakes can provide near-complete coverage of ice clouds and snow, the detection and vertical location of liquid cloud is much less certain. In mixed-phase, layered or precipitating cloud scenes the lidar is often quickly extinguished within the first layer, and while the radar penetrates most scenes its signal is dominated by larger precipitating hydrometeors. We use simulated EarthCARE measurements of midlatitude and tropical cloud scenes from a numerical weather model to show that these synergistic blind spots result in less than 25% of liquid clouds being detected by volume, representing only around 10% of total liquid water content.
As well as biasing global liquid cloud statistics and water budgets from spaceborne active remote sensing, these undiagnosed clouds cannot be ignored from a radiative perspective. In this study we use simulated EarthCARE measurements to evaluate the performance of EarthCARE’s synergistic retrieval of cloud and precipitation (ACM-CAP), which will assimilate a solar radiance channel from EarthCARE’s multi-spectral imager (MSI) as well as the cloud profiling radar (CPR) and atmospheric lidar (ATLID). We show that assuming that liquid clouds are collocated with precipitation improves the forward-modelled solar albedo in many complex cloud scenes. Even without active measurements of liquid cloud, the solar radiance and CPR path-integrated attenuation are sufficient to constrain the retrieval of a simplified profile of liquid water content, which reduces underestimates in retrieved liquid water path without introducing a significant compensating error. When the profiling retrievals at nadir and MSI imagery are used to reconstruct a 3D across-swath scene (ACM-3D), the missing liquid contributes to a mean bias error of almost 40 g m-2 with respect to the model fields, compared to around -5 g m-2 when liquid is included in the synergistic retrieval constrained by solar radiances. Finally, the radiative closure assessment (ACMB-DF) against EarthCARE’s broadband radiometer (BBR) identifies shortwave flux deficits of 50 to 100 W m-2 due to this undiagnosed liquid cloud in deep midlatitude cloud scenes, confirming that a simple assumption accounting for radar-lidar blind spots within the synergistic retrieval can result in significant improvements in retrievals of radiatively important liquid cloud.
Validation activities are critical to ensure the quality, credibility, and integrity of Earth observation data. With the deployment of advanced active remote sensors in space, a clear need arises for establishing best practices in the field of cloud and aerosol profile validation. The upcoming EarthCARE mission brings several validation challenges arising from the multi-sensor complexity/diversity and the innovation of its standalone and synergistic products. EarthCARE is a joint ESA-JAXA mission to study interactions between clouds, aerosols, and radiation and their fundamental roles in regulating the climate system. Owing to its active remote sensing payloads, i.e. Atmospheric Lidar (ATLID) and Cloud Profiling Radar (CPR), EarthCARE is capable of performing range-resolved measurements of clouds and aerosols, which are demanding in terms of validation needs and related protocols. Furthermore, special protocols are also needed for the validation of radiance measurements from the opposite viewing direction.
With the involvement of international ground-based networks and airborne facilities in the EarthCARE validation community, there will be a wealth of correlative datasets for Cal/Val purposes. Efficient coordination will be needed between the instrument PIs (orbital and suborbital), the validation teams, the algorithm teams from related missions, and the end-user community (e.g., the Climate Change Initiative and the Copernicus Earth observation programme). The building blocks of this procedure will be lessons learned from previous Cal/Val studies (including CALIPSO, CloudSat, GPM and Aeolus), as well as the well-established QC/QA procedures adopted by the related European Research Infrastructures and metrological institutes (e.g., ACTRIS, the Aerosol, Clouds and Trace Gases Research Infrastructure, and WMO-WRDC, the World Meteorological Organization World Radiation Data Centre). The approach will evolve from a review of the current literature and will be consolidated in consultation with the community at workshops and via the EarthCARE Validation portal.
The presentation will address the development status of the protocols, and explain how the broader community can participate in their formulation. Contributions from the cloud and aerosol communities are expected to gradually broaden the coverage of the validation protocols. While initially focusing on EarthCARE, the best validation practices could be extended to other current and future missions (e.g., ESA Aeolus and its follow on mission, NASA EOS/AOS i.e. Earth System Observatory / Atmosphere Observing System, and WIVERN i.e. WInd VElocity Radar Nephoscope).
Clouds and aerosols play an essential role in the Earth's radiative balance and therefore condition its temperature and possibly its evolution. Knowledge of their life cycle is thus essential for understanding the Earth's climate and also for predicting meteorological conditions. The EarthCARE mission, currently under development for launch in 2023, was designed to address these questions. To do so, it will probe the Earth's atmosphere by measuring profiles of clouds and aerosols as well as radiation, thanks to its set of on-board instruments including a radar and a lidar. The association of these two instruments is not accidental: the synergy of collocated radar and lidar measurements over an area or a transect of the atmosphere is a powerful tool for removing ambiguity about the atmospheric targets present. The AC-TC (ATLID-CPR Target Classification) product was created with this goal for EarthCARE. It is a synergistic product that combines observations from the Cloud Profiling Doppler Radar (CPR) and the high-spectral-resolution Atmospheric Lidar (ATLID) on board the EarthCARE satellite (ESA-JAXA). This product relies on the complementary nature of radar and lidar measurements to properly identify the targets (hydrometeors and aerosols) present when probing the atmosphere. Each instrument is sensitive to different parts of the particle size spectrum, with ATLID probing the smaller particles (i.e. aerosols and cloud particles) and the CPR more sensitive to the larger particles (i.e. ice cloud particles and precipitation), providing independent information (microwave or optical) in the region of overlap. The combination of their signals makes it possible to better classify the different atmospheric targets than with the single instruments. Therefore, the cloud phase, precipitation and aerosol type within the column sampled by the two instruments can be identified. This product is a crucial step for the subsequent synergistic retrieval of cloud, aerosol and precipitation properties. Furthermore, it can also be used on its own for statistical studies of atmospheric conditions, e.g. via the statistical analysis of cloud, aerosol and precipitation occurrence. The AC-TC product capitalizes on the enormous success of the CloudSat/CALIPSO satellites of the A-Train constellation and their synergistic derivatives, while providing a richer target classification due to the EarthCARE instruments.
The great benefit of EarthCARE for the global observation of the atmosphere lies in its synergistic approach of combining four instruments on one single platform. The two active instruments, ATLID (atmospheric lidar) and CPR (cloud profiling radar), deliver vertical profiles of aerosol and cloud properties. The two passive instruments, BBR (broad-band radiometer) and MSI (multi-spectral imager), extend the information by adding observations of the total and shortwave radiation at the top of the atmosphere (BBR) and spectral radiances across the swath (MSI).
The systematic combination of active and passive remote sensing on a single platform is new and offers great opportunities for synergistic retrieval approaches. Here, we focus on the synergy of the vertical profiles measured with ATLID (‘curtain’) and the horizontal information added by the MSI (‘carpet’) to provide a more complete picture of the observed scene. For this purpose, the synergistic ATLID-MSI columnar descriptor (AM-COL) was developed within the EarthCARE processing chain. The MSI input is provided by the MSI cloud (M-CLD) and MSI aerosol (M-AOT) processors; the ATLID input is calculated by the ATLID layer processor (A-LAY). Cloud and aerosol information derived on the track is combined from both instruments, and the additional ATLID information is transferred to the swath using the MSI observations. Two main results are described in the following paragraphs: the cloud top height and the Ångström exponent.
The difference between the cloud top height measured with ATLID and that retrieved from MSI is calculated along track. The obtained differences are transferred to the swath by searching for similar nearby MSI pixels. Five homogeneity criteria are used: same cloud type, same cloud phase and surface type, the reflectivity at 0.67 µm and the brightness temperature at 10.8 µm. At night, only a reduced set of criteria can be used (no cloud type and no reflectivity differences are available). Multilayer cloud scenarios have to be treated with special care and are investigated separately. In particular, thin cirrus clouds above liquid-containing clouds are hardly detectable with MSI.
The three simulated test scenes developed for EarthCARE are used intensively to test the algorithm performance. For the mixed-phase clouds and some thick cirrus clouds present in the so-called ‘Halifax’ scene, the difference between ATLID and MSI is found to be smaller than 1000 m. For multilayer clouds or large convective systems this difference increases. For homogeneous cloud coverage, the transfer of the cloud top height difference to the swath can easily be applied. The test scenes also offer the possibility to check the transfer to the swath for the more complicated multilayer scenes. The comparison with the model truth lets us estimate the performance of the synergistic product and provides an estimate of the detection limits for real data.
The aerosol optical properties are obtained at 670 nm and 865 nm (ocean only) by MSI and at 355 nm by ATLID. The ATLID-MSI synergy enables us to calculate the Ångström exponent (355/670 and 355/865) along track, adding spectral information to the single-wavelength lidar ATLID. The Ångström exponent provides additional information for the aerosol typing. Along the nadir track we can combine the vertically resolved aerosol classification from ATLID with the aerosol typing included in the MSI retrieval. Knowing the aerosol type along track as seen by both ATLID and MSI enables us to transfer the aerosol information to the swath using the MSI measurements. For this purpose, an explicit aerosol test scene was developed in addition to the three standard EarthCARE test scenes. It could be shown that the MSI-based aerosol typing agrees with the columnar aerosol classification probabilities derived from ATLID for this scene.
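The Ångström exponent follows from a pair of aerosol optical thicknesses with the standard two-wavelength formula, shown here for clarity:

```python
import numpy as np

def angstrom_exponent(tau_355, tau_ref, lambda_ref_nm):
    """Angstrom exponent for the 355 nm / lambda_ref_nm wavelength pair."""
    return -np.log(tau_355 / tau_ref) / np.log(355.0 / lambda_ref_nm)

# e.g. angstrom_exponent(0.20, 0.12, 670.0) for the 355/670 nm pair
# (illustrative optical thickness values only)
```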
The Earth Clouds, Aerosols and Radiation Explorer (EarthCARE) has the scientific goal of achieving agreement within ±10 W m-2 between average SW/LW fluxes simulated using radiative transfer models acting on the retrieved profiles of cloud and aerosol properties and the values inferred from collocated measurements made by the broadband radiometer (BBR).
The fluxes are estimated from BBR measurements at a single sun-observer geometry of the satellite using angular distribution models (ADMs). ADMs for SW radiances are created for different scene types and constructed from Clouds and the Earth’s Radiant Energy System (CERES) data using a feed-forward back-propagation artificial neural network (ANN) technique (Domenech et al., 2011).
To further improve the solar flux estimates, a new method has been developed to possibly supplement the ANN technique (Tornow et al., 2020). The semi-physical log-linear approach incorporates cloud effective radius (Reff) and cloud topped water vapor as additional parameters which can significantly influence the TOA solar flux through changes in scattering direction and absorption respectively. A comparison with the state-of-the-art solar flux retrievals obtained from CERES and GERB instruments showed significant flux differences for cloudy scenes over ocean, which has been attributed to extremes in Reff and cloud topped water vapor (Tornow et al., 2021).
In the study presented here, the new method is evaluated and compared with the ANN technique. Since EarthCARE is not yet in orbit, simulated EarthCARE frames (1/8 of the orbit) are used. The frames were created by the EarthCARE team using the Global Environmental Multiscale (GEM) model from Environment and Climate Change Canada and the ESA instrument models.
Situations with large differences are analysed and interpreted in more detail. Furthermore, it is discussed in which situations the ANN technique could be complemented by the new method.
The upcoming EarthCARE mission will deliver horizontal and vertical aerosol information from one single platform. While ATLID (atmospheric lidar) will be responsible for vertically resolved aerosol properties, horizontal, columnar information about aerosol will be provided by MSI (multi-spectral imager) measurements. For the latter, the L2 aerosol processor M-AOT has been developed. It will operationally estimate aerosol optical thickness over ocean at 670 and 865 nm and, where possible, over land at 670 nm.
Measurements in the four available MSI bands in the visible to shortwave infrared (670 nm, 865 nm, 1650 nm and 2200 nm) are used within the underlying algorithm, which consists of separate land and ocean retrieval parts. The ocean surface is parameterized following Cox and Munk (1954), and the land surface albedo is empirically parameterized using information about the vegetation type and the albedo at 2200 nm. Both algorithm parts use an optimal estimation framework whose forward operator relies on pre-calculated look-up tables generated with the radiative transfer code MOMO [Hollstein and Fischer, 2012] and on the EarthCARE Hybrid End-To-End Aerosol Classification (HETEAC) model [Wandinger et al., 2016] to ensure consistency between the ATLID- and MSI-based aerosol products.
Here, the underlying algorithm and product examples based on EarthCARE simulator test scenes will be presented, together with algorithm verification and the currently known limitations of its domain of applicability, based on retrieval tests with MODIS input data.
Cox, C. and Munk, W.: Measurements of the roughness of the sea surface from photographs of the sun's glitter. J. Opt. Soc. Am., 44, 838-850, 1954.
Hollstein, A. and Fischer, J.: Radiative transfer solutions for coupled atmosphere ocean systems using the matrix operator technique. Journal of Quantitative Spectroscopy and Radiative Transfer, 113(7), 536-548, 2012.
Wandinger, U., Baars, H., Engelmann, R., Hünerbein, A., Horn, S., Kanitz, T., Donovan, D., van Zadelhoff, G.J., Daou, D., Fischer J., von Bismarck, J., Filipitsch, F., Docter, N., Eisinger, M., Lajas, D. and Wehr, T.: HETEAC: The Aerosol Classification Model for EarthCARE. EPJ Web of Conferences, 119, 2016
Validation and calibration techniques for advanced space-borne Earth observation instruments usually rely on ground-based reference instruments to provide reference measurements able to assess the performance of the corresponding space instrument. The newly developed Alpha-lidar is one of the reference lidar instruments designed to meet all recommended requirements of the European Research Infrastructure for Short-lived Atmospheric Constituents - ACTRIS (actris.eu). The instrument aims to provide continuous data to a wide range of users in order to facilitate high-quality Earth climate research.
The Alpha-lidar is designed to provide backscatter, daytime and night-time extinction, depolarization and water vapour products (3β, 1α-daytime + 1 HSRL, 6α-nighttime, 3δ, 1 water vapour). To achieve these specifications, the instrument makes use of the rotational and vibrational Raman lines at 355, 532 and 1064 nm. The instrument is designed to achieve full overlap around 200 m for the primary lidar products, such as the raw and backscatter profiles, and can reach lower altitudes for products where signal ratios are used (such as the depolarization products).
The Alpha-lidar is split into an operational and an experimental part. The operational part is made up of three lasers and three telescopes, each emitter/receiver pair focusing on a different atmospheric property. The first receiver covers the elastic and Raman channels, the second is dedicated to the 532 and 355 nm depolarization channels and the third to the 1064 nm depolarization channels. In addition to the main lidar units, the instrument is also equipped with an experimental HSRL unit based on the iodine filtering technique at 532 nm. The entire instrument is enclosed in a custom container designed to accommodate continuous operation in all weather conditions (see Figure 1).
The data retrieved with the instrument indicate good operation in both the daytime and night-time setups. Once all quality assurance tests are finalized, the instrument will be set for operational use and included in the operational programme of the ACTRIS RI as one of the reference instruments of the CARS central facility (Centre for Aerosol Remote Sensing).
The instrument could be one of the tools used in the validation programmes of EarthCARE and other similar missions. During the conference, several lidar-derived products and associated errors, highlighting different atmospheric features, will be presented. Product examples retrieved using the ACTRIS Single Calculus Chain are presented in Fig. 2.
Acknowledgements:
The work performed for this study was funded by the Ministry of Research and Innovation through the Romanian National Core Program Contract No. 18N/2019 and by the European Regional Development Fund through Competitiveness Operational Programme 2014–2020, POC-A.1-A.1.1.1-F-2015, project Research Centre for environment and Earth Observation CEO-Terra. The research leading to these results has received funding from the European Union H2020, ACTRIS IMP grant no. 871115.
The Earth Clouds, Aerosols and Radiation Explorer (EarthCARE) mission will carry a depolarization-sensitive high-spectral-resolution lidar as well as a Doppler radar for global measurements of aerosol and cloud properties. These observations will be used in radiative transfer simulations to pursue the main objective of the mission: the radiative closure of the Earth’s radiation budget at top-of-the-atmosphere (TOA) using complementary on-board passive remote sensing observations for comparison. To achieve best possible agreements between the derived radiative fluxes from active remote sensing and passive measurements, the distributions of radiatively active constituents of the atmosphere have to be known. Especially the vertical distribution of water vapor should be precisely characterized, as it is spatially and temporally extremely variable. However, with water vapor profiles not being directly measured by EarthCARE, radiative transfer models have to rely on modeled vertical atmospheric water vapor distributions and standard atmospheric profiles.
During two airborne research campaigns over the western Atlantic Ocean, we conducted lidar measurements aboard the German HALO (High Altitude and Long Range) research aircraft above transported Saharan dust layers. All measurements indicated enhanced concentrations of water vapor inside the dust layers compared to the surrounding free atmosphere. We found that the water vapor embedded in the dust layers has a great effect on the vertical heating rate profiles as well as on TOA radiation. Hence, with the main goal of EarthCARE being the closure of the Earth’s radiation budget at TOA, particular attention has to be paid to a correct parametrization of the vertical water vapor profile and its possible radiative effects.
In our presentation, we will present the derived radiative effects of long-range-transported Saharan dust layers from EarthCARE-like remote-sensing with HALO during both boreal winter and summer. We will highlight the contribution of enhanced concentrations of water vapor in the dust layers to calculated TOA radiative effects as well as heating rates. Additionally, we compare our results to radiative transfer calculations where standard distributions of water vapor are used.
The EarthCARE satellite mission targets an improved understanding of the influence of clouds and aerosols on the global radiation budget. Toward this goal, a target accuracy of ±10 W m-2 has been defined as the threshold for closure between observed top-of-atmosphere fluxes and 3D radiative transfer simulations on spatial domains with an area of 10x10 km2. For our understanding of climate processes and other applications, closure of surface radiative fluxes is also of critical importance, but it is not currently covered by the EarthCARE mission concept. Radiative closure is, however, much more difficult to assess experimentally at the surface than at the top of the atmosphere, in particular due to the limited spatial representativeness of ground-based measurements for larger domains if instantaneous fluxes are considered. A common approach is to average observations over longer time periods or over a large number of similar situations to reduce this sampling uncertainty, but this approach is also susceptible to error cancellation. An alternative is the deployment of a dense network of radiation sensors to better sample the average radiation fluxes across a region of interest. A key advantage is the possibility to investigate deviations and assess closure on a case-by-case basis. Using observations from several past field campaigns with a low-cost pyranometer network, the feasibility of such a closure experiment for surface radiative fluxes based on EarthCARE products and processors is assessed. A method based on optimum averaging/spatio-temporal kriging is introduced to determine the sampling accuracy of a sensor network for domain-average instantaneous fluxes. For several typical cloud situations, the number of stations required to reach different target accuracies for the average flux across the EarthCARE closure domain is determined. Based on these findings, potential instrumental configurations for such an experiment are described.
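A minimal sketch of how the sampling error of the domain-average flux can be estimated for a given station layout is shown below, assuming an isotropic exponential covariance model for the instantaneous flux field (all numbers are illustrative placeholders, not campaign values):

```python
# Sampling error of the unweighted station mean as an estimator of the
# domain-mean flux, under an assumed exponential spatial covariance.
import numpy as np

def domain_average_sampling_error(stations_xy, domain_xy, sigma2=400.0, L_corr=5.0):
    """stations_xy: (n, 2) station coordinates in km; domain_xy: (m, 2) dense
    grid sampling the 10x10 km closure domain; sigma2: flux variance (W2 m-4);
    L_corr: correlation length (km). Returns the error std. dev. (W m-2)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sigma2 * np.exp(-d / L_corr)
    C_ss = cov(stations_xy, stations_xy).mean()   # mean station-station covariance
    C_sd = cov(stations_xy, domain_xy).mean()     # mean station-domain covariance
    C_dd = cov(domain_xy, domain_xy).mean()       # mean domain-domain covariance
    return np.sqrt(max(C_ss + C_dd - 2.0 * C_sd, 0.0))
```

Evaluating this quantity for an increasing number of stations indicates how many sensors are needed to reach a given target accuracy for the domain-average flux.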
Above-ground forest biomass (AGB) accounts for 70% to 90% of total forest biomass estimates, which are a central basis for carbon inventories. Estimation of AGB is critical for regional forestry and sustainable forest management. Remote sensing (RS) data and methods offer opportunities for broad-scale AGB assessments, providing data over large areas, including inaccessible places, at a fraction of the cost. Optical RS provides a good alternative to biomass estimation through field sampling due to its global coverage, repetitiveness and cost-effectiveness. Radar RS has gained prominence for AGB estimation in recent years due to its cloud penetration ability as well as the detailed vegetation structural information it provides.
In this study, the potential of C-band SAR data from Sentinel-1, L-band SAR data from ALOS PALSAR, multispectral data from Sentinel-2 and machine learning algorithms was evaluated for the estimation of AGB in a mountainous mixed forest in the eastern part of the Czech Republic. The response variable was AGB (Mg/ha) estimated from a normalized digital surface model (nDSM; Forest Management Institute, http://www.uhul.cz) and field measurements (R2 = 0.84, nRMSE = 10%). The following sets of predictors were considered for AGB modelling: (1) Sentinel-1, Sentinel-2 and ALOS PALSAR, (2) Sentinel-1 and Sentinel-2, (3) ALOS PALSAR and Sentinel-2. SAR data were used with VV and VH polarizations. The normalized difference vegetation index (NDVI), tasselled cap transformation (TC; greenness, brightness and wetness) and disturbance index (DI) were calculated from the multispectral Sentinel-2 data and, together with the single spectral bands, used as predictors. The modelling was performed with several machine-learning algorithms, including neural networks, adaptive boosting and random decision forests. The AGB models were developed for coniferous, deciduous and mixed forest types. AGB estimates for deciduous forest stands generally showed a weaker predictive capacity than those for coniferous stands for all models. The models with Sentinel-1 and Sentinel-2 predictors (case 2) gave weaker estimates compared with the models using ALOS PALSAR predictors (cases 1 and 3). The best model performance was achieved with the random decision forest algorithm and predictors derived from all three sources of satellite data: Sentinel-1, Sentinel-2 and ALOS PALSAR. The proposed methodology is applicable for Central European forest AGB mapping over large areas using satellite optical and radar data.
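A minimal sketch of the modelling step with scikit-learn is given below; the predictor matrix and AGB response are synthetic placeholders standing in for the prepared satellite predictors and the nDSM-based reference AGB:

```python
# Random forest regression sketch for AGB modelling (placeholder data only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                             # placeholder predictors
y = 150 + 30 * X[:, 0] + rng.normal(scale=20, size=300)    # placeholder AGB (Mg/ha)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)
model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)
pred = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"R2 = {r2_score(y_test, pred):.2f}, RMSE = {rmse:.1f} Mg/ha")
```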
Keywords: machine learning, ALOS PALSAR, Sentinel, forest productivity.
Acknowledgment: The study was supported by the Ministry of Agriculture of the Czech Republic, grant number QK1910150.
For the past decades, wildfires have been increasing in frequency and severity worldwide. These fires are a source of substantial quantities of CO2 released into the atmosphere. They can also lead to the destruction of natural ecosystems and biodiversity. Fires are triggered by various factors that depend on the climate regime and on the vegetation type. Despite the large number of studies conducted on wildfires, post-fire vegetation recovery is still not well understood and depends strongly on the vegetation type.
In this study, we present pre- and post-fire climate and vegetation anomalies at global scale, derived from several remotely sensed observations, such as air temperature (MODIS), precipitation (PERSIANN-CDR), soil moisture (SMOS) and terrestrial water storage (GRACE). Four remotely sensed variables related to vegetation are used and compared, from the optical domain (the enhanced vegetation index, EVI, from MODIS) to microwave vegetation opacities at wavelengths ranging from 2 to 20 cm: X-band, C-band and L-band vegetation optical depth (X-VOD, C-VOD and L-VOD), obtained from the AMSR-2 and SMOS satellites. Fires are detected with the MODIS Active Fire product (MOD14A1_M). All datasets are resampled to the SMOS grid (~25 km) and to a monthly timescale, for the period June 2010 – December 2020.
We focus our analysis on five biomes: grasslands, tropical savannas, needleleaf forests, sparse broadleaf forests and dense broadleaf forests. Anomalies of all variables are computed over the major fires of the ten-year period at global scale; the time series are then aligned on the fire date and averaged by biome.
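A sketch of this compositing step is given below (pandas-based, with illustrative column names; not the exact processing code): monthly anomalies are computed per pixel relative to its mean annual cycle, shifted so that lag 0 is the fire month, and averaged per biome.

```python
# Compositing sketch: anomalies aligned on the fire date and averaged per biome.
import numpy as np
import pandas as pd

def composite(df, var):
    """df: long table with columns 'pixel', 'biome', 'date' (monthly datetime),
    'fire_date' (datetime) and the variable of interest (all assumed given)."""
    # anomaly = value minus the pixel's mean annual cycle
    clim = df.groupby(["pixel", df["date"].dt.month])[var].transform("mean")
    df = df.assign(anom=df[var] - clim)
    # months relative to the fire date (lag 0 = fire month)
    lag = (df["date"].dt.year - df["fire_date"].dt.year) * 12 \
        + (df["date"].dt.month - df["fire_date"].dt.month)
    df = df.assign(lag=lag)
    # mean anomaly per biome and lag, one column per biome
    return df.groupby(["biome", "lag"])["anom"].mean().unstack("biome")
```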
We observe a severe drought before the majority of the fire events, in particular over forests, which generally maintain steady humidity all year. Pre-fire temperature anomalies are particularly significant in boreal needleleaf forests. In contrast, over savannas and grasslands, the pre-fire drought is slight, while an increase in biomass volume (e.g., available fuel) is thought to promote fires. As expected, C- and X-bands are more affected by sparse vegetation fires, as these frequencies are sensitive to the smaller branches and leaves, whereas L-band is particularly impacted by dense broadleaf forest fires, as it measures coarse woody elements (trunks and stems). For all biomes, the optical-based index (EVI) decreases significantly after fire but recovers quickly, as it observes only herbage and green canopy foliage. The contrasting recovery durations of L-VOD and the other variables over dense forests show that fires affect coarse woody elements in the long term, while stems and leaves resprout faster. Our study shows the potential of SMOS L-VOD for monitoring fire-affected areas as well as post-fire recovery, especially over densely vegetated areas. This study is also the first to compare multi-frequency VODs and to observe the impact of fire on the L-VOD signal.
Figure - EVI, X-, C-, L-VOD, precipitation, SM, TWS, and temperature anomalies time series, shifted on the fire date, for (a) 520 points in the grassland biome; (b) 232 points in the savanna biome; (c) 701 points in the needleleaf forest biome; (d) 69 points in the sparse broadleaf forest biome; and (e) 48 points in the dense broadleaf forest biome. The missing values are mainly due to snow filtering.
The ability to capture 3D point clouds with LiDAR sensors and advances in algorithms have enabled the explicit analysis of vegetation architecture, branching characteristics and crown structure for the accurate estimation of above-ground biomass (AGB). Geometrically accurate 3D volumes of vegetation reduce the uncertainty in AGB estimation without destructive sampling, through the application of volume reconstruction algorithms to high-resolution point clouds from Terrestrial Laser Scanning (TLS). These methods, however, have been developed and tested on temperate and boreal vegetation, with very little emphasis on savanna vegetation. Here, we test reconstruction algorithms for the estimation of AGB in a savanna ecosystem characterised by a dense shrub understory and irregular multi-stemmed trees. Leaf-off multi-scan TLS point clouds were acquired during the dry season in 2015 around the Skukuza flux tower in Kruger National Park, South Africa. From the multi-scan TLS point clouds, we extracted individual tree and shrub point clouds. Tree Quantitative Structure Models (TreeQSMs) were used to reconstruct tree woody volume, whilst voxel approaches were used to reconstruct shrub volume. AGB was estimated using the derived woody volume and wood specific gravity. To validate our method, we compared the TLS-derived AGB with allometric equations. TreeQSMs predicted AGB with a high concordance correlation coefficient (CCC) compared to the allometry reference, although tree crown biomass was overestimated, especially for the large trees. The biomass of the shrub understory was described with reasonable accuracy using the voxel approach. These findings indicate that the application of 3D reconstruction algorithms improves the estimation of savanna vegetation AGB compared to allometry references, and that combined tree and shrub woody biomass estimates of the savanna allow for calibration and validation for accurate monitoring and mapping at large spatial scales.
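A simplified voxel-based volume estimate for a shrub point cloud could look as follows (a sketch only; the voxel size strongly influences the result, and the study relies on dedicated reconstruction software):

```python
# Voxel-based volume sketch: count occupied voxels and multiply by voxel volume.
import numpy as np

def voxel_volume(points, voxel_size=0.05):
    """points: (n, 3) array of x, y, z coordinates in metres. Returns the
    occupied-voxel volume in cubic metres."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    n_occupied = len(np.unique(idx, axis=0))
    return n_occupied * voxel_size ** 3

# AGB (kg) can then be approximated as volume * wood density (kg m-3),
# e.g. voxel_volume(pts) * 900.0 for an assumed wood specific gravity of 0.9.
```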
Tropical dry forests harbor major carbon stocks but are rapidly disappearing due to agricultural expansion and forest degradation. Yet, robustly mapping carbon stocks in tropical dry forests remains challenging due to the structural complexity of these systems on the one hand and the lack of ground data on the other. Here we combine data from optical (MODIS) and radar (Sentinel-1) time series, along with lidar-based (GEDI) canopy height information, in a Gradient Boosting Regression framework to map above-ground biomass (AGB) in tropical dry forests. We apply this approach across the entire dry Chaco ecoregion (800,000 km²) for the year 2019, using an extensive ground dataset of forest inventory plots for training and independent validation. We then compare our AGB models to structural vegetation parameters such as percent tree and shrub cover, as well as Level-2 data from GEDI. Our best AGB model considered MODIS and Sentinel-1 data, whereas the additional use of GEDI-based canopy height data did not contribute substantially to model performance. The resulting map, the first high-resolution AGB map covering the entire ecoregion, revealed that there are still 4.65 Gt (+/- 0.9 Gt) of AGB in the remaining natural woody vegetation of the Chaco. Nearly three quarters of the remaining AGB in natural vegetation is located outside protected areas, and nearly half of the remaining AGB occurs on land used by traditional communities, suggesting considerable co-benefits between protecting traditional livelihoods and carbon stocks. Our models also had a much higher level of agreement with independent ground data than global AGB products, which underestimate AGB in the Chaco by up to 14-fold compared to our regional product. Our map represents the most accurate and fine-scale map for this global deforestation hotspot and reveals a substantial risk of continued high carbon emissions should agricultural expansion progress. In addition, by combining our AGB map with structural vegetation parameters, we provide for the first time for tropical dry forests an understanding of carbon stocks in relation to the vegetation structure of these ecoregions. More broadly, our analyses reveal the considerable potential of combining time series of optical and radar data for a more reliable mapping of above-ground biomass in tropical dry forests and savannas.
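As a hedged sketch of the regression step described above (the file names, predictors and hyperparameters are placeholder assumptions, not the study's actual configuration):

```python
# Gradient Boosting Regression of plot AGB from optical/radar predictors, with an
# independent hold-out for validation. All inputs below are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# X: per-plot predictors, e.g. MODIS time-series metrics and Sentinel-1 backscatter statistics
# y: plot-level AGB (Mg/ha) from the forest inventory
X = np.load("chaco_predictors.npy")   # hypothetical file
y = np.load("chaco_plot_agb.npy")     # hypothetical file

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)
model.fit(X_train, y_train)

rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"Independent-validation RMSE: {rmse:.1f} Mg/ha")
```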
Forests play a critical role in the global carbon cycle. However, estimates of forest carbon storage still have large uncertainties, especially in tropical forests. In addition, the distribution of above-ground biomass (AGB) at certain heights in forests (the vertical AGB distribution) is largely unexplored at large scales with remote sensing. Synthetic aperture radar (SAR) and light detection and ranging (lidar) are common remote sensing tools used to estimate AGB. SAR has a large-coverage imaging capability, and lidar can achieve high accuracy in measuring forest structure. The tomographic SAR mode (TomoSAR) of ESA’s upcoming P-band SAR satellite BIOMASS, together with NASA’s Global Ecosystem Dynamics Investigation (GEDI) spaceborne lidar system, will provide an unprecedented opportunity to estimate the vertical distribution of AGB at a regional or global scale. Our objective in this study was to develop and evaluate an approach to estimate the vertical distribution of AGB by combining observations from GEDI and a TomoSAR system (DLR’s airborne F-SAR) for the forest sites in Lopé and Mondah, Gabon, Africa.
According to the ESA WorldCover 10 m 2020 product, the research area in Lopé is covered by 79% trees and 19% grassland. The research area in Mondah is covered by 76% trees, 5% grassland, 15% permanent water bodies and 2% built-up area. We used P-band TomoSAR data from the F-SAR system acquired during the ESA AfriSAR 2016 campaign, together with GEDI level 2A (ground elevation, canopy top height, relative height metrics) and level 4A (footprint-level above-ground biomass) products. GEDI data were filtered based on the available quality flags and sensitivity metrics. There were 1,446 and 182 filtered GEDI footprints at 25 m resolution in Lopé and Mondah, respectively. Airborne lidar data from NASA’s Land, Vegetation, and Ice Sensor (LVIS) were used as reference.
Firstly, we applied the Capon method to reconstruct reflectivity profiles from 10 tracks of HV-polarised P-band SAR images. We normalised the tomographic intensities to [0, 1] and used 0.1 as the minimum threshold to cut the profiles. The lowest peak of each profile was regarded as the ground (relative height, RH0). The position above the highest peak where the intensity equals 0.1 was selected as RH100, considering the penetration capability of P-band microwaves. The relative heights (RH) retrieved from GEDI, TomoSAR and LVIS were compared at 25 m and 200 m spatial resolution, representing the resolution of the LVIS and GEDI height products and the resolution of future BIOMASS height products, respectively. Instead of using common height-AGB allometric relationships or power-law models based on TomoSAR intensity at a certain height level (e.g., 30 m), we attempted here to estimate total AGB from the TomoSAR profile directly. This approach also enables us to quantify the contribution of different TomoSAR height levels to the estimation of total AGB. Therefore, with GEDI AGB as the response, random forest regression was applied to estimate total AGB from TomoSAR profiles at 50 m (resolution of the LVIS AGB product) and 200 m resolution (resolution of the BIOMASS AGB product). The input features are TomoSAR intensities from 0 to 60 m in 5 m steps. These profiles were subset to start from RH0, and the intensities above RH100 were set to zero to ensure a fixed length (i.e., 13) of predictors. The samples were split into a training set (80%) and a testing set (20%). A five-fold cross-validation was carried out to test the model’s transferability, and the model with the highest coefficient of determination (R²) among the five cross-validation models was selected as the final model. In order to estimate the vertical distribution of AGB, we combined in-situ measurements and data from the Biomass And Allometry Database (BAAD) to describe the vertical AGB distribution of individual trees, modelling the crown as a sphere and the stem as a cone. These individual AGB profiles were then summed to obtain the AGB profile at the plot scale. An optimal extinction factor for the P-band microwaves was estimated based on the root-mean-square error (RMSE) between TomoSAR profiles and normalised field AGB profiles at grid level. Considering the discrepancy between TomoSAR RH100 and in-situ measured (or modelled) forest height, we subset the TomoSAR profiles corresponding to field plots using in-situ forest height rather than TomoSAR RH100. By combining the estimate of total AGB with the optimised extinction factor and the TomoSAR profiles, we derived the vertically distributed AGB of the whole research area at 50 m resolution.
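A minimal sketch of the profile-based random forest step, assuming the 13-element TomoSAR intensity profiles and GEDI L4A AGB are already co-located in arrays (file names, sample counts and model settings are illustrative):

```python
# Random forest regression of total AGB from TomoSAR intensity profiles (0-60 m, 5 m steps),
# with an 80/20 split and a five-fold cross-validation used to pick the final model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import r2_score

profiles = np.load("tomosar_profiles_0_60m.npy")   # hypothetical, shape (n_samples, 13)
gedi_agb = np.load("gedi_l4a_agb.npy")             # hypothetical, shape (n_samples,)

X_train, X_test, y_train, y_test = train_test_split(
    profiles, gedi_agb, test_size=0.2, random_state=42)

# Five-fold cross-validation; keep the fold model with the highest validation R²
best_model, best_r2 = None, -np.inf
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(X_train):
    rf = RandomForestRegressor(n_estimators=500, random_state=42)
    rf.fit(X_train[train_idx], y_train[train_idx])
    r2 = r2_score(y_train[val_idx], rf.predict(X_train[val_idx]))
    if r2 > best_r2:
        best_model, best_r2 = rf, r2

print("Held-out test R²:", r2_score(y_test, best_model.predict(X_test)))
```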
Our results show that the RH metrics from GEDI, TomoSAR and LVIS match well in the two study areas. For the cross-validation of the random forest model, the models for Lopé (R² = 0.77) and Mondah (R² = 0.81) perform similarly at 50 m resolution, while the model for Lopé (R² = 0.86) at 200 m performs better than that for Mondah (R² = 0.77). In both study sites, the R² between predictions and reference data at 200 m resolution is around 0.2 higher than at 50 m resolution when these models are extended to the whole research area. The feature importances of the random forest models in Lopé and Mondah show that the tomographic intensity between 20 m and 40 m contributes most to the total AGB. In terms of the normalised root-mean-square error (NRMSE), the forest height estimated from TomoSAR satisfies the requirement of the BIOMASS mission (BIOMASS: 30%, Lopé: 13%, Mondah: 11%), while the AGB from TomoSAR does not (BIOMASS: 20%, Lopé: 26%, Mondah: 28%). With an optimal extinction factor, the mean R² between reconstructed TomoSAR AGB profiles and their counterparts derived from field observations is 0.7. In summary, our results demonstrate the potential of combining spaceborne lidar measurements with future spaceborne TomoSAR measurements to gain more detailed insight into the vertical distribution of biomass in tropical forests and to understand performance limitations of prospective BIOMASS products.
Fire risk assessment in forest stands relies on detailed information about the availability and spatial distribution of fuels. In particular, surface fuels such as litter, downed wood, herbs, shrubs and young trees determine fire behaviour in temperate forests and constitute the primary source of smoke emissions. Remote sensing has been suggested as a potentially valuable tool to estimate the spatial distribution of fuels across large areas. However, accurately estimating surface fuel loadings in space across various fuel components using airborne or spaceborne sensors is complicated by obstruction from the forest canopy. In addition, mapping efforts have largely focused on simplified representations of fuel situations for specific modelling purposes, such as classifications into fuel types or fuel models, rather than estimating fuel loadings. In this work, we test whether the fusion of high-resolution LiDAR data (> 60 points/m²) with moderate- to high-resolution satellite imagery from the Sentinel-2 mission (10-20 m) allows the prediction of loadings of all surface fuel components using machine learning techniques. Our analysis is based on a field inventory of surface fuels in a mixed temperate forest with two dominant deciduous tree species (Fagus sylvatica, Quercus petraea) and two dominant coniferous species (Pinus sylvestris, Pseudotsuga menziesii). We produce fine-scale maps of surface fuel loadings that can form the basis for fuel management strategies as well as for calculations of fire behaviour characteristics and fire effects. Furthermore, we test how the spatial variability in surface fuel loadings is captured when broader categories such as fuel types are used as mapping units. We investigate possible relationships of overstory tree species and cover with surface fuel loadings to reach more general conclusions about predictors of surface fuel loadings in the temperate forests of central Europe. Our study contributes to a better understanding of fuel-related fire risk in temperate forests, which can help in developing appropriate forest management decisions and fire-fighting strategies.
Forests are essential in maintaining healthy ecosystem interactions on Earth. Forests cover around 30% of the world’s land area (FAO 2020), are estimated to contain over 500 Pg of above-ground live biomass (Santoro et al. 2021) and represent a net sink of −7.6 ± 49 GtCO2e yr−1 (Harris et al. 2021). This makes them a crucial asset in the fight against climate change. Reliable information on forest biomass and carbon fluxes is needed to meet the reporting requirements of national and international policies and commitments such as the Paris Agreement on Climate Change and the United Nations’ Sustainable Development Goals (Herold et al. 2019).
The Forest Carbon Monitoring project (https://www.forestcarbonplatform.org/) for the European Space Agency is developing Earth Observation (EO) based user-centric approaches for forest carbon monitoring. Different stakeholders have a common challenge to monitor forest biomass, but specific requirements vary between users. Policy-makers need information to make better decisions; public organizations need information for national and international level reporting. Companies require means to respond to increasing monitoring requirements, and tools for carbon trading. To support forestry stakeholders in these requirements, the project aims to develop a prototype of a monitoring platform which offers:
• A selection of statistically robust monitoring approaches designed for accurate forest biomass and carbon monitoring for varying large and small area requirements.
• Cloud processing capabilities, unleashing the potential of the increased volumes of high-resolution satellite data and other large datasets for forest biomass and carbon monitoring.
In this presentation, we will give an overview of the project’s status, first results and further development. We will specifically highlight the research efforts to be undertaken in this project to improve usability of Earth Observation (EO) in meeting the varying user needs in forest biomass and carbon monitoring. The project started with an extensive review of policy needs and users’ technical, methodological and data requirements. Project user partners were interviewed for detailed requirements. This information was reflected against the current state-of-the-art of EO based forest carbon monitoring methods to identify the potential and limitations of EO based forest biomass monitoring. During the first year of the project, different approaches for data processing, biomass estimation and uncertainty assessment have been tested and evaluated. During the second year of the project, three different types of demonstrations will be conducted and validated:
• Local level demo designed to meet private company and other small area requirements.
• Provincial to national level demo aimed primarily at administrative agencies, often using National Forest Inventory (NFI) based approaches.
• Continental level demo, aiming to meet the needs of international organizations and other communities requiring continental level information.
The underlying policy and user requirements analysis, including user interviews, highlighted the variety of requirements that forestry stakeholders have towards forest monitoring in general and towards biomass and carbon in particular. The needs could be coarsely grouped according to the three different types of demonstrations. Particularly the private companies with smaller interest areas need basic forest structural variables (e.g. basal area, diameter, height, volume) as much as, or even more than, forest biomass and carbon data. These basic forest variables support their forest management decisions, but also allow biomass or carbon flux estimation when required. Public and international users, on the other hand, are more specific in their requirements regarding the variables of interest, as these are defined by the policy reporting requirements. For national-level organizations, existing approaches are heavily based on NFI field data, and any supporting EO-based approaches need to be able to complement the existing monitoring systems in a productive manner. All users raised the importance of reliable and accurate monitoring and reporting of changes in forest biomass.
Due to the large amount of research conducted on the increased volume and variety of high spatial resolution EO data (see e.g. Miettinen et al. 2021), combined with the processing capabilities enabled by cloud processing environments, the scientific readiness of EO-based forest biomass monitoring is rising fast to the level required to meet the user requirements. However, not all of the approaches are ready for operational use and should be further developed. Particular attention needs to be given to the fact that in operational circumstances the available datasets and monitoring conditions are rarely optimal, affecting the quality and consistency of the outputs. Key research issues that need more investigation to properly respond to the user requirements include:
• In an operational system responding to user needs, robust and transparent uncertainty assessment approaches and validation procedures are crucial. Reference data availability is rarely optimal in an operational setting, requiring the development of several uncertainty assessment approaches and validation procedures to be applied according to the available datasets.
• In direct growing stock volume and biomass estimation, further development is needed on the utilization of multi-temporal and multi-sensor datasets, combined with improved model calibrations. Approaches such as that developed by Santoro et al. (2021) have proven useful for global-level analyses; improved pixel-level accuracies would enable the derivation of reliable results for smaller interest areas and comparison between two time steps of mapping.
• For basic forest structural variable estimation, the availability and suitability of field reference measurements is a crucial issue and better integration with NFI data should be sought. Further improvements are also pursued e.g. from combined use of optical and radar datasets, as well as utilization of variable-specific estimation methods.
• A key feature of the platform to be developed in this project is the integration of ecosystem simulation models into the system. The calibration of these models for different tree species and site conditions is still a significant knowledge gap for European and particularly for global application. By means of data assimilation, the utilization of a modelling framework also allows the integration of multiple data sources for forest monitoring, enabling the set-up of a continuously updating monitoring system. This is a major long-term area of development for forest biomass and carbon monitoring.
The main results achieved in each of the research areas listed above will be reported in the presentation. Overall, the knowledge required to set up the planned Forest Carbon Monitoring platform exists. However, the research currently conducted in the above-listed topics aims to improve the reliability, usability and integration of EO-based approaches to a level that would enable wider adoption of EO-based approaches for forest biomass and carbon monitoring by forestry stakeholders. Although this project focuses on biomass and carbon monitoring, a forest monitoring platform should ultimately have a broader focus than carbon and also cover the effects of climate change, biodiversity, forest health, damages, invasive alien species, forest management, and biomass use.
References:
FAO (2020) The State of the World's Forests 2020: In brief – Forests, biodiversity and people. Rome: FAO & UNEP. doi: 10.4060/ca8985en. ISBN 978-92-5-132707-4.
Harris, N.L., Gibbs, D.A., Baccini, A. et al. (2021) Global maps of twenty-first century forest carbon fluxes. Nature Climate Change 11, 234-240. doi: 10.1038/s41558-020-00976-6.
Herold, M., Carter, S., Avitabile, V. et al. (2019) The Role and Need for Space‑Based Forest Biomass‑Related Measurements in Environmental Management and Policy. Surveys in Geophysics 40: 757–778. doi: 10.1007/s10712-019-09510-6.
Miettinen, J., Rauste, Y., Gomez, S., et al. (2021) Compendium of Research and Development Needs for Implementation of European Sustainable Forest Management Copernicus Capacity; Version 2. Available at: https://www.reddcopernicus.info/wp-content/uploads/2021/06/REDDCopernicus_RD_Needs_SFM_V2.pdf
Santoro, M., Cartus, O., Carvalhais, N. et al. (2021) The global forest above-ground biomass pool for 2010 estimated from high-resolution satellite observations. Earth System Science Data 13: 3927–3950. doi: 10.5194/essd-13-3927-2021
Modern agriculture should combine the needs of productivity with those of environmental, economic and social sustainability, in a climate context made uncertain by the effects of climate change. Information useful for implementing advanced and integrated monitoring and forecasting systems, to promptly identify the risks and impacts of calamities and crop practices on agricultural environments, is therefore essential. Satellite Earth observation data have proven to be well suited for these tasks because they cover wide areas at different spatial resolutions and frequent revisit times, allow the collection of historical series for long-term analysis, and are kept timely by the continuous acquisitions of the Copernicus constellations. Finally, from an economic point of view they are becoming more affordable thanks to the provision of free satellite data and dedicated software for their processing and display.
Agricultural ecosystems are characterized by strong variations within relatively short time intervals. Depending on the observation period, the agricultural scenario can appear completely different, due to differences in biomass and phenological cycle, which can be driven by cultivar and agricultural practices as well as weather conditions. These dynamics are challenging for crop monitoring, and knowledge of the vegetation status can deliver crucial information that can be used to improve classifier performance.
In order to account for these changes in agricultural vegetation and soil status, a multitemporal approach based on the study of time series of SAR indices can be successful. Time series of satellite images offer the opportunity to retrieve dynamic properties of target surfaces by investigating their spectral properties combined with temporal information on their changes.
This research work was carried out using SAR images from the Sentinel-1 (VV and VH polarizations) and COSMO-SkyMed (HH polarization for Himage and VV+HH polarizations for PingPong) satellite sensors, which have been collected for a few years over an agricultural area in central Italy. The sensitivity of backscattering and related polarization indices at both C- and X-bands was investigated and assessed in several experiments. Both frequencies proved sensitive to crop growth, although with different behaviours according to crop type, the backscatter being influenced by the two phenomena of absorption and scattering caused by the dimensions of leaves and stems. In particular, crops characterized by large leaves and thick stems cause an increase in backscattering as the plants grow and the biomass increases, whereas crops characterized by narrow leaves and thin, dense stems cause a decreasing trend of backscattering during the growth phase. Typical representatives of these two types of crops are sunflower for the first case and wheat for the second one.
First of all, an accurate crop classification was performed in order to identify the various crop types responsible for the different backscatter behaviours. The backscattering trends were then simulated using simple electromagnetic models based on radiative transfer theory. Subsequently, algorithms based on machine-learning approaches, in particular Neural Network methods, were implemented for estimating crop biomass from multi-frequency and multi-polarization SAR data at C- and X-band.
To this end, an "experimental + model driven" approach was adopted. In detail, the ANN training was based on subsets of experimental data combined with model simulations, while testing and validation were carried out using the remaining part of the experimental data. This strategy preserved the statistical independence between training and validation sets, while also overcoming the site dependency of data-driven approaches based on experimental data only, thus ensuring some generalization capability of the proposed algorithms.
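A minimal sketch of this training strategy, with placeholder arrays for the experimental and simulated samples and a generic multilayer perceptron standing in for the actual ANN (all names and settings are illustrative assumptions):

```python
# "Experimental + model driven" training: the ANN learns from a subset of measured samples
# augmented with radiative-transfer simulations, and is validated on the remaining,
# statistically independent experimental samples.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

X_exp = np.load("sar_features_experimental.npy")   # hypothetical: measured backscatter/indices
y_exp = np.load("biomass_experimental.npy")        # hypothetical: measured crop biomass
X_sim = np.load("sar_features_simulated.npy")      # hypothetical: model-simulated backscatter
y_sim = np.load("biomass_simulated.npy")           # hypothetical: biomass used in simulations

rng = np.random.default_rng(0)
idx = rng.permutation(len(X_exp))
train_exp, test_exp = idx[: len(idx) // 3], idx[len(idx) // 3 :]

# Training set: experimental subset plus simulated samples
X_train = np.vstack([X_exp[train_exp], X_sim])
y_train = np.concatenate([y_exp[train_exp], y_sim])

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)

# Validation on the held-out experimental samples only
print("R² on held-out experimental data:", r2_score(y_exp[test_exp], ann.predict(X_exp[test_exp])))
```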
Although still preliminary, the results obtained are encouraging, confirming the peculiar sensitivity of each frequency to different vegetation features, and enabling the mapping of vegetation biomass in the test area with satisfactory accuracy.
Forests cover an estimated 31% of the Earth's land surface and therefore constitute a significant part of the biosphere. They fundamentally impact the carbon cycle, as vegetation absorbs carbon from the atmosphere and stores it by building up new biomass during its natural growth.
Forests also majorly affect the local water-cycle, as the transpiration process redistributes ground-water into the atmosphere, impacting air temperature and weather in the process.
Forests are also critical for biodiversity preservation: an estimated 80% of all known terrestrial flora and fauna live in them. Similarly, about 880 million people collect or produce fuel from wood, while 90% of people living in extreme poverty depend on forests for their livelihoods.
To accurately estimate forest parameters such as canopy height (CHM) and above-ground biomass (AGB), it is common practice to measure them manually on-site. This process can be either invasive, when individual trees are cut down to precisely assess their properties, or non-invasive, when a less intrusive approach is preferred over absolute accuracy.
The process is very expensive and time consuming, especially in remote areas. Therefore, in-situ measurement campaigns are feasible only for small surveys.
Airborne LiDAR systems also remain impractical and expensive when both large-scale and low-revisit-time measurements are required, while spaceborne ones do not yet allow for the retrieval of wall-to-wall measurements.
As a consequence, spaceborne imaging systems for Earth observation (EO) have gained wide interest in recent decades, as a large range of sensors and techniques is available that delivers remote-sensing data at very large scales and with short revisit times. Since such data do not directly quantify forest parameters, it is necessary to model the relationship between the acquired data and the on-ground forest parameters.
Allometric equations are commonly used to indirectly relate forest parameters to RS data, but their parameters must be tuned to the specific forest types and geographic locations to achieve good performance.
More sophisticated, physics-based modelling approaches have also been studied for the regression of forest parameters.
These tend to achieve high accuracy in their estimates, while retaining great spatial resolution.
To obtain these results, large amounts of data, auxiliary information or ground reference samples are required to invert the models.
With the recent advancements in machine learning and computer vision techniques, and the availability of large dataset collections from EO sensors, new approaches to forest parameter regression are starting to be explored.
Deep learning architectures have already found great success for classification tasks, as they analyze the spatial context information to generate higher level abstractions, producing features which typically possess a larger descriptive and discriminative content than both the input imagery and hand-crafted features.
On the other hand, comparatively little work still exists regarding the regression of physical and biophysical parameters from RS data, presumably due to the limited availability of large quantities of reference-data required for supervised training.
Aiming at providing large-scale, frequently updated CHM and AGB forest parameter metrics, our research effort focuses on overcoming the aforementioned limitations by proposing a multi-modal CNN-based regression framework, requiring only a single set of either single- or multi-source satellite imagery as input.
This multi-sensor approach represents a flexible solution for the continuous monitoring of forests when one or more input data sources are unavailable, and otherwise achieves the best possible performance. In particular, we focus on combining high-resolution Sentinel-2 optical imagery with TanDEM-X-derived interferometric SAR (InSAR) products, as they provide fundamentally complementary information and have been demonstrated to correlate well with forest parameters. The proposed data-driven multi-sensor approach consists of a deep multi-branch CNN architecture, where each modality is associated with a separate feature extractor (encoder).
The spatial context extracted from these branches is then fused to supply a rich set of input features to a shared regression branch. For this we use a so-called cross-fusion approach, a dedicated convolutional architecture that fuses the different modalities through a set of convolutions and concatenations.
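A minimal sketch of such a two-branch architecture with a simple convolutional fusion stage (channel counts, layer depths and the fusion design are illustrative assumptions, not the authors' exact network):

```python
# Two-branch CNN regressor: separate encoders for Sentinel-2 and TanDEM-X InSAR inputs,
# feature concatenation, convolutional fusion, and a per-pixel regression head.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU())

class TwoBranchRegressor(nn.Module):
    def __init__(self, s2_bands=10, insar_bands=2):
        super().__init__()
        self.enc_s2 = nn.Sequential(conv_block(s2_bands, 32), conv_block(32, 64))        # optical branch
        self.enc_insar = nn.Sequential(conv_block(insar_bands, 32), conv_block(32, 64))  # InSAR branch
        self.fusion = nn.Sequential(conv_block(128, 64), conv_block(64, 32))             # fuse concatenated features
        self.head = nn.Conv2d(32, 1, 1)                                                  # per-pixel AGB (or CHM) output

    def forward(self, s2, insar):
        fused = torch.cat([self.enc_s2(s2), self.enc_insar(insar)], dim=1)
        return self.head(self.fusion(fused))

# Example forward pass on a single 32x32 pixel patch
model = TwoBranchRegressor()
agb_map = model(torch.randn(1, 10, 32, 32), torch.randn(1, 2, 32, 32))
print(agb_map.shape)  # torch.Size([1, 1, 32, 32])
```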
To assess the capability of the multi-branch architecture to fuse Sentinel-2 and TanDEM-X data, and the regression performance of our framework, four tropical regions in Gabon, Africa, have been considered. The corresponding reference data were acquired in the context of the 2016 AfriSAR campaign and consist of AGB maps derived at a ground sampling distance of 50 m from airborne LiDAR measurements by fitting allometric equations to dedicated field-plot measurements.
We expanded the analysis period from mid 2015 to early 2017, since in 2016 only one Sentinel-2 satellite was available, which, combined with the extended cloud coverage over tropical regions, meant that only a small amount of imagery would have been available. We assumed that the changes in biomass are negligible within this time frame, as mainly tropical primary forest is considered.
During the learning phase, the network was trained on 32x32 pixel patches, using the mean square error (MSE) of the prediction as loss function for the backpropagation step. A validation set was used to select the best performing network across 10 training iterations. Finally, a separate test set was used to provide unbiased accuracy assessments.
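A minimal sketch of this training procedure, assuming a model with the two-input forward pass of the previous sketch and data loaders yielding (Sentinel-2 patch, InSAR patch, reference AGB) batches; all names and hyperparameters are illustrative:

```python
# Patch-based training with an MSE loss; the weights with the lowest validation loss are kept.
import copy
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs=50, lr=1e-3, device="cpu"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    best_state, best_val = None, float("inf")
    for _ in range(epochs):
        model.train()
        for s2, insar, agb in train_loader:                     # 32x32 pixel patches
            opt.zero_grad()
            loss = loss_fn(model(s2.to(device), insar.to(device)), agb.to(device))
            loss.backward()
            opt.step()
        # validation pass: keep the best-performing weights
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(s2.to(device), insar.to(device)), agb.to(device)).item()
                      for s2, insar, agb in val_loader) / max(len(val_loader), 1)
        if val < best_val:
            best_val, best_state = val, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model
```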
Preliminary results in Gabon using Sentinel-2 optical and TanDEM-X interferometric SAR products are promising, showing agreement with the underlying assumptions and expectations. The root mean square error (RMSE) obtained on the test set is equal to 70.2 Mg/ha with a coefficient of determination R²=0.73, which is in line with the state-of-the-art methods.
We expect further optimization of the network and a more representative training data set to further improve the estimation accuracy, laying the groundwork for an effective tool for monitoring forest resources.
Fire danger is a description of the combination of both constant and variable factors that affect the initiation, spread, and ease of controlling a wildfire. The UK routinely experiences wildfires, typically with spring and mid/late summer peaks in occurrence, though winter wildfires do occur. In recent years, large-scale wildfire events in the UK have led to heightened concern about their behaviour and impacts (e.g. the Saddleworth Moor and Winter Hill wildfires in 2018, and Marsden Moor in February 2019). For instance, almost 260,000 wildfire incidents were attended between 2009/10 and 2016/17 in England alone (avg. 32,000/year), requiring over 300,000 hours of Fire and Rescue Service (FRS) attendance. In addition, the UK has an unusually complex fire regime which incorporates traditional management burning (Harper et al 2018) and episodic small- to large-scale wildfires. While the largest wildfires (in terms of burned area) occur on mountain, heath and bog (Forestry Commission England 2019), the largest number of wildfires occur in built-up areas, in particular in the rural-urban interface (RUI).
To assess, manage and mitigate wildfire impacts, the likelihood of uncontrollable wildfires (Fire Danger) and the risk that they pose across the UK must be quantified. This project therefore aims to establish and test the scientific underpinning and key components required to build an effective, tailored UK Fire Danger Rating System (FDRS) for use in establishing the likelihood and impact of current and future fire regimes. In order to accomplish this objective, we will: (i) produce UK fuel (i.e. flammable biomass) maps at the national, landscape and site level, and develop a site-level understanding of fuel structure; (ii) assess the moisture regimes in key fuel types across UK landscapes; (iii) determine the flammability, energy content and ignitability of UK fuels to establish UK fuel models; (iv) determine the ranges of UK fire behaviour for key fuel types; (v) identify wildfire hotspots, with consideration of the assets and communities at risk, under current and future climate scenarios; and (vi) incorporate stakeholder knowledge and resources as an integral part of research delivery and impact generation.
In this presentation, we firstly provide an overview of the different components of the project, and secondly explain in detail the techniques employed to map static fuel types over the UK for the year 2018. Fuels correspond to vegetation classes with similar fire behaviour, represented as the biomass contributing to the spread, intensity and severity of wildfires (Chuvieco et al 2003, Burgan et al 1998). Our fuel type mapping is based on machine learning approaches that combine different satellite data sources (Sentinel-1 and -2, Landsat-8 and ALOS PALSAR-2) to generate both height and above-ground biomass (AGB) maps at the national level at 10 m resolution. The national height map is based on all the available LiDAR data in the country provided by Digimap, and the national AGB map is produced with the contribution of Forest Research and the National Forest Inventory. The resulting maps will be analysed together with the UK Centre for Ecology and Hydrology Land Cover Map for 2018 to produce the UK national static fuel type map for 2018.
Understanding the role of hemiboreal forestland in the continental carbon cycle requires reliable quantification of its growing stock (forest biomass) at the regional scale. Remote sensing complements traditional field methods, enabling indirect fine-scale estimation of forest 3D structure parameters (primarily tree height) from high-density 3D point clouds while avoiding destructive sampling. In addition, carbon accounting programs and research efforts on climate-vegetation interactions have increased the demand for canopy height information, an essential parameter for predicting regional forest biomass [1]. Unfortunately, relatively high acquisition costs prevent airborne laser scanning (ALS), the most efficient and precise tool, from being used regularly to map forest growing stock and dynamics. Therefore, in the last decade there has been increasing interest in using very high resolution (ground sample distance (GSD) < 0.5 m) satellite-derived stereo imagery (VHRSI) to generate canopy height models (CHM) analogous to LiDAR point clouds to support forest inventory and monitoring. Despite the wide range of VHRSI sensors on the market (GeoEye, WorldView, etc.), the performance of image-derived CHMs for retrieving forest inventory data in various geographical regions is still not fully understood [2]. Moreover, while ALS can penetrate the forest canopy and characterise the vertical distribution of vegetation, VHRSI image-based point clouds only represent the non-transparent outer “canopy blanket” of the dominant trees.
Thus, the present study assesses the potential of VHRSI sensors for an area-based prediction of growing stock (m3 ha-1) by deriving the main forest canopy height metrics from image-based point clouds and validating against the Latvian National Forest Inventory (NFI) data. The study area represents a typical hemiboreal forestland pattern across the eastern part of Latvia with predominantly mature, dense, closed-canopy evergreen pine, spruce and deciduous birch, black alder tree species.
The study workflow was divided into two stages. During the first stage, the study: (1) evaluated and compared the vertical accuracy and completeness of CHMs derived from airborne and VHRSI stereo imagery against reference LiDAR data; (2) analysed the differences in the CHM height estimates associated with different tree species; and (3) examined the effect of sensor-to-target geometry (specifically the base-to-height ratio) on matching performance and canopy height estimation accuracy [3]. As a result, the study confirmed a tendency towards canopy height underestimation for all satellite-based models. The image-based CHMs of forests dominated by broadleaf species (e.g., birch and black alder) showed higher efficiency and accuracy in canopy height estimation and completeness than those of trees with a conical crown shape (e.g., pine and spruce). Furthermore, this research has shown that determining the optimum base-to-height (B/H) ratio is critical for canopy height estimation efficiency and completeness using image-based CHMs. The study found that stereo imagery with a B/H ratio of 0.2–0.3 (or a convergence angle range of 10°–15°) is optimal for image-based CHMs in closed-canopy hemiboreal forest areas.
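For reference, and assuming a symmetric stereo geometry (an assumption made here for illustration, not taken from the study), the base-to-height ratio and convergence angle quoted above are linked by the standard photogrammetric relation:

```latex
% Symmetric stereo geometry: base B, flying/orbital height H, convergence angle \gamma
\frac{B}{H} = 2\,\tan\!\left(\frac{\gamma}{2}\right)
% e.g. \gamma = 10^{\circ} \Rightarrow B/H \approx 0.17, \qquad \gamma = 15^{\circ} \Rightarrow B/H \approx 0.26
```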
At the second stage (currently being implemented), the study: (1) establishes allometric relationships between field-derived (harvester data) individual tree volume and tree height; (2) uses estimates from individual-tree LiDAR measurements as training/reference data of growing stock for the study area plots; (3) utilises a two-phase analysis that integrates both individual tree detection and area-based approaches (ABA) for precise forest growing stock estimation using CHMs derived from airborne and VHRSI stereo imagery; and (4) assesses the effect of ABA plot size on the performance and accuracy of image-based CHM models. The main goal of this study stage is to demonstrate that, where field-plot (NFI) data are spatially limited, it is possible to use a hierarchical integration approach to upscale forest growing stock estimates from individual trees to broader landscapes [4]. The proposed method of mapping forest growing stock based on image-derived canopy height metrics will also be of great importance for practical application and as an auxiliary tool for planning and managing forestry. However, compared to LiDAR, it is vital to remember that optical sensors are strongly influenced by solar illumination and by sun-to-sensor and sensor-to-target geometry. Insufficient sunlight during the winter season and clouds in the summer season sometimes restrict the use of satellite sensors, making image-based vegetation monitoring problematic. The positive results of this study will facilitate Latvian regional forest growing stock inventories, monitoring and mapping by using VHRSI sensors as an adequate low-cost alternative to LiDAR data.
1. Fang, J.; Brown, S.; Tang, Y.; Nabuurs, G.-J.; Wang, X.; Shen, H. Overestimated Biomass Carbon Pools of the Northern mid- and High Latitude Forests. Clim. Change 2006, 74, 355–368, doi:10.1007/s10584-005-9028-8.
2. Fassnacht, F.E.; Mangold, D.; Schäfer, J.; Immitzer, M.; Kattenborn, T.; Koch, B.; Latifi, H. Estimating stand density, biomass and tree species from very high resolution stereo-imagery-towards an all-in-one sensor for forestry applications? Forestry 2017, 90, 613–631, doi:10.1093/forestry/cpx014.
3. Goldbergs, G. Impact of Base-to-Height Ratio on Canopy Height Estimation Accuracy of Hemiboreal Forest Tree Species by Using Satellite and Airborne Stereo Imagery. Remote Sens. 2021, 13, 2941, doi:10.3390/rs13152941.
4. Goldbergs, G.; Levick, S.R.; Lawes, M.; Edwards, A. Hierarchical integration of individual tree and area-based approaches for savanna biomass uncertainty estimation from airborne LiDAR. Remote Sens. Environ. 2018, 205, 141–150, doi:10.1016/j.rse.2017.11.010.
The primary science objective of ESA’s Climate Change Initiative Biomass project is the provision of global maps of above-ground biomass for four epochs (mid 1990s, 2010, 2017 and 2018) and, based on these, to support the quantification of above-ground biomass change. Biomass in this context is above-ground forest biomass (AGB), which is defined following the FAO as the dry weight of live organic matter above the soil, including stem, stump, branches, bark, seeds and foliage, per unit area, expressed in t/ha. AGB is also an Essential Climate Variable (ECV) within the Global Climate Observing System (GCOS).
Part of the project was a Biomass Change mapping workshop held in late 2020. Due to the COVID-19 pandemic, the workshop was organized as a virtual event from 19 October to 6 November 2020. This virtual technical workshop enabled scientists from around the globe engaged in biomass change mapping to jointly formulate the underlying principles of forest biomass change estimation and the challenges connected with it, and to develop meaningful estimates of the accuracy of such change measures.
Special circumstances - the pandemic - required special measures, in this case the virtual format of the workshop, which had to accommodate different time zones. The virtual workshop ran over a period of three weeks, with active participation restricted to short periods within that timeframe. A domain was acquired and a dedicated website was created. A limited number of online presentations was made available on the first day of the workshop; these had to be watched prior to the live online discussions on the different topics (3 x 10 minutes per topic). Throughout the workshop, further discussions were also possible via a dedicated discussion forum. To make participation possible for attendees from different parts of the world, all discussion rounds were hosted live in specific time slots. In this way, every interested participant of the workshop was given the opportunity to attend at least one discussion round per week.
The first week addressed issues related to “Defining and quantifying biomass change”. Three subtopics were selected: 1) the nature of change; 2) change on the ground (e.g. linking traditional inventories with EO, standardisation of change descriptions and metrics, permanent plots with repeat coverage, biome-based allometric models); and 3) assessing the accuracy of AGB change estimates. The second week of the virtual workshop addressed questions related to “Biomass change from space”, with the three subtopics: 4) change algorithms and methodologies; 5) space and time considerations; and 6) validation of change.
During the workshop, a number of key questions concerning biomass change in general were jointly formulated:
How can global AGB best be mapped with respect to the controls on AGB change and to assess maximum biomass potential (considering climate, topography, latitude, flora and fauna)? And is this the best way to consider the different controls on biomass amounts (with respect to soils, air temperature, water resources, species distributions)?
Should biomass change be understood as relative to previously recorded amounts or to maximum site potential amounts? This leads to the next question: under which constraints is the detection of biomass change less relevant (e.g. in old-growth forests, or for temporary biomass reduction by thinning), and how can we define a threshold for saying that biomass change has occurred?
How do we best describe the time-scales of biomass change ranging from rapid losses within a few days to weeks (deforestation, thinning) to yearly or decadal changes connected with forest growth?
This led to the overarching question: what is the best framework to use and follow for global biomass change classification?
All these issues are important for the continuation of the ESA CCI Biomass initiative. But the outcomes may also serve as a guideline for future fields of research and method development, which can be of use for the upcoming ESA P-band Biomass mission, now planned for launch in 2023.
Reed belts are an important subclass of aquatic vegetation as they represent some of the most important Blue Carbon ecosystems in the Baltic Sea. However, their extent has so far not been precisely mapped except in local field sampling experiments related to national inventory programmes. Differences in the Normalized Difference Vegetation Index (NDVI) have long been used as an indicator of vegetation in remotely sensed datasets and Earth Observation (EO). The differences are particularly large over coastal areas, where in the peak growth season (mid-to-late summer) uniformly vegetated areas such as reeds, sedges, rushes and macrophytes have NDVI values around 0.7 ± 0.2 (1σ), whereas plain water has a relatively low NDVI of –0.2 ± 0.2 (1σ). In this work, Bayesian analysis is applied to identify areas of aquatic vegetation in monthly NDVI composites downloaded from the Sentinel-2 Global Mosaic (S2GM) service. These are then used as indicators of the occurrence of reed belts or other seasonal or permanent vegetation in coastal zones. The method is akin to naïve Bayes and outputs a value that is proportional to the probability of the pixel representing vegetation in water. The prior used is sensitive both to the NDVI and to the distance from shore; areas closer to the coastline are considered more likely to host aquatic vegetation. The method requires as its source datasets monthly NDVI composites from S2GM and a reliable sea mask, extractable from either national coastline layers or suitable land use classes from the Copernicus Coastal Zones dataset.
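A minimal sketch of this kind of per-pixel scoring, using the NDVI statistics quoted above as Gaussian likelihoods; the exponential distance-to-shore prior and its length scale are illustrative assumptions, not the project's exact formulation:

```python
# Naive-Bayes-style probability that a sea pixel hosts aquatic vegetation (e.g. a reed belt),
# combining an NDVI likelihood with a distance-to-shore prior.
import numpy as np
from scipy.stats import norm

def aquatic_vegetation_probability(ndvi, dist_to_shore_m, prior_scale_m=200.0):
    """Posterior probability of vegetation-in-water for a given NDVI and shore distance."""
    prior_veg = np.exp(-dist_to_shore_m / prior_scale_m)   # pixels near the coastline more likely vegetated
    like_veg = norm.pdf(ndvi, loc=0.7, scale=0.2)           # vegetated-water NDVI model (0.7 +/- 0.2)
    like_water = norm.pdf(ndvi, loc=-0.2, scale=0.2)        # open-water NDVI model (-0.2 +/- 0.2)
    return prior_veg * like_veg / (prior_veg * like_veg + (1 - prior_veg) * like_water)

# Example: a pixel 50 m offshore with a July NDVI of 0.5
print(aquatic_vegetation_probability(0.5, 50.0))  # close to 1 -> likely aquatic vegetation
```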
The interpretation of aquatic vegetation has been carried out for the Finnish coast and two Swedish pilot areas in the south (Stockholm) and north (Piteå), in the context of the project "Blue Carbon Habitats – a comprehensive mapping of Nordic salt marshes for estimating Blue Carbon storage potential – a pilot study", funded by the Nordic Council of Ministers. Training and test data were obtained from field-mapped reed outlines, and the probabilistic product was converted to a binary interpretation and sieved to remove areas that were too small or too far from the shoreline. The ground truth of reed outlines aligns in general with the outlines inferred from EO, though the resolution (10 m) of the EO data limits the support near the shore. Contrary to our hypothesis, the posterior probability density of the Bayes product was not found to be strongly linked to species distribution or to field-mapped reed belt density, and a different line of analysis will need to be carried out if these variables are to be predicted from remotely sensed observations.
Forest above-ground biomass (AGB) is identified as an essential climate variable (ECV) by the Global Climate Observing System (GCOS). Monitoring its spatial distribution and temporal variations is therefore a necessity to improve our understanding of climate change and increase our ability to predict its impacts.
In this study, we develop a novel approach to estimate AGB by using the TC×H variable, i.e. the product of percent tree cover (TC) and forest height (H) variables. To do so, we have used already available global datasets of TC and H. Percent tree cover is estimated from optical imagery, and we have retained the following products: a) the Global 2010 Tree Cover at 30m resolution derived from Landsat (Hansen et al., 2013) and b) the 2019 Tree Cover Fraction at 100m resolution derived from Proba-V (Buchhorn et al., 2020). Forest height is estimated from spaceborne lidar data and spatially extrapolated with optical imagery, and we have used the following products: a) the 2005 Global Forest Heights dataset at 1km resolution based on ICESAT-GLAS (Simard et al., 2011) and b) the 2019 Global Forest Canopy Height dataset at 30m resolution based on GEDI (Potapov et al., 2021). The spatial resolution of the datasets is degraded to 1km resolution to produce two TC×H layers: one for epoch 2005-2010 using the Hansen tree cover and the Simard height, and one for epoch 2019 using the Buchhorn tree cover and the Potapov height.
Relationships between TC×H and AGB were established using reference AGB estimates obtained from airborne Lidar datasets available within the ESA Climate Change Initiative Biomass project in the form of 100m resolution layers in Brazil, Indonesia, Australia, and the United States. The rationale behind the choice of the TC×H variable is that it constitutes a proxy of the vegetation volume, which itself is related to the AGB through the wood volumetric density. When the spatial resolution is degraded to 1 km, it is expected that the wood volumetric density can be considered almost uniform at the biome level. Therefore, we have aimed at establishing biome-specific AGB/TC×H relationships that are used to produce global estimates of AGB at 1km resolution for epochs 2005-2010 and 2019. These relationships are established through regressions based on a 3-parameter model, with the parameters estimated at each epoch (2005-2010 and 2019) and biome (temperate and boreal, wet tropical, and dry tropical). The inversion of these relationships provides global AGB estimates at 1km resolution at the two epochs. The AGB difference between the two epochs can be used to estimate the AGB change at a decadal scale.
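A hedged sketch of fitting such a biome-specific relationship; the saturating functional form, initial guesses and file names below are assumptions for illustration, since the actual 3-parameter model is not specified here:

```python
# Fit a 3-parameter AGB vs TCxH relationship for one biome and epoch, then invert it
# to map AGB at 1 km from the TCxH proxy.
import numpy as np
from scipy.optimize import curve_fit

def agb_model(tc_x_h, a, b, c):
    """AGB (Mg/ha) as an assumed saturating function of TCxH (percent tree cover x height)."""
    return a * (1.0 - np.exp(-b * tc_x_h)) ** c

# 1 km aggregates: TCxH and airborne-lidar reference AGB for one biome (hypothetical files)
tc_x_h = np.load("tcxh_1km_wet_tropics.npy")
agb_ref = np.load("agb_ref_1km_wet_tropics.npy")

params, _ = curve_fit(agb_model, tc_x_h, agb_ref, p0=[400.0, 0.001, 1.0], maxfev=10000)
agb_estimate = agb_model(tc_x_h, *params)   # inversion: AGB estimated from TCxH at 1 km
```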
This new approach can provide a low-cost and accurate alternative for the production of AGB maps at the kilometric scale. The validation of the AGB estimates is on-going and the first analysis results are promising. A quantitative comparison with the existing global AGB datasets (in particular the recently released CCI Biomass datasets) will be presented, in order to evaluate the strengths and weaknesses of each approach and identify the complementarity between methods.
Buchhorn, M., Smets, B., Bertels, L., De Roo, B., Lesiv, M., Tsendbazar, N.-E., Herold, M., Fritz, S., 2020. Copernicus Global Land Service: Land Cover 100m: collection 3: epoch 2019: Globe. https://doi.org/10.5281/zenodo.3939050
Hansen, M.C., Potapov, P.V., Moore, R., Hancher, M., Turubanova, S.A., Tyukavina, A., Thau, D., Stehman, S.V., Goetz, S.J., Loveland, T.R., Kommareddy, A., Egorov, A., Chini, L., Justice, C.O., Townshend, J.R.G., 2013. High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 342, 850–853. https://doi.org/10.1126/science.1244693
Potapov, P., Li, X., Hernandez-Serna, A., Tyukavina, A., Hansen, M.C., Kommareddy, A., Pickens, A., Turubanova, S., Tang, H., Silva, C.E., Armston, J., Dubayah, R., Blair, J.B., Hofton, M., 2021. Mapping global forest canopy height through integration of GEDI and Landsat data. Remote Sens. Environ. 253, 112165. https://doi.org/10.1016/j.rse.2020.112165
Simard, M., Pinto, N., Fisher, J.B., Baccini, A., 2011. Mapping forest canopy height globally with spaceborne lidar. J. Geophys. Res. Biogeosciences 116. https://doi.org/10.1029/2011JG001708
The key parameters provided by the Soil Moisture and Ocean Salinity (SMOS) mission over land are soil moisture (SM) and the L-band vegetation optical depth (L-VOD). Although the retrieval of SM was the low-hanging fruit of the mission, the information about vegetation has reached maturity, leading to growing interest in testing the L-VOD product and using it in applications. Previous studies investigated the correlation between L-VOD and vegetation properties, such as vegetation height and forest biomass, made available by databases.
In this paper, L-band vegetation optical depth (L-VOD) retrieved by SMOS is compared against vegetation parameters (RH100 and PAI) retrieved by Global Ecosystem Dynamics Investigation (GEDI) lidar instrument, recently launched by NASA. L-VOD was retrieved using the recent v700 version of SMOS level 2 algorithm. In order to manage the different spatial resolutions, GEDI parameters were averaged within SMOS pixels and a threshold to the minimum number of GEDI samples per SMOS pixel was applied. The investigation is multitemporal, since spatial correlations between monthly averages are investigated from May 2019 to April 2020, and a temporal extension to a two year interval is in progress.
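A minimal sketch of this aggregation and correlation step, with placeholder column names and an illustrative minimum-sample threshold:

```python
# Average GEDI RH100 footprints within each SMOS pixel, discard pixels with too few footprints,
# and compute the monthly spatial correlation with SMOS L-VOD.
import pandas as pd
from scipy.stats import pearsonr

gedi = pd.read_csv("gedi_rh100_footprints.csv")   # hypothetical: smos_pixel_id, rh100
lvod = pd.read_csv("smos_lvod_monthly.csv")       # hypothetical: smos_pixel_id, lvod

agg = gedi.groupby("smos_pixel_id")["rh100"].agg(["mean", "count"])
agg = agg[agg["count"] >= 50]                     # minimum number of GEDI samples per SMOS pixel

merged = agg.join(lvod.set_index("smos_pixel_id"), how="inner").dropna()
r, _ = pearsonr(merged["mean"], merged["lvod"])
print(f"Spatial Pearson correlation, L-VOD vs RH100: {r:.2f}")
```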
The analysis was initially done for four large continents. For Africa and South America, mostly covered by tropical vegetation, the Pearson correlation coefficients between L-VOD and RH100 are higher than 0.8 in all months of the year. Conversely, seasonal effects are observed in North America and Asia, producing a lower correlation coefficient in the colder months. RMS differences between L-VODs retrieved by SMOS and those obtained using a linear regression on RH100 are lower than 0.2 in all cases, and close to 0.1 in most cases. Using PAI in place of RH100, slightly lower spatial correlations are generally obtained.
The analysis was repeated considering three latitude belts: Northern, Tropical, and Southern. In the tropical belt the coefficients of L-VOD versus RH100 regression are stable and the Pearson correlation coefficient is higher than 0.88 for all months of the year. For Northern vegetation the regression slope and the Pearson correlation coefficient are stable from May to September, but decrease in the winter season. Lower Pearson correlation coefficients (about 0.7) are found in the Southern belt, due to reduced dynamic ranges of L-VOD and vegetation height.
All correlation coefficients between v700 L-VOD and RH100 are better than those obtained with L-VOD from previous level 2 versions. Overall, the results confirm the good potential of L-VOD to monitor vegetation height in different environments. The synergistic use of GEDI and SMOS L-VOD datasets can improve the accuracy and/or the timeliness of monitoring vegetation changes occurring at yearly or monthly time scales, such as deforestation, regrowth and desertification.
Passive microwave observations from 1.4 to 36 GHz have already shown sensitivity to vegetation parameters, primarily through the calculation of the Vegetation Optical Depth (VOD) at individual window frequencies separately. Here we evaluate the synergy of this frequency range for vegetation characterization over Tropical forest, through the estimation of two vegetation parameters: the foliage and photosynthesis activity, as described by the Normalized Difference Vegetation Index (NDVI), and the woody components and carbon stock, as described by the Above Ground Carbon (AGC), using different combinations of channels in the considered frequency range. Neural network retrievals are trained on these two vegetation parameters (NDVI and AGC) for several microwave channel combinations, including that of the future Copernicus Imaging Microwave Radiometer (CIMR), which will for the first time observe simultaneously in window channels from 1.4 to 36 GHz. This methodology avoids any assumptions about the complex interaction between the surface (vegetation and soil) and the radiation, as well as any ancillary observations, providing a genuine and objective evaluation of the information content of the passive microwave frequencies for vegetation characterization. Our analysis quantifies the synergy of the microwave frequencies from 1.4 to 36 GHz. For the retrieval of NDVI, the coefficient of determination R² between retrieved and true NDVI reaches 0.84 when using the full 1.4 to 36 GHz range as will be measured by CIMR, with a retrieval error of 0.07. For the retrieval of AGC, the coefficient of determination R² reaches 0.82 with CIMR, with an error of 21 Mg/ha. This study also confirms that 1.4 GHz observations have the highest sensitivity to AGC compared to other frequencies up to 36 GHz, at least in tropical environments.
CIMR will provide valuable ecological indicators to enhance our present global vegetation understanding. Considering both vegetation aspects together (foliage photosynthesis activity and carbon stocks) offers a more robust and consistent characterization and assessment of long-term vegetation dynamics at large scale. CIMR will operate in synergy with MetOp-SG that carries the ASCAT scatterometer at 5.2 GHz. The complementarity between CIMR and the active microwave observations from ASCAT will also be evaluated, over Tropical forests, for vegetation characterization.
Blockchain Applications for Biomass Measurement and Deforestation Mitigation
Accurate estimation of forest above-ground biomass and its change over time is critical to forest conservation efforts and, consequently, to the current voluntary carbon market. A positive change in biomass through planting more trees is important, but afforestation is merely the first step in producing an increase in global carbon sequestration. The true variable influencing the outcome of these efforts is the resilience of the biomass and the tree growth that this resilience results in. Typically, forest densities range from 1,000 to 2,500 trees per hectare, with carbon sequestration rates of up to about 10 tonnes per hectare per year in the tropics (0.8 to 2.4 tonnes in boreal forests, 0.7 to 7.5 tonnes in temperate regions and 3.2 to 10 tonnes in the tropics). By the age of 100, one broadleaf tree (commonly found in tropical rainforests) could have sequestered up to one tonne of carbon. In comparison, chopping and burning just 5 to 10 average-sized pine trees (~450 kg dry weight each) instantly releases all of the carbon that one hectare of trees captured in one year. This points to the extreme negation effect that logging can have, and can be a powerful point of change for policy makers.
The new forests planted in the past two decades represent merely 5% of the net global carbon sink. While these numbers will grow with time, it is currently far more important to monitor and prevent deforestation of existing mature forests, and this is also potentially one of the most difficult interventions to implement. Most deforestation and forest degradation are concentrated in the tropics, both because of illegal logging activities that can go undetected due to the small spatial scales at which they occur, and because of general forest clearing for cattle pasture and agricultural expansion. The most drastic effects of this rampant deforestation have been witnessed in the Amazon rainforest, which is strongly tending towards becoming, or may already have become, a net emitter of carbon instead of a net sink if the current rates of deforestation continue. Since the turn of the century, Brazil alone has released 32.5 Gt of CO2 from deforestation; for reference, average annual global CO2 emissions are around 40 Gt. The rate of carbon emission also varies according to forest type. Old-growth primary forests, unlike their secondary and fast-rotation counterparts, can release carbon that has taken centuries to accumulate. Hence, illegal logging, especially deep within primary forests, needs early detection through regularly updated AGB change estimation.
At a policy level, accurate and timely detection of changes in biomass can be of value to countries that are attempting to recognize indigenous peoples and local communities as owners of their lands. Enforcing the rights of indigenous communities is a proven strategy to protect standing forests and enhance the carbon stored in them.
Governments across the globe have been actively incorporating forest landscape restoration measures in their policies. However, the effectiveness of these interventions towards carbon removal and climate change mitigation is difficult to quantify, especially in regions with insufficient biomass data. To fill these gaps in knowledge, various programs have provided open-source access to sensor-fused datasets at resolutions ranging from 100 m (ESA Climate Change Initiative) to 500 m (NASA Pantropical AGB dataset).
Our work aims to utilize machine learning techniques such as Random Forest and XGBoost to train our algorithm on recently developed AGB datasets and vegetation indices extracted from satellite image radiances. The relationship learned between these inputs is then used to predict AGB in a different location and year. Since above-ground biomass accounts for about 27% of the entire carbon sequestered by a tree, the pixel-wise difference between two independent AGB predictions for consecutive years allows us to estimate the total carbon sequestration at that pixel in that year. Moreover, biomass change detection can help clearly identify logging activity, storm damage, restoration after forest fires and reforestation efforts undertaken by forest managers, which also allows monitoring of policy implementation. Through sensor fusion with multiple satellite data streams including SAR data, we can monitor large regions (including remote and inaccessible ones) at night and through clouds (a major issue in imaging the tropics), with a rapid revisit rate.
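A minimal sketch of this workflow is given below, assuming a generic regressor trained on vegetation-index features against a reference AGB dataset and then applied to two years of imagery; the synthetic arrays, feature choices and model settings are illustrative assumptions, not the project's actual configuration.

```python
# Sketch of the described workflow: train a regressor on vegetation-index features
# against a reference AGB dataset, predict AGB for two years, and take the pixel-wise
# difference as the biomass change. Synthetic arrays stand in for the real rasters.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_pixels, n_features = 20000, 6          # hypothetical features, e.g. NDVI, EVI, NDWI, SAR backscatter
X_train = rng.normal(size=(n_pixels, n_features))
agb_train = 100 + 40 * X_train[:, 0] - 15 * X_train[:, 1] + rng.normal(0, 10, n_pixels)  # reference AGB (t/ha)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, agb_train)

# Feature stacks for the same area in two years (hypothetical)
X_year1 = rng.normal(size=(n_pixels, n_features))
X_year2 = X_year1 + rng.normal(0, 0.1, size=X_year1.shape)
agb_change = model.predict(X_year2) - model.predict(X_year1)   # AGB change per pixel (t/ha)
print("Mean predicted AGB change: %.2f t/ha" % agb_change.mean())
```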
Predictions of biomass change and carbon sequestration both occur at the pixel resolution of the training dataset, although we are also working on methods to increase the resolution of the final products through the use of deep neural networks. The ability to monitor biomass change at 100 m resolution and finer will also assist with measuring changes in forest boundaries. Forest boundaries can be substantial areas of change in forest extent due to easy access, and estimating those changes can be very helpful to forest managers. Moreover, we test several vegetation index combinations to better understand, and potentially provide insight towards standardizing, best practices for AGB prediction models globally.
Our work further contributes to addressing the severe lack of multi-temporal AGB datasets. Projects such as the NASA GEDI mission, while providing very high-resolution AGB estimates, currently provide only a one-time estimate. A similar problem occurs with the ESA CCI biomass datasets, which are only valid for 2010, 2017 and 2018; internally consistent AGB change datasets were only made available in December 2021. Other datasets are available only for single years scattered across the past two decades. While work is consistently being done to acquire and formulate these datasets, there is a need for predictive software to estimate continuous time series of AGB change across several years, which can then be validated against ground truth and reference datasets whenever they become available.
Environmental protection does, however, come at the cost of economic growth, and this is a major hurdle, especially in developing nations. Therefore, a highly effective way of incentivizing countries to strictly control deforestation is to provide them with monetary compensation through carbon credits.
The carbon credit market has a key role to play in addressing climate change, and dClimate is entering this field using machine learning and advanced AI algorithms. The current voluntary carbon market does not put enough emphasis on preventing deforestation: only 32% of carbon offsets deal with preventing deforestation, and the IPCC believes the number of carbon removal projects must increase in order to limit warming to 1.5°C.
One of the primary issues with carbon offsets is the lack of transparency in the market. dClimate is revolutionizing the industry by verifying our own offsets using blockchain technology, which will allow buyers and project creators to view a transparent, immutable ledger with all needed information available. To do this, we are creating an above-ground biomass estimation and monitoring system, which will allow us to price tokens based on the amount of carbon sequestered by existing biomass, as captured by AGB change. Currently, the voluntary carbon market (VCM) registries are plagued by double or triple counting, wherein the same parcel of land is sold multiple times. This not only prevents adequate market dynamics for the price of carbon offsets, but also limits the growth of the industry. In addition, the measurement of deforestation and carbon output has traditionally been non-standard, relying on bespoke methodologies that are not equipped to handle the problem at scale.
Leveraging simultaneous advances in decentralized technologies such as Chainlink, IPFS and distributed ledgers (Ethereum), we are able to create new regenerative finance (ReFi) primitives which facilitate carbon price discovery. Additionally, through the use of decentralized execution environments, all computations are transparent and can be inspected by anyone, providing a platform for trustless interoperability without reliance on centralized points of failure. By creating this infrastructure we not only build the tools to mitigate deforestation, but can also accurately measure other parts of the collective climate economy.
As a result of bringing together the new multi-resolution (spatial and temporal) datasets from multiple global organizations, increasing end-to-end transparency, and creating pressure on countries and stakeholders through a penalty system for anthropogenic biomass reduction, our work will develop detailed maps of Above Ground Biomass and its spatio-temporal variation. This will not only provide financial impetus to all nations who choose to use our services (especially to low-income countries with high biomass reserves), but will also assist the global scientific community by providing a rapidly updated database through an easily accessible API, aimed at creating a standard system of carbon emission control and sequestration measurement.
Since the collapse of the Soviet Union, and while transitioning to a new forest inventory system, Russia has reported almost no change in growing stock (+1.3%) and biomass (+0.6%). The Food and Agriculture Organization of the United Nations (FAO) Forest Resources Assessment (FRA) national report 2020 presented a growing stock volume (GSV) of 81.1 billion m3, corresponding to 63.0 billion tonnes of above-ground biomass (73.3 t/ha). The FAO FRA national report is based on the outdated State Forest Register. The first cycle of the National Forest Inventory (NFI) was completed in Russia in 2020, and its results were announced at the UN Climate Change Conference of the Parties (COP26) in Glasgow. The total GSV of Russian forests is 111.7 billion m3, or 38% higher than in the FAO FRA report. This discrepancy is explained by the transition to the new inventory system (the NFI) and the gap in updating forest information.
In Russia, the long intervals between consecutive surveys and the difficulty of accessing very remote regions in a timely manner with an inventory system make satellite remote sensing (RS) an essential tool for capturing forest dynamics and providing a comprehensive, wall-to-wall perspective on biomass distribution. However, observations from current RS sensors are not suited for producing accurate biomass estimates unless the estimation method is calibrated with a dense network of measurements from ground surveys (Chave et al., 2019). Here we calibrated models relating two global RS biomass data products (GlobBiomass GSV (Santoro, 2018) and CCI Biomass GSV (Santoro & Cartus, 2019)) and additional RS data layers (a forest cover mask (Schepaschenko et al., 2015) and the Copernicus Global Land Cover CGLS-LC100 product (Buchhorn et al., 2019)) with ca. 10,000 ground plots to reduce inconsistencies in the individual input maps due to imperfections in the RS data and approximations in the retrieval procedure (Santoro, 2019; Santoro et al., 2021). The combination of these two sources of information, i.e., ground measurements and RS, exploits the advantages of both: (i) highly accurate ground measurements and (ii) the spatially comprehensive coverage of RS products and methods. The number of ground plots currently available may be insufficient for providing an accurate estimate of GSV for the country when used alone, but they are the key to obtaining unbiased estimates when used to calibrate RS datasets (Næsset et al., 2020).
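The calibration idea can be sketched as follows, assuming a single RS GSV predictor and an ordinary linear fit against plot GSV; the actual study combines several RS layers in a more elaborate model, and the values below are synthetic and purely illustrative.

```python
# Illustrative calibration of a remote-sensing GSV layer against ground plots.
# A single predictor and an ordinary linear fit are used purely as a sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
gsv_rs_plots = rng.uniform(50, 350, 10000)                         # RS GSV at plot locations (m3/ha)
gsv_ground = 0.85 * gsv_rs_plots + 20 + rng.normal(0, 30, 10000)   # plot-measured GSV (m3/ha)

calib = LinearRegression().fit(gsv_rs_plots.reshape(-1, 1), gsv_ground)

gsv_rs_map = rng.uniform(50, 350, (500, 500))                      # wall-to-wall RS GSV map
gsv_calibrated = calib.predict(gsv_rs_map.reshape(-1, 1)).reshape(gsv_rs_map.shape)
print("Calibration slope %.2f, intercept %.1f m3/ha" % (calib.coef_[0], calib.intercept_))
```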
Our estimate of the Russian forest GSV is 111±1.3 billion m3 for the official forested area (713.1 million ha) for the year 2014, which is very close to the NFI aggregated results. An additional 7.1 billion m3 were found due to the larger forested area (+45.7 million ha) recognized by RS (Schepaschenko et al., 2015), following the expansion of forests to the north (Schaphoff et al., 2016), to higher elevations, in abandoned arable land (Lesiv et al., 2018), as well as the inclusion of parks, gardens and other trees outside of forest, which were not counted as forest in the State Forest Register. Based on cross-validation, our estimate at the province level is unbiased. The standard error varied from 0.6 to 17.6% depending on the province. The median error was 1.6%, while the area weighted error was 1.2%. The predicted GSV with associated uncertainties is available here (https://doi.org/10.5281/zenodo.3981198) as a GeoTiff at a spatial resolution of 3.2 arc sec. (ca 0.5 ha).
Acknowledgements
This study was partly supported by the European Space Agency via projects IFBN (4000114425/15/NL/FF/gp). The NFI data preparation and pre-processing were financially supported by the Russian Science Foundation (project no. 19-77-30015). FOS data preparation and processing for the Central Siberia were supported by the RSF (project no 21-46-07002).
References
Buchhorn, M., Bertels, L., Smets, B., Lesiv, M., & Tsendbazar, N.-E. (2019). Copernicus Global Land Service: Land Cover 100m: version 2 Globe 2015: Algorithm Theoretical Basis Document. Zenodo. https://doi.org/10.5281/zenodo.3606446
Chave, J., Davies, S. J., Phillips, O. L., et al. (2019). Ground Data are Essential for Biomass Remote Sensing Missions. Surveys in Geophysics, 40(4), 863–880. https://doi.org/10.1007/s10712-019-09528-w
Lesiv, M., Schepaschenko, D., Moltchanova, E., et al. (2018). Spatial distribution of arable and abandoned land across former Soviet Union countries. Scientific Data, 5, 180056. https://doi.org/10.1038/sdata.2018.56
Næsset, E., McRoberts, R. E., Pekkarinen, A., et al. (2020). Use of local and global maps of forest canopy height and aboveground biomass to enhance local estimates of biomass in miombo woodlands in Tanzania. International Journal of Applied Earth Observation and Geoinformation, 102138. https://doi.org/10.1016/j.jag.2020.102138
Santoro, M. (2018). GlobBiomass—Global datasets of forest biomass [Data set]. https://doi.org/10.1594/PANGAEA.894711
Santoro, M. (2019). CCI Biomass Product User Guide (p. 35). GAMMA Remote Sensing. https://climate.esa.int/sites/default/files/biomass_D4.3_Product_User_Guide_V1.0.pdf
Santoro, M., & Cartus, O. (2019). ESA Biomass Climate Change Initiative (Biomass_cci): Global datasets of forest above-ground biomass for the year 2017, v1 [Application/xml]. Centre for Environmental Data Analysis (CEDA). https://doi.org/10.5285/BEDC59F37C9545C981A839EB552E4084
Santoro, M., Cartus, O., Carvalhais, N., et al. (2021). The global forest above-ground biomass pool for 2010 estimated from high-resolution satellite observations. Earth System Science Data, 13, 3927–3950. https://doi.org/10.5194/essd-13-3927-2021
Schaphoff, S., Reyer, C. P. O., Schepaschenko, D., Gerten, D., & Shvidenko, A. (2016). Tamm Review: Observed and projected climate change impacts on Russia’s forests and its carbon balance. Forest Ecology and Management, 361, 432–444. https://doi.org/10.1016/j.foreco.2015.11.043
Schepaschenko, D., Shvidenko, A. Z., Lesiv, M. Yu., et al. (2015). Estimation of forest area and its dynamics in Russia based on synthesis of remote sensing products. Contemporary Problems of Ecology, 8(7), 811–817. https://doi.org/10.1134/S1995425515070136
Forage provision is an important indicator of rangeland health and a reliable measure for evaluating land degradation. In dry rangelands it is largely limited by moisture availability, compounded by grazing pressure, as it sustains a significant proportion of livestock-based systems. For sustainable and adaptive management, parameters such as biomass production and forage quality are of key interest, yet their quantification and monitoring remain laborious and costly. Advancing remote sensing technologies such as hyperspectral readings and drone imaging enable rapid, repeatable and non-destructive estimation of these parameters that can be applied over large spatial scales. While these are increasingly being integrated into ecological research, robust prediction models supported by field data are still lacking, especially in highly dynamic systems like semi-arid savannahs. In our study we aim to answer the following research questions: (1) To what extent can we model forage provision (quality and quantity) from resampled hyperspectral data? (2) Can we model forage provision from UAV-based multispectral imagery calibrated with field spectrometer prediction models? (3) How do artificial hyperspectral data, interpolated from multispectral data, enhance the prediction quality? (4) How does forage provision vary between two differently managed rangelands? To address these questions, we took hyperspectral readings with a field spectrometer from herbaceous canopies along transects in two management types in a Namibian semi-arid savannah. Plant biomass samples were collected at the reading areas to measure forage quantity and quality. Machine learning and deep learning methods were used to establish hyperspectral prediction models for both forage quality and quantity. We applied these models to hyperspectral readings from a broader area. For upscaling the hyperspectral models, we acquired drone multispectral imagery along the same transects. Multispectral prediction models were set up using the predicted values from the hyperspectral prediction model. As predictors for the model we used the pure spectra, derived vegetation indices and artificial hyperspectral data obtained by interpolating the multispectral bands. We then created forage quantity and quality maps to visualize and compare forage provision dynamics in the two management systems. While field-based hyperspectral models offer greater spectral resolution for assessing complex forage quality parameters, and drone imagery offers unprecedented spatial and temporal data products for mapping forage parameters at the landscape level, each is limited on its own. Emerging UAV-based hyperspectral imagery minimizes these limitations, a technology that will propel remote sensing towards mapping even more complex variables and resolving ecological questions.
Agriculture is a critical source of employment in rural Colombia and is one of the sectors most affected by climate variability and climate change, and one where solutions to key challenges affecting the productivity and sustainability of forages and the livestock sector are required. Increasing yields of forage crops can help improve the availability and affordability of livestock products while also easing pressure on land resources through enhanced resource utilisation. This study aims to develop remote sensing-based approaches for forage monitoring and biomass prediction at local and regional levels in Colombia. Local access to such information can help improve decision making and increase productivity and competitiveness while minimising impacts on the environment. Ten locations were sampled between 2018 and 2021 across climatically distinct areas in Colombia, comprising five farms in Patía in Cauca department, four farms in Antioquia department, and one research farm at Palmira in Valle del Cauca department. Ash content (Ash), crude protein (CP, %), dry matter content (DM, g per square metre) and in-vitro digestibility (IVD, %) were measured from different Kikuyu and Brachiaria grasses during the field sampling campaigns. Multispectral bands from coincident PlanetScope acquisitions, along with various derived vegetation indices (VIs), were used as predictors in the model development. To determine the optimum models, the improvement capabilities of using an averaging kernel, feature selection approaches, various regression algorithms and metalearners (simple ensembling and stacks) were explored. Several of the applied algorithms have built-in feature selection functions; to test the model improvement capabilities of an independent feature selection approach, for algorithms both with and without built-in selection, all models were run a) with no feature pre-selection, b) with Recursive Feature Elimination (RFE, package: caret) and c) with Boruta (package: Boruta) feature selection. A range of algorithms (n=26) belonging to the classes of decision trees, Support Vector Machines, Neural Networks, distance-based methods and linear approaches was tested. All algorithms, including metalearners, were tested with each of the three feature selection approaches while employing 10-fold cross-validation with 3 repeats. In the performance evaluation based on unseen test data, CP and DM were predicted relatively well for all three sites (R2 0.52 – 0.75, RMSE 1.7 – 2.2 % and R2 0.47 – 0.65, RMSE 260 – 112 g/m2, respectively). As part of the study, the investigation was carried out in cooperation with smallholder farmers to determine their attitudes towards, and potential constraints to, mainstreaming such technologies and their outcomes on the ground. Through improving communication between the earth observation and agricultural communities and the successful integration of satellite-based technologies, future strategies can be implemented for increasing production and improving forage management while maintaining ecosystem attributes and services across tropical grasslands.
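The evaluation scheme described above (an independent feature selection step wrapped around a learner, assessed with 10-fold cross-validation repeated 3 times) can be sketched as follows. The study itself used the R packages caret and Boruta; this illustration uses scikit-learn with synthetic data and an arbitrary random-forest learner.

```python
# Sketch of the evaluation scheme: recursive feature elimination followed by a learner,
# assessed with 10-fold cross-validation repeated 3 times. Synthetic data for illustration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=300, n_features=20, n_informative=8, noise=5.0, random_state=0)

pipeline = make_pipeline(
    RFE(RandomForestRegressor(n_estimators=50, random_state=0), n_features_to_select=8),
    RandomForestRegressor(n_estimators=200, random_state=0),
)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=0)
scores = cross_val_score(pipeline, X, y, cv=cv, scoring="r2")
print("Mean cross-validated R2: %.2f" % scores.mean())
```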
The rise in atmospheric CO2 due to anthropogenic emissions is the leading cause of climate change. In order to avoid reaching tipping points in the Earth system, efforts to cut emissions and compensate for existing atmospheric CO2 are being pursued. Within this context, nature-based solutions such as reforestation, afforestation and agroforestry favour the potential of carbon sequestration through tree growth and health. However, launching planting activities alone is insufficient for these initiatives to have a substantial impact: routine field maintenance and ecosystem restoration are essential. Since several decades are necessary before the positive impacts of such initiatives can be observed, long-run investments become indispensable, and with them, monitoring and metrics. Current methods that measure carbon storage in forests and its evolution are based regionally on National Forest Inventories and globally on products derived from satellite imagery at coarse resolution. These solutions frequently lack the temporal and spatial coverage needed to ensure the traceability and transparency required to monitor most interventions.
In order to follow the evolution of local and regional scale nature-based projects effectively, a monitoring strategy relying on remote sensing is presented, covering the needs of new afforested sites and of reforestation and agroforestry management in existing forested areas. The methodology revolves around careful monitoring of Above-Ground Biomass (AGB), one of the most reliable means of assessing natural carbon sinks. The proposed monitoring system covers regional stakeholders' needs to trigger payments for the environmental services implemented by over a thousand local farmers.
At high resolution, several reforestation efforts in the Sahel area in Africa are being monitored using Very High Resolution (VHR) imagery from Airbus' Pleiades mission. The detection of individual trees is possible thanks to the mission's pan-sharpened resolution of 0.5 m. A monitoring system for larger-scale regional projects using medium-resolution imagery with 20 m pixels is also presented. For the latter, data sources include the Copernicus Sentinel-1 and Sentinel-2 missions for Synthetic Aperture Radar (SAR) and multi-spectral imagery, respectively, LiDAR-based AGB data provided by the Global Ecosystem Dynamics Investigation (GEDI) mission on board the International Space Station (ISS), land cover maps and Digital Elevation Models (DEM).
In order to train, test and validate regression methods that predict the evolution of carbon stored in individual trees, reliable and standardised measurements at tree level are necessary. A dedicated in-situ survey strategy has been designed collaboratively with local communities and field experts in the Sahel to overcome the limitations of horizontal GNSS resolution and obtain reliable measurements at the tree level.
While in-situ surveying cannot currently take place due to security constraints in West Africa, a preliminary study is being carried out in Catalunya, Spain, benefiting from an AGB dataset obtained from the Spanish 4th National Forest Inventory (NFI-4). This proof of concept shows the correlations of the individual data sources with field biomass and demonstrates the combined use of all the datasets in the methodology to address biomass assessment. The study over the region of Catalunya serves as a basis to transfer the methodology to the Sahel region, where the aforementioned nature-based projects are taking place.
Beyond the CO2 sequestration potential, the beneficial side-effects of nature-based solutions include improved soil quality, increased crop yield, regulation of ground temperature, biodiversity recovery, and positive socio-economic impacts, which are rarely quantified. Additional metrics are presented as valuable information complementing the overall Key Performance Indicators, to provide a comprehensive vision of the impact of the reforestation and afforestation activities on the local communities involved.
The presented approach is developed within the JESAC project (https://www.jesac-project.com), integrating a virtual monitoring platform. Payments for Environmental Services (PES) will be triggered once the trees have stored a certain amount of carbon. These payments cover in-field activities for land restoration and voluntary carbon offsetting, which is traced transparently through blockchain technology.
Several studies have highlighted the saturation of L-band SAR signal sensitivity with increasing forest density. In those cases, a direct modelling approach or an empirical regression guided by ground-sampled measurements may not be effective for estimating Above Ground Biomass (AGB) values higher than ~150 – 200 t/ha. Machine learning approaches have therefore been proposed in the recent literature to deal with this type of constraint in active (and passive) microwave monitoring of forests, by including different types of ancillary information.
In the ESA MAFIS project we tested the feasibility of a Random Forest (RF) procedure including SAR and optical data. The strength of the RF solution lies in the possibility to include different types of Earth observation quantities, in addition to the L-band backscatter, for characterizing the AGB. In this way the L-band SAR signal is coupled with multispectral optical indices to limit the saturation effects of the SAR signal, without explicitly dealing with the complex non-linearity of the combination of the input variables. However, this approach can be effectively exploited only if a sufficient set of AGB reference data is available. In general, in-situ measurements sampled on tens to hundreds of ground plots are not sufficient to properly train a data-driven algorithm. In the MAFIS project we tried to overcome this limitation by exploiting recent aerial LiDAR data made available by the Veneto Region over the alpine areas of Lorenzago di Cadore and Bosco del Cansiglio (North-East of Italy). Those areas were affected by the Vaia storm, which occurred from 26 to 30 October 2018. This event caused a dramatic loss of forest area in several Italian regions due to strong winds that felled a massive quantity of trees. Regione Veneto acquired a large set of aerial LiDAR data after the Vaia storm to map the extent of the affected areas. This rather unique dataset represents a good opportunity to evaluate the effectiveness of a Random Forest approach for AGB retrieval by means of the fusion of L-band SAR data and multispectral data: the forest areas covered by the flights span several tens of hectares and provide thousands of training examples of intact areas for which the forest AGB can be derived from the LiDAR measurements of tree height. In particular, the LiDAR data have been processed to derive the Digital Terrain Model (DTM) and the Digital Surface Model (DSM), which have been used to derive the tree height layer over the considered forest areas. Finally, a corrected version (fitted to local data acquired during the MAFIS in-situ survey) of the dendrometric tables of the second Italian National Forest Inventory (INFC), which define volume estimation equations adapted to the different forest species, has been applied to the most common tree species of the considered Alpine regions, i.e. Fagus, Abies alba and Larix decidua, to compute the LiDAR-based AGB layer, which ranges between ~200 and ~1000 m3/ha over the analysed regions. The latter has then been divided into training and test sets, used respectively to train the RF model and to test its performance.
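The LiDAR processing chain described above (canopy height as the difference between the surface and terrain models, followed by a species-specific volume equation) can be sketched as follows; the arrays and the power-law coefficients are placeholders, not the INFC dendrometric table values.

```python
# Sketch of the LiDAR chain: canopy height = DSM - DTM, then a height-to-volume relation.
# The power-law coefficients below are placeholders, not the INFC table values.
import numpy as np

rng = np.random.default_rng(3)
dtm = rng.uniform(800, 1200, (1000, 1000))      # digital terrain model, elevation (m)
dsm = dtm + rng.uniform(0, 35, dtm.shape)       # digital surface model, elevation (m)
canopy_height = dsm - dtm                       # tree height layer (m)

a, b = 8.0, 1.3                                 # hypothetical volume equation V = a * h**b (m3/ha)
volume = a * canopy_height ** b
print("Volume range: %.0f-%.0f m3/ha" % (volume.min(), volume.max()))
```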
The input data to the RF model are the HH and HV backscattering coefficients, extracted from ascending and descending ALOS-2 PALSAR-2 L1.1 SAR products, and multispectral reflectances (in the VIS, NIR and SWIR), extracted from Sentinel-2 L2A products. Both the ALOS-2 and the Sentinel-2 data have been collected on dates comparable with the time range of the aerial LiDAR acquisitions.
The trained RF model, evaluated on the independent test set, showed very encouraging results, with a correlation coefficient higher than 70% and very coherent spatial patterns of AGB within the mountain landscape. Finally, the onset of saturation effects is observed at a threshold of about 900 m3/ha.
Reducing uncertainty in the estimation of aboveground biomass (AGB) stocks is required to map global aboveground carbon stocks at high spatial resolution (< 1 km) and monitor patterns of woody vegetation growth and mortality to assess the impacts of natural and anthropogenic perturbations to ecosystem dynamics. The NASA Global Ecosystem Dynamics Investigation (GEDI) is a lidar mission launched to the International Space Station in 2018 that has been collecting science data since April 2019 and is expected to continue until at least January 2023. These observations underpin efforts by the NASA Carbon Monitoring System (CMS) to advance pantropical mapping of forest and woodland AGB and AGB change through fusion of GEDI with interferometric Synthetic Aperture Radar (InSAR) observations from current and upcoming missions. These aim to facilitate much-needed improvements to national-scale carbon accounting and other monitoring, reporting and verification (MRV) activities across forest and woodland ecosystems in pantropical countries. Here we present a novel fusion approach that combines billions of GEDI measurements with high resolution InSAR data acquired between 2010 and 2019 by TanDEM-X, resulting in wall-to-wall canopy height and AGB estimates at 1 ha spatial resolution across the pantropics, including Brazil, Gabon, Mexico and Australia. We first present AGB prediction models that use GEDI measurements of canopy height and cover at the scale of field plots typically used for calibration and validation of satellite mapping of AGB. These include the footprint scale (0.0625 ha) and, through aggregation at International Space Station (ISS) orbital crossovers, the 1 and 4 ha scales specified by upcoming spaceborne InSAR missions designed for global mapping of AGB (NASA/ISRO NISAR, ESA BIOMASS). We show that the addition of GEDI measurements improved 1 ha TanDEM-X canopy height RMSE by 16.6-38.2% over pilot countries and reduced the magnitude of systematic deviations observed using TanDEM-X alone. Finally, using new models that link GEDI plot scale estimates of AGB with vertical and horizontal canopy structure metrics from TanDEM-X, and Generalized Hierarchical Model-Based inference (GHMB) to propagate uncertainty, we compare the precision of estimates achieved through our fusion approach to those achieved using GEDI or TanDEM-X alone. This study defines good practices for linking GEDI observations with those from satellite imaging SAR that are based on refined measures of quality and geolocation, and their impact on estimates of AGB uncertainty achieved through fusion of GEDI with satellite InSAR. Our approach takes full advantage of more direct estimates of structure and AGB from GEDI, and further highlights the importance of a formal and transparent framework to estimate uncertainty and enable the separation of true and spurious change in the monitoring of AGB across pantropical forest and woodland ecosystems.
Earth observation is a necessary resource for understanding some of the world's most sensitive ecosystems. Kenya's coastal communities have suffered greatly from land degradation and poor soil health due to climate change and over-farming. This project aims to look deeper into the ways that we can save these rural communities by using satellite imagery to gain a better understanding of the Green World Campaign's regenerative efforts throughout coastal Kenya. Using very high resolution (VHR) imagery from MAXAR's WorldView satellites, the intent is to understand how this extremely high resolution imagery, coupled with field data and random forest classification methods, can help to develop a more accurate understanding of soil health and tree growth, thus directly impacting the future livelihood of these communities.
In conjunction with the ever-evolving high resolution imagery and SmallSat constellation expansion, we propose a conceptual model for both the public and private sectors that marries accurate data with direct funding opportunities using cryptocurrencies through biomass and carbon monitoring practices. This uniquely holistic model has proven it is capable of restoring the economy and ecology of communities struggling on the front lines of climate change. This regenerative model's “people-and-planet” approach addresses the health of both landscapes and communities, leading to improved rural livelihoods, nutrition, biodiversity, soil health, and carbon “drawdown". Earth Observation plays a critical role in this process, for both understanding the past landscape's soil levels as well as looking toward future imagery analysis. Having the ability to visualize this landscape change in real-time is a direct confirmation of progress on both the micro and macro levels of climate resilience. Increasing the effectiveness of studying remote regions will not only be of importance to rural communities in Kenya but will also be usable in other remote areas of the world, helping to gain a greater perspective of global system change and forest abundance.
Vegetation biomass is a globally important, climate-relevant terrestrial carbon pool. In tundra permafrost lowland landscapes north of the treeline, the low-level structure of the vegetation poses a challenge for deriving plant biomass from both optical and SAR satellite remote sensing. Still, a range of tundra types have spectral or structural characteristics suitable for land cover classification. Higher vegetation, such as high-growing shrubs, occurs in small patch sizes. In this study we investigate to which extent data from the Sentinel-2 and Sentinel-1 missions provide a landscape-level opportunity to upscale tundra vegetation communities and biomass for high-latitude terrestrial environments.
We assessed the applicability of landscape-level remote sensing for the low Arctic Lena Delta region in Northern Yakutia, Siberia, Russia. The Lena Delta is the largest delta in the Arctic and is located North of the treeline and the 10 °C July isotherm at 72° Northern Latitude in the Laptev Sea region. Vegetation and biomass field data from Elementary Sampling Units ESUs (30 m x 30 m plot size) and shrub samples for dendrology were collected during a Russian-German expedition in summer 2018 in the central Lena Delta.
We evaluated circum-Arctic harmonized ESA GlobPermafrost land cover and vegetation height remote sensing products covering subarctic to Arctic land cover types for the central Lena Delta. The products are freely available and published in the PANGAEA data repository under https://doi.org/10.1594/PANGAEA.897916, and https://doi.org/10.1594/PANGAEA.897045.
We also produced a regionally optimized land cover classification for the central Lena Delta based on the in-situ vegetation data and a summer 2018 Sentinel-2 acquisition, optimized for the biomass and wetness regimes, and extended this land cover classification to the full Lena Delta using consistent Google Earth Engine-aggregated Sentinel-2 reflectance covering the summer 2018 period. We also produced biomass maps derived from Sentinel-2 at a pixel size of 20 m, investigating several techniques. The final biomass product for the central Lena Delta shows realistic spatial patterns of biomass distribution, including smaller-scale patterns. However, patches of high shrubs in the tundra landscape could not be spatially resolved by the landscape-level land cover and biomass remote sensing products.
Biomass provides the magnitude of the carbon flux, whereas stand age is indispensable for providing the cycle rate. We found that high-disturbance regimes such as floodplains, valleys and other areas of thermo-erosion are linked to high and rapid above-ground carbon fluxes, compared to low disturbance on Yedoma upland tundra and Holocene terraces, where above-ground carbon fluxes are decades slower and smaller in magnitude.
Earth’s population is still growing. The percentage of population living in urban regions was only 30% in 1950, increasing to 55% in 2018. It is projected that 68% of the population will live in urban areas by 2050. As so many people live in urban environments, the topic of urban climate affecting quality of life and public health is of great importance. Several studies found that urban heat stress negatively affects population living in more urbanized regions.
The surface urban heat island (SUHI) effect occurs when an urban area is warmer than its surroundings. It is usually computed as the difference in temperature between the urban core region and the surrounding rural region. The present work uses Land Surface Temperature (LST) data retrieved from the Meteosat Second Generation geostationary satellite, with a spatial resolution of 3 km at nadir every 15 minutes.
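A minimal sketch of this computation is given below: mean LST over the urban core minus mean LST over a chosen rural land-cover class. The LST scene and the masks are synthetic placeholders for the Meteosat LST and land-cover grids; the loop also illustrates how the choice of rural reference changes the result.

```python
# Minimal sketch of the SUHI intensity computation: urban-core mean LST minus
# rural mean LST for two different rural land-cover classes. Synthetic data.
import numpy as np

rng = np.random.default_rng(4)
lst = rng.normal(300.0, 2.0, (100, 100))             # LST scene (K)

urban_mask = np.zeros(lst.shape, dtype=bool)
urban_mask[40:60, 40:60] = True
rural_cropland = np.zeros(lst.shape, dtype=bool)
rural_cropland[0:20, :] = True
rural_forest = np.zeros(lst.shape, dtype=bool)
rural_forest[80:, :] = True

lst[urban_mask] += 3.0                               # impose an urban warm anomaly for illustration

for name, rural_mask in [("cropland", rural_cropland), ("forest", rural_forest)]:
    suhii = lst[urban_mask].mean() - lst[rural_mask].mean()
    print("SUHII relative to %s: %.2f K" % (name, suhii))
```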
Paris, Madrid and Milan were chosen as case studies to evaluate how the SUHI varies along the day and year and how the rural land cover affects the surface heat island intensity. We found diurnal and seasonal variability of SUHI between cities, as a result of their different climates.
Results also show that computing the SUHI against different rural land covers yields not only different SUHI intensities but also different diurnal and seasonal cycles, owing to the seasonality of the rural land cover. This has consequences when analyzing SUHI trends, because there is substantial land use and land cover change in the regions surrounding cities. It implies that some of the variability and trends in SUHI may be attributed not only to the urban region but also to the rural one.
The time dimension is also a key factor, as SUHI intensity and even signal can change throughout the day. Peak intensity may be reached at different times of day/year and poor temporal resolution may not capture/represent this dynamic behavior.
In summary, our results are twofold: (1) noticing the importance of rural land cover as an equal part in the urban/rural relationship in the SUHI topic and (2) stressing that low temporal resolution data, although useful for its spatial characteristics, only tells half the story when considering the SUHI variability.
Climate change has caused dramatic reductions in Earth’s ice cover, which has in turn affected almost all other elements of the environment including global sea level, ocean currents, marine ecosystems, atmospheric circulation, weather patterns, freshwater resources, and the planetary albedo. Here, we combine Earth Observation data and numerical models to quantify global ice losses over the past three decades across the principal components of Earth’s ice system: Arctic sea ice, Southern Ocean sea ice, Antarctic ice shelves, mountain glaciers, the Greenland ice sheet, and the Antarctic ice sheet. Just over half of the ice loss was from the Northern Hemisphere, and the remainder was from the Southern Hemisphere. The rate of ice loss has risen since the 1990s, owing to increased losses from mountain glaciers, Antarctica, Greenland and from Antarctic ice shelves. During this period, the loss of grounded ice from the Antarctic and Greenland ice sheets and mountain glaciers raised the global sea level by more than 3.5 centimetres. The majority of all ice losses were driven by atmospheric melting (from Arctic sea ice, mountain glaciers, ice shelf calving and ice sheet surface mass balance), with the remaining losses (from ice sheet discharge and ice shelf thinning) being driven by oceanic melting. These data improve knowledge of the state of Earth’s cryosphere, a key climate indicator tracked by the EEA and ECMWF, and can be used to help improve the climate models which support decision making in climate mitigation and adaptation. Earth’s ice is also a major energy sink in the climate system; altogether, these elements of the cryosphere have taken up 3 % of the global energy imbalance. Monitoring Earth’s energy imbalance is fundamental in understanding the evolution of climate change and improving climate syntheses and models, and our improved estimates can contribute towards phase 1 of the UNFCCCs global stocktake required by Article 14 of the Paris Agreement, providing information which can be used in testing the effectiveness of climate mitigation policy.
It is well known that the African rainfall climate is highly variable, both in space and time, with many African societies poorly equipped to manage such variability. Access to long-term and regularly updated rainfall information is therefore essential in both drought and flood monitoring and assessment of long-term changes in rainfall. Since gauge records alone are too sparse and inconsistent over time across many parts of Africa, satellite-based records are the only viable alternative, especially in regions with little or no gauges. The longevity of the Meteosat programme, commencing in the late 1970s and running to the present day, thus provides 40 years of continually updated satellite records for monitoring the current climate and assessing long-term changes in rainfall.
Since the early 1980s, the TAMSAT Group (University of Reading) have been providing locally calibrated, operational rainfall estimates based on Meteosat thermal infra-red imagery for Africa. These rainfall estimates are used in a wide range of applications and sectors, as well as in research. While the essence of the TAMSAT estimation algorithm has changed little in four decades, the TAMSAT Group are continually striving to improve the skill and usability of the rainfall products we create.
In this talk, we will present an overview of the TAMSAT rainfall estimation approach as well as a new robust method for combining contemporaneous rain gauge information with the satellite estimates for improving estimation of rainfall amount. A novel feature of this work includes the estimation of spatially coherent rainfall uncertainty – a quantity which is often neglected in operational products but which can greatly support decision making amongst users, especially during adverse weather events. Such developments in TAMSAT have been developed in collaboration with several African organisations to support climate services in regions extremely vulnerable to climate variability and change. We will also highlight capacity building efforts, supported by the World Meteorological Organisation and leading African organisations responsible for issuing agrometeorological advisories, to help facilitate the uptake of TAMSAT products across Africa.
The German Research Centre for Geosciences (GFZ) maintains the “Gravity Information Service” (GravIS, gravis.gfz-potsdam.de) portal in collaboration with the Technische Universität Dresden and the Alfred-Wegener-Institute (AWI). The essential objective of this portal is the dissemination of user-friendly mass variation data in the Earth system based on observations of the German-US American satellite gravimetry missions GRACE (Gravity Recovery and Climate Experiment, 2002-2017) and its successor GRACE-FO (GRACE-Follow-On, since 2018).
The provided data sets comprise products of mass changes of the ice sheets in Greenland and Antarctica, terrestrial water storage (TWS) variations over the continents, and ocean bottom pressure (OBP) variations from which global mean barystatic sea-level rise can be estimated. All data sets are provided as time series of regular grids, as well as in the form of regional basin averages. The ice-mass change is provided either on a regular 50km by 50km stereographic grid or as basin averages which are accompanied by realistic uncertainties. The gridded continental TWS data, as well as the OBP data, are given on a 1° by 1° grid. For continental TWS data, the user can choose between river discharge basins and segmentation based on climatically similar regions. All regional mean time series of the TWS product are accompanied by realistic uncertainty estimates. The OBP data set is composed of a barystatic sea-level map and a map of the residual ocean circulation which was not reduced by background models in the data processing. These background models are also provided for all three data products.
The data sets of all domains can be interactively displayed at the portal and are freely available for download. This contribution aims to show the features and possibilities of the GravIS portal to researchers without a dedicated geodetic background in the fields of climatology, hydrology, cryosphere, or oceanography. The data provided on the portal will also be used within the GRACE-FO project of the ESA Third Party Mission Program.
The International Soil Moisture Network (ISMN, https://ismn.earth) is a unique, centralized, global and freely available in-situ soil moisture data hosting facility (Dorigo et al., 2021: https://hess.copernicus.org/articles/25/5749/2021/). Initiated in 2009 as a community effort through international cooperation (ESA, GEWEX, GTN-H, WMO, etc.), the ISMN is more than ever an essential means for validating and improving global satellite soil moisture products, land surface, climate, and hydrological models.
By following, building and improving standardized measurement protocols and quality techniques, the network has evolved into a widely used, reliable and consistent source of in-situ data (surface and sub-surface) collected by a myriad of data organizations on a voluntary basis. 72 networks are participating (status November 2021) with more than 2800 stations distributed on a global scale, and the user community is steadily growing, currently numbering about 4000 registered users. Time series with hourly timestamps from 1952 up to near real time are stored in the database and are available for free through the ISMN web portal (https://ismn.earth), including daily near-real-time updates from 7 networks (~1000 stations).
More than 10,000 in-situ soil moisture datasets are available through the web portal, the number of networks and stations covered by the ISMN is still growing, and most datasets already contained in the database are continuously being updated.
The ISMN evolved in the past decade into a platform of benchmark data for several operational services such as ESA CCI Soil Moisture, the Copernicus Climate Change (C3S), the Copernicus Global Land Service (CGLS), the online validation service Quality Assurance for Soil Moisture (QA4SM) and many more applications, services, products and tools. In general, ISMN data is widely used in a variety of scientific fields with hundreds of studies making use of ISMN data (e.g. climate, water, agriculture, disasters, ecosystems, weather, biodiversity, etc.).
The foundation and continuous development of the ISMN were funded by the European Space Agency (formerly through the SMOS and IDEAS+ programs, currently through the QA4EO program). However, it was always clear that financial support from ESA was not feasible on a long-term basis. Therefore, several options for financing the ISMN were explored within the last couple of years together with ESA.
In January 2021, the German Federal Ministry of Transport and Digital Infrastructure (BMVI: https://www.bmvi.de/EN/Home/home.html) agreed to provide continuous long-term funding for ISMN operations. Three full-time positions are financed at the German Federal Institute for Hydrology (BfG: https://www.bafg.de/EN/), as well as two full-time positions at the associated International Centre for Water Resources and Global Change (ICWRGC, https://www.waterandchange.org/en/ - operating under the auspices of UNESCO and WMO). The transfer of ISMN operations from Austria (TU Wien) to Germany started in May 2021 and will be finished by the end of 2022. This 19-month transfer timeframe is co-financed by ESA and the German Ministry to facilitate a sustainable transfer of knowledge and operations.
In this session, we want to introduce the new hosts (BfG and ICWRGC) and look back at the evolution of the ISMN over the past decade (network and dataset updates, quality procedures, literature overview, and current limitations in data availability, functionality and challenges in data usage). Furthermore, we especially want to look ahead and share new possibilities for the ISMN to serve the EO community for a long time to come.
Climate change indicators are designed to support climate policy making and public discussions. They are important for setting, monitoring and evaluating targets and for communicating changes in the investigated phenomenon. Impact indicators highlight how climate change affects certain environmental phenomena; response indicators show how society adapts to climate change. In Germany, the German Environment Agency (Umweltbundesamt) coordinates the German Adaptation Strategy to climate change. This framework comprises around 100 impact and response indicators in six clusters, i.e., health, water, land, infrastructure, economy and spatial planning/civil protection. Indicator assessment on a national scale demands comparable data at the national scale; however, the comparability and even the availability of environmental data are often challenging. Lakes, for instance, are considered sentinels of climate change, but nation-wide data for consistent and long time series are rare.
Remote sensing of lakes has experienced significant developments during the last decade. Thus, the next report of the German Adaptation Strategy aims to include remote sensing data and methods for the first time. The focus lies on four impact indicators in lakes, namely "presence of cyanobacteria" (cluster health), "beginning of spring phytoplankton bloom", "lake water temperature" and "ice cover" (cluster water). The aim of our project is to develop an operational, retrospective processing routine based on remote sensing data for these four climate change indicators. We collected a large in-situ database for 25 lakes in Germany, for which we tested and evaluated potentially suitable algorithms and sensors. We also discussed the requirements on sensors and algorithms with experts and end-users. Then, we developed different approaches to create and visualise the indicators, i.e., to obtain an easy-to-grasp figure from the remote sensing data. The results are briefly summarised below:
“Presence of cyanobacteria”:
ENVISAT MERIS and Sentinel-3 OLCI data form the data basis; Sentinel-2 is in preparation. The Maximum Peak Height algorithm is used to determine the presence or absence of cyanobacteria. To aggregate at lake level, we count the days with cyanobacteria presence during the season (March to October) and summer (June to September). As the basis for the indicator, we set the number of days with cyanobacteria presence in relation to the number of valid image acquisitions.
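The lake-level aggregation described above can be sketched as follows: count the days with detected presence and relate them to the number of valid acquisitions within the season. The flag series below is synthetic and purely illustrative.

```python
# Sketch of the indicator aggregation: days with cyanobacteria presence relative to
# the number of valid acquisitions in the season (March to October). Synthetic flags.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
dates = pd.date_range("2018-03-01", "2018-10-31", freq="D")
# 1 = presence, 0 = absence, NaN = no valid observation (e.g. cloud)
flags = pd.Series(rng.choice([1.0, 0.0, np.nan], size=len(dates), p=[0.15, 0.45, 0.4]), index=dates)

valid = flags.notna().sum()
presence_days = (flags == 1.0).sum()
indicator = presence_days / valid
print("Cyanobacteria presence on %d of %d valid days (ratio %.2f)" % (presence_days, valid, indicator))
```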
“Beginning of spring phytoplankton bloom”:
ENVISAT MERIS, Sentinel-3 OLCI and Sentinel-2 MSI data form the data basis. We calculate chlorophyll-a concentrations using C2X-COMPLEX (Sentinel-2 MSI), a merged algorithm derived from Maximum Peak Height following the Pitarch calibration (Sentinel-3 OLCI) and C2RCC (ENVISAT MERIS) for all suitable imagery acquired from March to May. The 90th percentile is used to aggregate at lake level in order to detect spatially variable spring blooms. From the time series, we extract the day of year and week of year at which the chlorophyll-a concentration exceeds the 70th percentile for the first time during spring. This date is then considered the beginning of the spring bloom.
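The onset extraction can be sketched as follows: the first date on which the (spatially aggregated) chlorophyll-a series exceeds its 70th percentile during spring. The series below is synthetic; the 90th-percentile spatial aggregation step is not shown.

```python
# Sketch of the bloom-onset extraction: first exceedance of the 70th percentile
# of the spring chlorophyll-a time series. Synthetic, illustrative data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
dates = pd.date_range("2019-03-01", "2019-05-31", freq="3D")        # valid acquisition dates
signal = 5 + 20 * np.exp(-((np.arange(len(dates)) - 18) ** 2) / 30.0)
chl = pd.Series(signal + rng.normal(0, 1, len(dates)), index=dates)  # chlorophyll-a (mg/m3)

threshold = chl.quantile(0.70)
onset = chl[chl > threshold].index.min()
print("Spring bloom onset: day of year %d, week %d" % (onset.dayofyear, onset.isocalendar()[1]))
```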
“Lake water temperature”:
Landsat 5 TM, Landsat 7 ETM+ and Landsat 8 TIRS thermal data form the data basis. We selected the mono-window algorithm by Sobrino/Jimenez-Munoz, combined with ERA5-Land data, to retrieve lake surface water temperature. An investigation of Landsat 8 Collection-2 performance is ongoing. The subsequent data analysis homogenises the results to Landsat 8 and filters outliers. The median is used to aggregate to lake level; the lake-level values are then temporally averaged to monthly data. We interpolate missing monthly data if gaps do not exceed one month. Gaps occur throughout the year due to the low revisit time of Landsat and cloud coverage. Yearly seasonal (March to October) and summer (June to August) averages are the basis for the indicator.
“Ice cover”:
Landsat 8 OLI, Sentinel-2 MSI and Sentinel-1 data form the data basis. We developed sensor-specific random forest classification models to separate ice and water and to mask out clouds (optical imagery only). To aggregate to lake level, we determine the days when ice covers more than 80 % of the lake. We then count the number of ice days and calculate the ratio of ice days to the number of valid image acquisitions.
Based on the above-mentioned approaches and discussions with stakeholders, we developed a framework to evaluate the data quality for the indicators. This framework provides spatial and temporal measures of data coverage for assessing the representativeness of a value to be included in the long-term trends. Such quality measures support the calculation of reliable trends. Currently, we are transferring the developed approaches into a retrospective, operational service for the German Environment Agency using the cloud-processing infrastructure of CODE-DE (National Collaborative Ground Segment). In a next step, we will calculate trends and examine whether similar patterns can be derived among groups of lakes or on a national level.
Our presentation will focus on the transfer of pixel-based information into a climate change indicator, the experienced challenges, but also the new opportunities.
An accurate monitoring of the snow-albedo feedback is essential for understanding the effects of climate change in snow-covered regions. The IPCC's Sixth Assessment Report (AR6) established that a surface albedo feedback in the range of +0.35 [0.10 to 0.60] W m-2 °C-1 is very likely [1]. The main component of this feedback is the so-called snow/ice-albedo feedback, which until AR5 was analyzed independently; AR6 also included temperature-induced albedo changes over snow-free surfaces. The snow/ice-albedo feedback has generally been monitored with global climate models (GCMs). The increasing availability of satellite observations provides new opportunities to reduce the uncertainty in the snow-albedo feedback estimates, and also to improve its understanding by separating the contribution of ice and snow, and within snow, by separating the contribution of snow cover retreat and snow metamorphosis [2]. Indeed, observations are being increasingly used either to constrain GCMs [1], or to estimate the snow-albedo feedback directly from multi-decadal observations [3].
Two types of observational products are currently being used: satellite-based products and global reanalyses. However, both face stability challenges that need to be quantified to understand the uncertainty of the snow-albedo feedback estimates obtained. Satellite products concatenate different sensors (e.g., C3S albedo) or different versions of the same sensor (e.g., AVHRR, VGT), which can introduce discontinuities during the transition periods. For each sensor, orbital drifts and instrument degradation are also a problem. Additional instabilities are added by the retrieval algorithm and the snow mask used. Besides, the uncertainty of albedo retrievals increases over snow due to the highly anisotropic reflectance of snow and the generally low solar angles during snow albedo retrievals.
Stability issues in reanalysis are related to the addition of new observations (satellite or ground) into the data assimilation system. Reanalyses face a trade-off between accuracy and stability that depends on the weight they give to new observations. NWP initialization applications require more accurate estimations that are obtained by weighting more recent observations, which generally introduces temporal instabilities in the long-term. By contrast, climate applications prefer stability over accuracy. Therefore, instabilities of different degrees can be present in reanalysis products based on the approach undertaken.
Our goal is to evaluate whether the existing satellite and reanalysis products are fit for monitoring the snow-albedo feedback. The satellite products evaluated are MCD43C3 v6.1 (2000-present), CLARA-A2.1 (1982-present), GLAS-AVHRR v4 (1982-present), and C3S v2 (1982-present). The reanalyses evaluated are ERA5 (1950-present), ERA5-Land (1950-present), MERRA-2 (1982-present), and JRA-55 (1958-present). First, we evaluate whether snow albedo values and trends from the different products are consistent globally. Then, we quantify how instabilities and inconsistencies in multi-decadal albedo datasets propagate to the snow-albedo feedback estimates. To do so, we generate an independent estimate of the snow-albedo feedback from each product using a common radiative kernel [4]. Our final aim is to determine whether the existing products are accurate and stable enough, and to identify aspects that can be improved to reduce the uncertainty of snow-albedo feedback estimates.
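The kernel approach can be sketched as follows: the albedo change is multiplied by a radiative kernel (the sensitivity of top-of-atmosphere flux to surface albedo) and normalised by the global temperature change. All fields and values below are synthetic placeholders, not data from the cited products or the kernel of [4].

```python
# Sketch of a radiative-kernel estimate of the snow-albedo feedback. Synthetic fields.
import numpy as np

rng = np.random.default_rng(7)
nlat, nlon = 90, 180
lat = np.linspace(-89, 89, nlat)
weights = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))
weights /= weights.sum()                                  # area weights summing to 1

kernel = -rng.uniform(100.0, 180.0, (nlat, nlon))         # dF/d(albedo), W m-2 per unit albedo (placeholder)
dalbedo = rng.normal(-0.0005, 0.0005, (nlat, nlon))       # albedo change per decade (placeholder)
dT_global = 0.2                                           # global-mean warming per decade (K)

dF_global = np.sum(weights * kernel * dalbedo)            # radiative effect, W m-2 per decade
feedback = dF_global / dT_global                          # W m-2 K-1
print("Snow-albedo feedback estimate: %.2f W m-2 K-1" % feedback)
```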
Bibliography
[1] IPCC, Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte V, Zhai P, Pirani A, Connors SL, Péan C, Berger S, Caud N, Chen Y, Goldfarb L, Gomis MI, Huang M, Leitzell K, Lonnoy E, Matthews JBR, Maycock TK, Waterfield T, Yelekçi O, Yu R, Zhou B (eds.)]. Cambridge University Press. In Press.
[2] Wegmann M, Dutra E, Jacobi HW, Zolina O. Spring snow albedo feedback over northern Eurasia: Comparing in situ measurements with reanalysis products. The Cryosphere 12, 1887-1898, 2018
[3] Xiao L, Che T, Chen L, Xie H, Dai L. Quantifying snow albedo radiative forcing and its feedback during 2003–2016. Remote Sensing 9, 883, 2017
[4] Pithan F, Mauritsen T. Arctic amplification dominated by temperature feedbacks in contemporary climate models. Nature Geoscience 7, 181-184, 2014.
Cities are warmer than their surroundings. This phenomenon is known as the Urban Heat Island (UHI) and is one of the clearest examples of human-induced climate modification. Surface UHIs (SUHI) result from modifications of the surface energy balance at urban facets, canyons, and neighborhoods. The difference between urban and rural Land Surface Temperatures (LST), known as the SUHI Intensity (SUHII), varies rapidly in space and time as the surface conditions, the weather, and the incoming radiation change, and is generally strongest during daytime and summertime. In this work we revisit the topic of SUHII seasonality and how it differs across climates. Our thesis is that aggregating global SUHII data without considering the biome (i.e., vegetation zone) of each city can lead to erroneous conclusions and estimates that fail to reflect the actual SUHII characteristics. This is because SUHII is a function of both urban and rural features, and the phenology of the rural surroundings can differ considerably between cities even in the same climate zone. To test this hypothesis, we use 18 years (2000-2018) of global land cover and MODIS LST data from the European Space Agency's Climate Change Initiative (ESA-CCI). Our analysis covers 1588 cities in 12 tropical, dry, temperate, and continental Köppen-Geiger sub-classes. This classification scheme empirically maps Earth into 5 main and 30 sub-classes by assuming that vegetation zones reflect climatic boundaries. To analyze our results, we calculate, for each climate class, the seasonal variation of SUHII and rural LST (at monthly resolution) by averaging the corresponding city data (we do this separately for daytime and nighttime). Our results reveal that the seasonality of tropical, dry, temperate, and continental SUHIs differs considerably during daytime and that it is more pronounced in temperate and continental climates. They also show that the seasonality of the dry and temperate sub-classes exhibits considerable intra-class variation. In particular, the month when the daytime SUHII is strongest can differ between temperate sub-classes by as much as 4 months (e.g., for the hot-Mediterranean sub-class it occurs in May and for the dry-winter subtropical highlands sub-class in September), while the corresponding SUHII magnitude can differ by as much as 2.5 K. The strong intra-class variation of temperate climates is also evident in the corresponding hysteresis loops, where almost every sub-class exhibits a unique looping pattern. These findings support our thesis and suggest that global SUHII investigations should consider, in addition to climate, the distribution of biomes when aggregating their results. Our results provide the most complete typology of SUHII hysteresis loops to date and an in-depth description of how SUHIIs vary within the year across climates.
The present work shows the potential of satellite thermal observations to estimate Earth’s global surface temperature trends and, therefore, their applicability to climate change studies. Present satellites allow estimation of surface temperature with full coverage of our planet at a sub-daily revisit frequency and kilometric resolution. In this work, a simple methodology is presented that allows estimating the surface temperature of Planet Earth with MODIS Terra and Aqua land and sea surface temperature products, as if the whole planet were reduced to a single pixel. The results corroborate the temperature anomalies retrieved from climate models and show a rate of warming higher than 0.2 °C per decade. In addition, Earth’s surface temperature is analysed in more detail over the period 2003-2021 by dividing the globe into the northern (HN) and southern (HS) hemispheres, and each hemisphere into three additional zones: the low latitudes from the Equator to the Tropic of Cancer in the HN and the Tropic of Capricorn in the HS (0-23.5°), the mid latitudes from the Tropics to the Arctic Circle in the HN and the Antarctic Circle in the HS (23.5°-66.5°), and the high latitudes from the Arctic and Antarctic Circles to the Poles (66.5°-90°).
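The single-pixel view of the planet can be thought of as an area-weighted average of the gridded surface temperature fields. The sketch below illustrates the idea with cosine-of-latitude weights and the latitude bands listed above; it uses a dummy field rather than the actual MODIS products and is a minimal sketch, not the authors' method.

```python
import numpy as np

# Grid-cell centre coordinates of a regular 0.5-degree grid (illustrative).
lat = np.arange(-89.75, 90.0, 0.5)
lon = np.arange(-179.75, 180.0, 0.5)
# Dummy surface temperature field in K (warmer at the Equator).
lst = 288.0 + 30.0 * np.cos(np.deg2rad(lat))[:, None] * np.ones((1, lon.size))

# Cosine of latitude is proportional to grid-cell area on a regular grid.
w = np.cos(np.deg2rad(lat))[:, None] * np.ones((1, lon.size))
lat2d = np.repeat(lat[:, None], lon.size, axis=1)

def weighted_mean(field, weights, mask):
    """Area-weighted mean over the cells selected by `mask`, ignoring NaNs."""
    valid = mask & np.isfinite(field)
    return np.sum(field[valid] * weights[valid]) / np.sum(weights[valid])

zones = {
    "globe":   np.abs(lat2d) <= 90.0,
    "HN low":  (lat2d >= 0) & (lat2d < 23.5),
    "HN mid":  (lat2d >= 23.5) & (lat2d < 66.5),
    "HN high": lat2d >= 66.5,
    "HS low":  (lat2d < 0) & (lat2d > -23.5),
    "HS mid":  (lat2d <= -23.5) & (lat2d > -66.5),
    "HS high": lat2d <= -66.5,
}
for name, mask in zones.items():
    print(f"{name}: {weighted_mean(lst, w, mask):.2f} K")
```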
Lake ice cover (LIC), a thematic variable under Lakes as an Essential Climate Variable (ECV) that is a robust indicator of climate change and plays an important role in lake-atmosphere interactions at northern latitudes (i.e. heat, moisture, and gas exchanges), refers to the area (or extent) of a lake covered by ice. Ice dates and ice cover duration at the pixel scale (ice-on and ice-off) and lake-wide scale (complete freeze-over (CFO) and water clear of ice (WCI)) can be derived from lake ice cover data (Duguay et al. 2015). Determination of ice onset (date of the first pixel covered by ice), CFO, melt onset (date of the first pixel with open water), and WCI are of most relevance to capture important ice events during the freeze-up and break-up periods. Duration of freeze-up and break-up periods and duration of ice cover over a full ice season can be determined from these dates. The generation of a LIC product from satellite observations requires the implementation of a retrieval algorithm that can correctly label pixels as either ice (snow-free and snow-covered), open water, or cloud. The LIC product v2.0 generated for Lakes_cci (https://climate.esa.int/en/projects/lakes/) uses MODIS Terra/Aqua data to provide the most consistent and longest daily historical record globally to date (2000-2020). The new product provides three bands: Band 1 - lake ice cover flag (lake forms or does not form ice); Band 2 - lake ice cover class (open water, ice, cloud, and bad); and Band 3 - lake ice cover uncertainty (% accuracy for each of open water, ice and cloud classes).
In the first step of production, the Canadian Lake Ice Model (CLIMo) was applied to help determine which lakes of the Lakes_cci harmonized product (2024 lakes in total), which includes four other variables (water level, water extent, surface water temperature, and water-leaving reflectance), could have formed ice or remained ice-free at any time over the 2000-2020 period. This step corrects false detections of ice in summer over dry lakebeds and reduces the computational cost of production. CLIMo (Duguay et al. 2003) is a one-dimensional thermodynamic model capable of simulating ice phenology events, ice thickness and temperature, and all components of the energy/radiation balance equations during the ice and open-water seasons at a daily timestep. Input data to drive CLIMo include mean daily air temperature (°C), wind speed (m s-1), relative humidity (%), snowfall (or depth) (m), and cloud cover (in tenths). Here, European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis hourly data on single levels (0.25-degree grid) were used to generate the inputs required for CLIMo simulations for each of the 2024 lakes. Lake ice depth data provided by ERA5 were also used to check for the possible formation of ice on any of the lakes. Ice cover was deemed possible to have formed on a lake if ice depth was determined to have reached a thickness greater than 0.001 m on any day in either CLIMo or ERA5. Additionally, as a third check, a number of lakes (largely located at the southern limit of where ice could potentially form during a cold winter in the Northern Hemisphere and in mountainous regions of both the Northern and Southern hemispheres) were inspected manually through interpretation of MODIS RGB images to determine whether any of these lakes had formed ice between 2000 and 2020. As a result of this process, reflected in the lake ice cover flag variable of the LIC product v2.0, 1391 of the 2024 lakes were flagged as forming an ice cover and 633 as not forming any ice over the 2000-2020 period. Once flagged, only lakes determined to form ice were passed to the main processing chain for lake ice classification from MODIS data.
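The lake ice flag logic described above reduces, in essence, to a threshold test on the simulated or reanalysed ice thickness. The following minimal sketch (illustrative only, with hypothetical inputs) applies the 0.001 m criterion to daily thickness series from CLIMo and ERA5.

```python
import numpy as np

ICE_THRESHOLD_M = 0.001  # minimum daily ice thickness considered as "ice formed"

def lake_forms_ice(climo_thickness_m, era5_thickness_m):
    """Return True if either the CLIMo or the ERA5 daily ice thickness ever
    exceeds the threshold over the record (illustrative logic only)."""
    climo = np.asarray(climo_thickness_m, dtype=float)
    era5 = np.asarray(era5_thickness_m, dtype=float)
    return bool(np.nanmax(climo) > ICE_THRESHOLD_M or
                np.nanmax(era5) > ICE_THRESHOLD_M)

# Example: a lake with a thin, short-lived ice cover seen only in ERA5.
print(lake_forms_ice([0.0] * 365, [0.0] * 300 + [0.002] * 65))  # True
```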
MODIS TOA reflectance bands and the solar zenith angle (SZA) band are used for feature retrieval (i.e. for labeling pixels as water, ice, or cloud) (Wu et al. 2021). The reflectance bands are MOD02QKM at 250 m (band 1: 0.645 µm and band 2: 0.858 µm) and MOD02HKM at 500 m (band 3: 0.469 µm; band 4: 0.555 µm; band 5: 1.240 µm; band 6: 1.640 µm; band 7: 2.130 µm) resolutions. Prior to retrieval, pixels of interest are identified as “good” or “bad” using quality bands from the original MODIS TOA reflectance product; pixels with an SZA greater than 85 degrees are identified as “bad”. Pixels of interest are classified and labelled as either cloud, ice, or water using a random forest algorithm (Wu et al. 2021). Labelled pixels are resampled to the output grid. The processing chain has been revised for Lakes_cci to generate the output grid based on the specifications of the harmonized product (1/120th degree latitude/longitude; ca. 1 km). Aggregation is performed by taking a majority vote between ice and water, with ties broken by selecting water. If there are no ice or water pixels, the cell is labelled as cloud if there are any cloud pixels; otherwise, the output cell is labelled as “bad”. The retrieved labels are provided in the lake ice cover class variable.
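The aggregation rule can be summarised in a few lines of code. The sketch below is an illustrative reimplementation of the majority-vote logic; the class codes are assumptions, not the product's actual encoding.

```python
import numpy as np

# Class codes assumed for this illustration only.
WATER, ICE, CLOUD, BAD = 0, 1, 2, 3

def aggregate_cell(labels):
    """Aggregate the 250/500 m labels falling into one ~1 km output cell.

    Majority vote between ice and water (ties go to water); if neither is
    present, the cell is cloud when any cloud pixel exists, otherwise bad.
    """
    labels = np.asarray(labels)
    n_ice = np.count_nonzero(labels == ICE)
    n_water = np.count_nonzero(labels == WATER)
    if n_ice + n_water > 0:
        return ICE if n_ice > n_water else WATER   # tie -> water
    if np.count_nonzero(labels == CLOUD) > 0:
        return CLOUD
    return BAD

print(aggregate_cell([ICE, ICE, WATER, CLOUD]))    # -> ICE
print(aggregate_cell([ICE, WATER, CLOUD, CLOUD]))  # -> WATER (tie)
print(aggregate_cell([BAD, BAD]))                  # -> BAD
```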
Validation of the LIC v2.0 product has been performed through the computation of confusion matrices built on independent statistical validation. The reference data for validation were collected for water, ice, and cloud as AOIs from the visual interpretation of MOD02/MYD02 false color composite images (R: band 2, G: band 2, B: band 1) at 250 m spatial resolution. A total of 10,075,081 pixels taken from 229 MOD02 swaths over Great Slave Lake and Lake Onega were used to assess the classification of the LIC product generated from MODIS Terra. There is no notable difference in the accuracy of the product between the break-up (98.14% overall accuracy) and freeze-up (96.83% overall accuracy) periods. Additionally, 1,665,188 samples collected from MYD02 false color composite images were used for the validation of the LIC product produced from MODIS Aqua. The overall accuracy of 97.68% reached with Aqua data is comparable to that obtained with MODIS Terra data. Further evaluation of the Lakes_cci LIC v2.0 product and its comparison with other products is planned, with input from the user community.
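A confusion-matrix based assessment of the kind described above can be reproduced, for example, with scikit-learn. The snippet below is a toy illustration with invented labels, not the validation code used for the product.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Reference labels from visually interpreted AOIs and the corresponding
# retrieved labels (values are illustrative only).
classes = ["water", "ice", "cloud"]
reference = np.array(["water", "ice", "ice", "cloud", "water", "ice"])
retrieved = np.array(["water", "ice", "water", "cloud", "water", "ice"])

cm = confusion_matrix(reference, retrieved, labels=classes)
overall_accuracy = accuracy_score(reference, retrieved)

print(cm)
print(f"overall accuracy: {overall_accuracy:.2%}")
```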
References
Duguay, C.R., Bernier, M., Gauthier, Y. & Kouraev, A. (2015). Remote sensing of lake and river ice. In Remote Sensing of the Cryosphere, Edited by M. Tedesco. Wiley-Blackwell (Oxford, UK), 273-306.
Duguay, C.R., Flato, G.M., Jeffries, M.O., Ménard, P., Morris, K. & Rouse, W.R. (2003). Ice cover variability on shallow lakes at high latitudes: Model simulations and observations. Hydrological Processes, 17(17), 3465-3483.
Wu, Y., Duguay, C.R. & Xu, L. (2021). Assessment of machine learning classifiers for global lake ice cover mapping from MODIS TOA reflectance data. Remote Sensing of Environment, 253, 112206, https://doi.org/10.1016/j.rse.2020.112206.
The Greenland Ice Sheet has had a negative mass balance over at least the last two decades, during which there has been a well-documented increase in the retreat of the ice sheet. Increased dynamic thinning and lower surface mass balance are roughly equally important mechanisms behind the continuous reduction of the Greenland Ice Sheet, with the latter largely driven by enhanced melt and run-off rates. In the continuous effort to better simulate the evolution of the Greenland Ice Sheet under different climate change scenarios, models calculate the surface energy budget and convert this to ice surface temperature (IST) in order to calculate melt and run-off. Accurately characterising the ice surface temperature is essential, as it regulates surface melt and run-off through various mechanisms.
Surface temperature monitoring over the polar regions is impeded by harsh environmental conditions, making in situ monitoring challenging and observations scarce. Space-borne retrievals of ice surface temperature are challenging due to complications from persistent cloud cover, large daily temperature variations and the lack of high-quality in situ observations for validation. Nonetheless, a continuous effort to calibrate and harmonise the extended archive of surface temperatures from various sensors has now resulted in comprehensive IST datasets spanning nearly four decades. A significant part of these datasets is available at satellite processing levels L2 (swath) and L3 (gridded on a regular grid), yet with gaps due to cloud cover. Optimally interpolated products offer gap-free fields, typically on a daily basis, and while there is a suite of global-coverage datasets, few have been developed specifically for the Arctic region.
This study reports on a user case study (UCS) conducted within the ESA CCI LST project. The aim of the UCS was to use the L2 ESA CCI LST products, along with the L2 Arctic and Antarctic Ice Surface Temperatures from thermal Infrared satellite sensors (AASTI) v2 dataset, to develop an L4 optimally interpolated, multi-sensor, gap-free surface temperature field for the Greenland Ice Sheet. The L4 product was produced daily for the year 2012 with a spatial resolution of 0.01 degrees latitude and 0.02 degrees longitude. Prior to the generation of the gap-free daily fields, the upstream input data were inter-compared and a cold bias in the LST CCI MODIS retrievals was identified and corrected against the AASTI dataset. All L2 input data along with the derived product were validated using observations from the PROMICE automated weather stations (AWS) on the Greenland Ice Sheet as well as the IceBridge flight campaigns. L2 AASTI and the L4 OI field shared similar bias and standard deviation values, while MODIS demonstrated a cold bias. The L4 OI fields were used to examine the monthly and seasonal variability of IST during 2012, when a significant melt event occurred. Mean surface temperature for July was around zero for the largest part of the Greenland Ice Sheet, based on the aggregation of 200 to 700 observations depending on the region. Melt days, defined as days when the IST was -1 °C or higher, ranged between 5 and 10 for the central part of the Greenland Ice Sheet and exceeded 30 for the middle and lower zones in the periphery of the ice sheet. The L4 OI product was assimilated into a surface mass balance (SMB) model of the Greenland Ice Sheet to examine the impact of the multi-sensor, gap-free dataset on modelled snowpack properties that account for important effects including refreezing and retention of liquid water for the test year of 2012.
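As a simple illustration of the melt-day metric used above (days with IST of -1 °C or higher), the following sketch counts melt days per grid cell from a stack of daily gap-free IST fields; the data are random placeholders, not the L4 product.

```python
import numpy as np

MELT_THRESHOLD_C = -1.0  # a day counts as a melt day if IST >= -1 °C

def count_melt_days(daily_ist_c):
    """Count melt days per grid cell from a (day, lat, lon) stack of daily
    IST fields in °C (illustrative, not the project code)."""
    ist = np.asarray(daily_ist_c, dtype=float)
    return np.sum(ist >= MELT_THRESHOLD_C, axis=0)

# Dummy "season" of 30 daily fields on a 2 x 2 grid.
rng = np.random.default_rng(0)
fields = rng.uniform(-15.0, 2.0, size=(30, 2, 2))
print(count_melt_days(fields))
```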
The surface soil moisture (SM) state impacts the sphere of human-nature interaction at different levels. It contributes to changing the frequency and extent of extreme atmospheric events such as heatwaves, and it affects the state of the ecosystems on which anthropogenic activities depend. SM is therefore recognized as an Essential Climate Variable (ECV). Its monitoring at scales from multi-decadal to near-real time (NRT) benefits fields as diverse as agricultural crop yield forecasting, wildfire prediction, and drought and flood risk management.
The European Commission’s Copernicus Climate Change Service (C3S) includes a soil moisture data set that is regularly updated to support timely decision making. The C3S SM product is made freely available through the Copernicus Climate Data Store with global coverage at daily, 10-daily and monthly aggregation levels. It integrates multiple NRT data streams for this purpose: the Land Parameter Retrieval Model (LPRM; Owe et al., 2001) is used to derive SM from operational satellite radiometers (AMSR2, SMAP, SMOS and GPM), while EUMETSAT H-SAF produces a scatterometer-based surface soil moisture product from the ASCAT sensors (on board Metop-A/B/C) with a short delay (H-SAF, 2019). Using a modified version of the ESA CCI SM merging algorithm (Gruber et al., 2019), C3S SM can therefore provide an ACTIVE (scatterometer), PASSIVE (radiometer) and COMBINED product with a short delay of 10-20 days. The C3S SM algorithm is updated on an annual basis with the latest scientific improvements from ESA CCI SM. Products are validated against in-situ measurements from the International Soil Moisture Network (ISMN; Dorigo et al., 2021) and reanalysis reference data using the QA4SM online validation service. Assessment reports are distributed with the data sets.
Several derived services can greatly profit from the use of C3S SM due to its short update delay. One outstanding example is the detection and estimation of precipitation amounts at the regional scale as performed in the SM2RAIN project (led by Italy’s IRPI-CNR institute), with applications in drought and flood analysis and management. Similarly, the impact of climatic extremes on food security can be mitigated using the scientific knowledge basis provided by C3S SM, as demonstrated in the EarthFoodSecurity service. This presentation will cover the climate service provided with C3S SM, including the input data streams, the processing and distribution of the products and their quality assessment; the impact and external applications of the service will also be covered.
The development of the ESA CCI products has been supported by ESA’s Climate Change Initiative for Soil Moisture (Contract No. 4000104814/11/I-NB and 4000112226/14/I-NB) and the European Union’s FP7 EartH2Observe “Global Earth Observation for Integrated Water Resource Assessment” project (grant agreement number 603608). Funded by the Copernicus Climate Change Service implemented by ECMWF through the C3S 312a/b Lot 7/4 Soil Moisture service.
References
Dorigo, W., Himmelbauer, I., Aberer, D., Schremmer, L., Petrakovic, I., Zappa, L., ... & Sabia, R. (2021). The International Soil Moisture Network: serving Earth system science for over a decade. Hydrol. Earth Syst. Sci., 25, 5749–5804, https://doi.org/10.5194/hess-25-5749-2021.
Gruber, A., Scanlon, T., van der Schalie, R., Wagner, W., & Dorigo, W. (2019). Evolution of the ESA CCI Soil Moisture climate data records and their underlying merging methodology. Earth System Science Data, 11(2), 717-739.
H-SAF (2019) ASCAT Surface Soil Moisture Climate Data Record v5 12.5 km sampling - Metop (H115), EUMETSAT SAF on Support to Operational Hydrology and Water Management, DOI: 10.15770/EUM_SAF_H_0006.
Owe, M., de Jeu, R., & Walker, J. (2001). A methodology for surface soil moisture and vegetation optical depth retrieval using the microwave polarization difference index. IEEE Transactions on Geoscience and Remote Sensing, 39(8), 1643-1654.
The Microwave Radiometer (MWR) represents a series of nadir-viewing instruments whose main purpose is to provide the information required to correct ocean altimeter observations for the highly variable effects of atmospheric water vapour (the ‘wet tropospheric correction’, WTC). MWR instruments have been flown onboard the ERS-1 (1991-2000), ERS-2 (1995-2011), and Envisat (2002-2012) platforms and are now flown again onboard the Sentinel-3 series of satellites (S3-A: 2016 - ongoing, S3-B: 2018 - ongoing).
The MWR instrument also allows for an accurate determination of the atmospheric total column water vapour (TCWV), under clear and cloudy sky conditions, during both day and night.
In our presentation, we report on recent activities to derive a consistent high-quality long-term TCWV and WTC dataset from MWR observations. A novel bias correction method is applied to create bias-free cross-instrument brightness temperature time series as well as the corresponding TCWV values using a 1D-VAR approach.
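A 1D-Var retrieval of this kind minimises a cost function that combines a prior state with the observed brightness temperatures. The toy Python sketch below illustrates the principle with an invented linear forward model and made-up covariances; it is a minimal sketch of the technique, not the operational MWR processor.

```python
import numpy as np
from scipy.optimize import minimize

def forward_model(x):
    """Invented linear forward model mapping a state (TCWV, wind speed)
    to two channel brightness temperatures (K); illustrative only."""
    tcwv, wind = x
    return np.array([150.0 + 2.0 * tcwv + 0.5 * wind,
                     160.0 + 1.2 * tcwv + 1.5 * wind])

xb = np.array([25.0, 7.0])              # prior (background) state
B = np.diag([25.0, 4.0])                # prior error covariance
R = np.diag([0.5 ** 2, 0.5 ** 2])       # observation error covariance
y = forward_model(np.array([30.0, 6.0])) + np.array([0.3, -0.2])  # "observed" BTs

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    """Standard 1D-Var cost: background term plus observation term."""
    dx = x - xb
    dy = y - forward_model(x)
    return 0.5 * dx @ Binv @ dx + 0.5 * dy @ Rinv @ dy

result = minimize(cost, xb)
print("retrieved state (TCWV, wind):", result.x)
```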
The aim of these activities is to create a TCWV and WTC data record that covers the entire 30+ year period from 1991 to 2021 (except for the four-year data gap between Envisat and S3-A).
Aside from its immediate contribution to altimetry, MWR-derived TCWV retrievals have the potential to play an important role in climatology and the validation of other TCWV retrievals.
The Copernicus Atmosphere Monitoring and Climate Change Services (CAMS and C3S, respectively), two of the six core Services of the Copernicus programme, entered an exciting new phase with the signing of a Contribution Agreement between ECMWF and the European Commission in July 2021.
Both Services are fully operational and routinely deliver a wide variety of environmental products, based on Sentinel and other satellite data, in-situ observations and modelling information. These data and products are accessed by hundreds of thousands of users.
A unique and strong point of the ECMWF Copernicus Services is their focus on delivering operationally “authoritative” data via their Copernicus data stores. C3S provides authoritative information about the past, present and future climate, as well as tools to enable climate change mitigation and adaptation strategies by policy makers and businesses, while CAMS delivers consistent and quality-controlled information related to air pollution and health, solar energy, greenhouse gases and climate forcing, everywhere in the world. A prime user of both Services is the European Commission itself, and CAMS and C3S strive to support policy makers and public authorities by providing the environmental information they need to inform their policies and legislation, which becomes critically important in view of following up on the Paris Agreement and supporting the UN Sustainable Development Goals, the Sendai Framework for Disaster Risk Reduction and the Green Deal, to name a few.
This presentation will provide an overview of the state of play of both Services and their foreseen evolution over the next seven years. We will particularly emphasize the development of the new anthropogenic CO2 emissions Monitoring and Verification Support Capacity (CO2MVS), which will combine satellite observations with modelling information to enable users to precisely pinpoint which components of emissions result from human activity.
Today, information to support carbon emission control and carbon assimilation by forests is of very variable quality. The information sources are diverse: different types of field data, aerial photography, laser scanning data, and satellite imagery. This information is used as input for calculation models with which it is decided whether a forest is a carbon sink or an emission source. The results can be further used to value the carbon on the growing voluntary carbon market.
In the future, it will be ever more important that forest owners, governments, academia, investors, and organizers of the voluntary carbon market can base their decisions on information that is as accurate, reliable and comparable as possible and that is easily accessible.
In the VTT-led Horizon 2020 Innovation Action project Forest Flux, a service was developed to offer reliable and comparable information on forest resources and forest carbon. The Forest Flux cloud service on the Forestry Thematic Exploitation Platform F-TEP includes a seamless service chain based on field observations and satellite imagery. It produces estimates of present forest resources and carbon assimilation for a given area, as well as their future forecasts. The forecasts can be computed by applying different climate scenarios. It is, to our knowledge, the first of its kind globally.
The main satellite data source was Sentinel-2 of the Copernicus program. Additional data sources included very high-resolution optical imagery and airborne laser scanning (ALS) data. Ground reference data were provided by the users or were acquired from open sources.
The services were offered to nine users in Finland, Germany, Portugal, Romania, Paraguay, and Madagascar, located in the boreal, temperate, and tropical vegetation zones. The user types included private and governmental large forest owners and managers, forest industries, associations of forest owners, and a development aid organization.
The users could select their desired map products from a portfolio of 51 alternatives. These included natural and color infrared image maps, a forest cover map, nine traditional forest structural variables, site fertility type, three change map types, five forest fragmentation variables plus five variables for their changes, four biomass variables, nine carbon flux variables plus nine variables indicating their change, and eight variables to forecast biomass and carbon assimilation. In addition, statistical information on the carbon balance of an organization was computed. Inputs for the organizational carbon balance were, in addition to the satellite-image-based carbon assimilation products, user-provided emissions from silvicultural measures, harvesting, and transportation.
The main method for satellite image analysis was the in-house probability software, whose benefit is its adaptability to reference data of varying quality and quantity, because the models can be checked and modified manually (Häme et al., 2001, 2013). For the mapping of change, another in-house tool, Autochange, was used (Häme et al., 2020).
The process model PREBAS was used to compute and forecast the primary production variables. It used as inputs the outputs of the structural variable estimation and daily data on temperature and precipitation (Minunno et al., 2019; Tian et al., 2020). The model, originally developed for boreal forests, was parametrized for several other species growing in the study sites. Comparison of the model predictions with flux tower measurements indicated a very good match.
Software components for the Forest Flux services were developed for the F-TEP platform, where they are applicable for operational services. The processing chain is largely automated. The main challenges were the variable quality, amount, and formats of the reference data, as well as residual clouds in the pre-processed imagery, which led to manual work in the development of the models for the estimation of the structural variables. The uncertainty of the results was computed using a random sample from the reference data. However, in some cases the reference data were not adequate to form an independent set for uncertainty assessment, and the results had to be assessed using the training data.
The relative root mean square error (RMSE) of the growing stock volume estimation varied between 29% and 67%. The error was always smaller for the other estimated structural variables (stem basal area, mean height, and stem diameter) than for volume. The bias was usually a few percent, with the exception of two sites in the same country where the overestimation, computed with limited reference data, was over 20%. In Finland, the purely Sentinel-2-based estimation provided a relative error of 45%; by including the ALS data in the model, the RMSE dropped to 31%.
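For reference, relative RMSE and bias of the kind reported above can be computed as follows; this is a minimal sketch and the volume values in the example are invented.

```python
import numpy as np

def relative_rmse_and_bias(estimated, reference):
    """Relative RMSE and bias (in % of the reference mean) for, e.g.,
    growing stock volume estimates compared against reference plots."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    bias = np.mean(est - ref)
    mean_ref = np.mean(ref)
    return 100.0 * rmse / mean_ref, 100.0 * bias / mean_ref

est = [180.0, 220.0, 95.0, 140.0]   # m3/ha, illustrative values
ref = [200.0, 250.0, 80.0, 150.0]
rel_rmse, rel_bias = relative_rmse_and_bias(est, ref)
print(f"relative RMSE: {rel_rmse:.1f} %, relative bias: {rel_bias:.1f} %")
```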
In total, about 1300 raster maps at ten-meter pixel size or vector outputs were computed in two phases. User feedback was collected after both phases. In the short term, the most desired services concern forest change, the traditional structural variables, and biomass. The carbon market is still poorly developed, but it is expected to grow fastest within the coming few years due to international regulations and pressure from company shareholders and the public.
The three-year Innovation Action project Forest Flux started in 2019 and it was completed in November 2021. The operational services can be started immediately after the completion of the project.
Project partners, in addition to VTT Technical Research Centre of Finland Ltd. were Unique Land Use GmbH (DE), Simosol Oy (FI), University of Helsinki (FI), Instituto Superior De Agronomia (PT), and The National Institute for Research and Development in Forestry (RO). The project was supported by the Horizon2020 Program of the EU, Grant Agreement #821860.
https://www.forestflux.eu/
https://f-tep.com/
Häme, T. et al. (2001) ‘AVHRR-based forest proportion map of the Pan-European area’, Remote Sensing of Environment, 77(1), pp. 76–91. doi: 10.1016/S0034-4257(01)00195-X.
Häme, T. et al. (2013) ‘Improved mapping of tropical forests with optical and sar imagery, part i: Forest cover and accuracy assessment using multi-resolution data’, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6(1), pp. 74–91. doi: 10.1109/JSTARS.2013.2241019.
Häme, T. et al. (2020) ‘A Hierarchical Clustering Method for Land Cover Change Detection and Identification’, Remote Sensing. MDPI AG, 12(11), p. 1751. doi: 10.3390/rs12111751.
Minunno, F. et al. (2019) ‘Bayesian calibration of a carbon balance model PREBAS using data from permanent growth experiments and national forest inventory’, Forest Ecology and Management. Elsevier B.V., 440, pp. 208–257. doi: 10.1016/j.foreco.2019.02.041.
Tian, X. et al. (2020) ‘Extending the range of applicability of the semi‐empirical ecosystem flux model PRELES for varying forest types and climate’, Global Change Biology. Blackwell Publishing Ltd, 26(5), pp. 2923–2943. doi: 10.1111/gcb.14992.
Flooding affects more people than any other environmental hazard. It is also anticipated to affect a higher proportion of the global population and to incur rising costs in the future due to rapid urbanization, increasing settlement in floodplains, and climate change and variability.
To meet these challenges Previsico have developed their FloodMap Live software to provide high resolution, real-time flood forecasts based on a predictive flood modelling system. Flood forecasts, such as those provided by Previsico, enable actions to be taken to reduce loss of life and property in the event of a flood and help to identify genuine insurance claims post-flood.
Yet flood models require integration and validation with external data sources, such as satellite imagery, for re-calibration of model predictions and to demonstrate prediction effectiveness. Independent information from satellite data enables refinements to be made to flood models, in turn supporting more accurate forecasts of flooding evolution.
Following a successful collaboration with the University of Leicester, Previsico is developing a flood extent product derived from Sentinel-1 radar imagery that will provide near-real-time information on flood location and extent in both urban and rural areas. Synthetic Aperture Radar (SAR) was chosen for the satellite product as data collection is not impeded by cloud cover or a lack of illumination, and data can be acquired over a site during day or night under almost all weather conditions. Furthermore, the Sentinel-1 SAR-C instrument provides dual-polarisation capability, very short revisit times and rapid product delivery. This satellite product will allow Previsico to refine its model in order to offer more accurate and validated flood models to their customers, ensuring they can respond to a flood event in a targeted and efficient manner.
Here we will present our progress so far in developing and utilising Sentinel 1 SAR data to refine and validate a commercial flood model. Results from the initial version of this Sentinel 1 flood product were encouraging, as shown in Figure 1 for an area over Doncaster and Rotherham in the UK which were affected by flooding in November 2019. The method also performed well in non-flood events suggesting it is fairly robust even when inundation has not occurred. Further comparisons against external data sources such as Copernicus EMS showed promise and allowed us to identify improvements to the code, either to be implemented in the prototype product or in future versions of this product. Comparisons to flood forecasts from Previsico’s flood modelling system were also performed and the results from this will be presented.
This presentation will present the plans of EUMETSAT's Network of Satellite Application Facilities (SAFs) for the period 2022-2027. The SAF Network consists of eight Satellite Application Facilities dedicated to providing operational services for specific application areas. One element is the sustained generation of Climate Data Records from satellite data to support climate science and climate services. In 2021, the commitments for a fourth "Continuous Development and Operations Phase" (CDOP4) were approved. An overview of the Climate Data Record portfolio, the applied concepts, and application examples will be presented.
With the rising awareness and visibility of impacts caused by climate change and linked extreme weather events, the need for rapid dissemination of and access to information is becoming an increasingly pressing matter in many different anthropogenic, social and economic sectors. To meet these needs, the existing wealth of free, open and globally available analysis-ready weather and climate data serves as a valuable source.
However, a lack of understanding of how to access, handle and combine data sets from different sources often prevents end-users in these sectors from making use of the data. Furthermore, the domain-specific knowledge needed to extract additional information from climate and weather data is often lacking.
Because of the global interconnection of supply-and-demand chains, the food commodity sector is one of the economic sectors most vulnerable to the effects of extreme weather events. The timely identification of abnormal weather and weather risks is key to guaranteeing stable supplies by pointing out geographic areas under risk. This risk assessment directly supports the accomplishment of the United Nations' Sustainable Development Goal (SDG) 2, particularly by backing the achievement of food security. Realizing the latter has been the main focus during the development of our web application, which supports stakeholders in the food commodity trading sector in planning advance purchases of supplies. Early planning of purchasing volumes is necessary to prevent disruptions in supply chains and sudden price increases for consumers of final goods.
Over the course of the past years, green spin has been developing, in close exchange with users in the food commodity industry, a web-based application in which data from the Copernicus Climate Change Service (C3S), the Copernicus Land Monitoring Service (CLMS), the German Weather Service (“Deutscher Wetterdienst”, DWD), the National Oceanic and Atmospheric Administration (NOAA), the Global Inventory Monitoring and Modeling System (GIMMS), and the MODIS and SMAP satellites are combined not only to provide access to the data but also to extract information to support decision-making processes.
Based on the extracted user needs, the systemic knowledge of crop cultivation cycles (mainly wheat, corn and rice) and constant evaluations during the development phase, it has been found that the following parameters contain useful information:
1) Parameters with daily temporal resolution: precipitation (DWD), temperature (DWD), soil water index (CLMS), leaf area index (MODIS), vegetation health index (NOAA), NDVI anomalies (GIMMS), snow cover extent (MODIS) and snow mass (SMAP)
2) Parameters with monthly temporal resolution: temperature 3 months forecasts (C3S), precipitation 3 months forecasts (C3S)
The whole processing pipeline is fully automated and includes downloading, conversion, cleaning of data errors and data extraction. Prepared data are then stored in databases, checked for integrity and completeness, and can be accessed via APIs. So far, data since the year 2000 have been integrated (with the exception of the soil water index, which is only available since 2007). All daily input data are aggregated at administrative levels, ranging from district level (corresponding to “Kreise” in Germany) up to country level, and integrated as interactive maps into the web application. Parameters with a monthly resolution are displayed as continuous vector maps, since they are mostly used as an approximation for a quick global assessment of potential medium-range climate developments. This extensive framework has been operational for two years and is continuously evaluated regarding the inclusion of new data.
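As a minimal illustration of the spatial aggregation step (not the operational pipeline), the sketch below averages a daily parameter raster over administrative units defined by a hypothetical district-code raster.

```python
import numpy as np

def aggregate_by_admin_unit(raster, admin_ids):
    """Average a daily parameter raster over administrative units.

    `raster` and `admin_ids` are arrays of the same shape; `admin_ids`
    assigns each cell to a (hypothetical) district code. NaNs are skipped.
    """
    result = {}
    for unit in np.unique(admin_ids):
        values = raster[admin_ids == unit]
        values = values[np.isfinite(values)]
        result[int(unit)] = float(values.mean()) if values.size else None
    return result

precip = np.array([[1.0, 2.0], [np.nan, 4.0]])   # mm/day, dummy field
units = np.array([[101, 101], [102, 102]])       # district codes, dummy
print(aggregate_by_admin_unit(precip, units))    # {101: 1.5, 102: 4.0}
```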
In addition to data visualization via interactive maps, the data were further used as input for a weather-based risk indicator and for modelling the production, yield and area of specific crops. It was necessary to integrate these additional parameters in order to make the transition from a mere data visualization application to an actively used application in which EO climate and weather data and derived products truly serve as a basis for decision-making for stakeholders in non-EO disciplines.
Other existing solutions such as GADAS or geoGLAM provide very good overviews of a number of different parameters and data sets, but are oftentimes difficult to use and interpret due to their high level of complexity. For example, most such portals do not provide analysis tools with which the plethora of displayed parameters and indices can be inter-compared in time as well as in space by the end-user.
Therefore, we developed our web application as a “collaboration framework” with the goal of enabling non-EO users to access EO data and derived information through a solution-oriented approach. On the one hand, our approach leaves out the raw raster data, which can be seen as a loss of information. On the other hand, the gain in simplicity of interpretation achieved through data condensation (spatial aggregation) largely outweighs this perceived loss of information. This approach is based on constant exchange with users, leading to continuous adjustments of the application.
One example of a simplified derivation of information is the above-mentioned weather-based risk indicator. It is used to spot “risk areas” on a sub-national scale (first level below country level). The aim of these risk areas is to identify regions in the world where crop areas are under risk due to extreme weather events. Therefore, different existing algorithms have been analyzed to find a representative index which not only provides more information for the risk assessment task but also can be interpreted and understood correctly by non-expert end-users.
The calculation of the risk indicator is based on the computation of the Standardized Precipitation Index (SPI) from the National Drought Mitigation Center. In addition to precipitation data, temperature and soil water index data were used, thereby creating an index that is especially adapted to and targeted at plant growth. Crop production and area statistics are used to grade the severity of the detected risk in an area: the larger the proportion of crop production present in an area, the higher the severity.
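The indicator follows the logic of a standardized index. The sketch below shows an SPI-style computation (gamma fit and transformation to a standard normal variable) for precipitation only; it is a simplification of the multi-variable indicator described above and handles neither zero-precipitation months nor per-month fitting.

```python
import numpy as np
from scipy import stats

def standardized_index(monthly_totals):
    """SPI-style standardization: fit a gamma distribution to the series and
    map the cumulative probabilities to a standard normal variable."""
    x = np.asarray(monthly_totals, dtype=float)
    shape, loc, scale = stats.gamma.fit(x, floc=0.0)   # location fixed at 0
    cdf = stats.gamma.cdf(x, shape, loc=loc, scale=scale)
    # Clip to avoid infinities at the distribution tails.
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

rng = np.random.default_rng(1)
precip = rng.gamma(shape=2.0, scale=30.0, size=120)    # dummy monthly precipitation
spi = standardized_index(precip)
print(spi[:6])   # strongly negative values would flag a potential risk area
```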
Compared to other existing data portals and applications, the following novel features have been introduced:
• Information is made available for countries and lower administrative units for which data is typically not available (e.g., provincial data for Russia and China)
• Possibility to not only compare time series of different parameters among each other but also to put them directly in relation to crop harvest quantities, enabling the merging of the subjective knowledge of end-users with objective data
• Detection and monitoring of extreme weather events, including their impact on the development of crop production
In conclusion, with this web application we show:
• how EO data can be made accessible for a range of applications (EO- and non-EO related)
• how new parameters from already used or new data providers can be flexibly integrated into the operational data processing pipeline and visualization
• a “best practice” approach how to condense data into useful information with a focus on facilitating comprehension for decision makers from non-EO fields
As an outlook, we are testing the expansion of the presented risk indicator with the integration of temperature and precipitation forecasts as well as population data as an additional measure for assessing severity. These modifications are intended to improve the risk indicator by taking into account the differences between the global (supply chain management) and the local ("direct-to-food" production) contexts of application.
Atmospheric ozone is an Essential Climate Variable (ECV) monitored in the framework of the Global Climate Observing System (GCOS), among others due to its impact on the radiation budget of the Earth, its chemical influence on other radiatively active species, and its role in atmospheric dynamics and climate. Its importance in the context of climate change has led ECMWF to set up a dedicated procurement of state-of-the-art ozone Climate Data Records (CDRs) to the Climate Data Store (CDS) of the Copernicus Climate Change Service (C3S), mainly in the form of level-3/4 gridded data products. In support, ESA ensures the round-robin selection, reprocessing, and further improvement of the underlying level-2 ozone data products and their validation, and the development of new and multi-spectral ozone CDRs through its Climate Change Initiative project on ECV ozone (Ozone_cci). In order to assess the fitness-for-purpose of the datasets procured to the Copernicus CDS, processes have been established both within the Ozone_cci (L2 data) and C3S (L3/4 data) projects to monitor ozone CDR quality, check compliance with GCOS requirements and WMO rolling review of requirements (RRR), and regularly report key performance indicators. The ozone datasets typically undergo a harmonized and comprehensive quality assessment, including: (a) verification of their information content and geographical, vertical and temporal representativeness against specifications; (b) quantification of their bias, noise and decadal drift, and their dependence on major influence quantities; and (c) assessment of the mutual consistency of CDRs from different sounders.
This work summarizes the past development and the operational status of the data production and quality assessment of the ozone CDRs procured to the CDS. These CDRs consist of ozone column and vertical profile datasets at level-3 (monthly gridded) and level-4 (assimilated), from several nadir and limb/occultation satellite sounders, retrieval systems, and merging schemes (see details on C3S Climate Data Store at https://cds.climate.copernicus.eu/). The quality assessment of these climate-oriented ozone data records is based on multi-decade time series of correlative measurements collected from monitoring networks contributing to WMO’s Global Atmosphere Watch, such as GO3OS, NDACC, and SHADOZ. Correlative measurements are quality controlled, harmonized, and compared to the various satellite CDRs using BIRA-IASB’s Multi-TASTE versatile validation system, following the latest state-of-the-art protocols and tools. Comparison results document the current quality of the CDRs, which may exhibit cyclic errors, drifts, and other long-term patterns reflecting, e.g., instrumental degradation, residual biases between different instruments and changes in sampling of atmospheric variability and patterns. The total ozone column CDRs, covering up to four decades, are found to be stable with respect to the reference measurements at the 0.1 % per decade level. Similarly, most nadir and limb profile CDRs achieve a level of stability that is consistent with what is expected from instrument specifications.
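Decadal drift of the kind quoted above (e.g., 0.1 % per decade for total ozone) is typically estimated from a linear fit to the satellite-minus-reference relative differences. The sketch below illustrates this with synthetic time series; it is a minimal sketch of the idea, not the Multi-TASTE implementation.

```python
import numpy as np

def decadal_drift(times_years, satellite, reference):
    """Estimate the drift (% per decade) of satellite-minus-reference
    relative differences with an ordinary least-squares linear fit."""
    t = np.asarray(times_years, dtype=float)
    sat = np.asarray(satellite, dtype=float)
    ref = np.asarray(reference, dtype=float)
    rel_diff = 100.0 * (sat - ref) / ref          # relative difference in %
    slope, intercept = np.polyfit(t, rel_diff, 1) # slope in % per year
    return 10.0 * slope                           # convert to % per decade

# Dummy monthly total-ozone comparison spanning 1995-2020.
t = np.arange(1995.0, 2020.0, 1.0 / 12.0)
reference = 300.0 + 10.0 * np.sin(2 * np.pi * t)
satellite = reference * (1.0 + 0.0005 + 0.00001 * (t - t[0]))
print(f"drift: {decadal_drift(t, satellite, reference):.3f} % per decade")
```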
Lakes are a critical natural resource of significant interest to the scientific community, local to national governments, industries and the wider public. Lakes support a global heritage of biodiversity and provide key ecosystem services, and they are included in the United Nations’ Sustainable Development Goals related to water resources and the impacts of climate change. Lakes are also key indicators of local and regional watershed changes, making them useful for detecting Earth’s response to climate change. Specifically, lake variables are recognised by the Global Climate Observing System (GCOS) as an Essential Climate Variable (ECV) because they contribute critically to the characterization of Earth’s climate. The scientific value of lake research makes it an essential component of the work under the United Nations Framework Convention on Climate Change (UNFCCC) and the Intergovernmental Panel on Climate Change (IPCC).
The Lakes ECV as defined by GCOS-200 includes the following thematic variables:
• Lake water level, fundamental to our understanding of the balance between water inputs and water loss.
• Lake water extent, a proxy for change in glacial regions (lake expansion) and drought in many arid environments. Water extent relates to local climate through the cooling effect that water bodies provide.
• Lake surface water temperature, correlated with regional air temperatures and a proxy for mixing regimes, driving biogeochemical cycling and seasonality.
• Lake ice cover, whose freeze-up in autumn and advancing break-up in spring are proxies for gradually changing climate patterns and seasonality.
• Lake water-leaving reflectance, a direct indicator of biogeochemical processes and habitats in the visible part of the water column (e.g., seasonal phytoplankton biomass fluctuations), and an indicator of the frequency of extreme events (peak terrestrial run-off, changing mixing conditions).
• Lake ice thickness, which provides insight into the thermodynamics of lake ice at northern latitudes in response to changes in air temperatures and on-ice snow mass.
Observing and monitoring precisely and accurately the spatial and temporal variability and trends of the lake thematic variables from local to global scales has become critical to understanding the role of lakes in weather and climate, but also for a range of scientific disciplines including hydrology, limnology, biogeochemistry and geodesy. Remote sensing provides an opportunity to extend the spatio-temporal scale of lake observations.
The ESA Lakes_cci dataset presented here includes all the Lakes ECV thematic variables except lake ice thickness, which is in development. The dataset consists of daily observations for each thematic variable over the period 1992-2021. The dataset for each thematic variable has been derived from multiple instruments onboard multiple satellites with compatible algorithms, in an effort to ensure homogeneity and stability over time.
All the thematic variables are reported on a common latitude-longitude grid of about 1 km resolution for 2024 lakes distributed globally and covering a wide range of hydrological and biogeochemical regimes. For each of the thematic variables, the observations are accompanied by an uncertainty estimate, which makes the dataset particularly suitable for climate applications.
An overview of the thematic variable datasets, their validation, the geographical distribution of the lakes and the way to access the dataset will be presented, together with some major global trends observed in the Lakes ECV.
Lake surface water temperature (LSWT), which describes the temperature of the lake at the surface, is a recognised Essential Climate Variable (ECV) of the Global Climate Observing System (GCOS). It is one of the key parameters determining the ecological conditions within a lake, since it influences physical, chemical and biological processes. LSWT also plays a key role in the hydrological cycle, determining both air-water heat and moisture exchanges. As such, monitoring LSWT globally can be extremely valuable in detecting localised climatic extremes, forewarning authorities of the potential impact of such events on lake ecosystems. Operational LSWT observations also have potential environmental and meteorological applications for inland water management and numerical weather prediction (NWP) through assimilation.
Through the Copernicus Global Land Surface (CGLOPS) project, we have developed and operationalised a global LSWT dataset that provides a thermal characterization of over 1000 of the world’s largest lakes. The operational LSWT product is generated from brightness temperatures observed by the SLSTR instruments onboard Sentinel-3A and Sentinel-3B. The dataset is based on SLSTR Sentinel-3A since June 2016 and on both SLSTR Sentinel-3A and SLSTR Sentinel-3B since August 2020.
LSWT is delivered every 10 days, with the covered periods starting on the 1st, 11th and 21st day of each month, each providing a 10-day LSWT average along with uncertainty and quality levels. The LSWTs are mapped to a regular grid of about 1 km resolution. The data are routinely available through the CGLOPS data portal with a latency of three days. As part of the routine monitoring of the product, plots showing comparisons of the most recent LSWT against its climatology for each lake are updated, together with the spatial distribution of the LSWTs, allowing for easy detection of anomalous events. Another important aspect of the monitoring is the timeliness and completeness of the SLSTR data at the time of processing. As such, plots showing the completeness of each 10-day product are made available to show users the amount of data used to generate each product. The LSWTs are regularly validated against in situ measurements covering a large portion of the globe. A simple, interactive web-based platform (http://www.laketemp.net/home_CGLOPS/dataNRT/) has been developed to assist with the exploitation of the near-real-time information for each lake covered by the CGLOPS LSWT product and reports detailed information on the validation of the product.
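The 10-day compositing scheme can be illustrated with a small dekadal-averaging sketch; the dates, values and helper function below are assumptions, not the CGLOPS production code.

```python
import numpy as np
import pandas as pd

def dekad_start(date):
    """Map a date to the start of its dekad (the 1st, 11th or 21st of the month)."""
    day = 1 if date.day <= 10 else 11 if date.day <= 20 else 21
    return date.replace(day=day)

# Dummy daily LSWT series for one lake pixel (K).
dates = pd.date_range("2021-07-01", "2021-07-31", freq="D")
lswt = pd.Series(295.0 + np.random.default_rng(0).normal(0, 0.5, len(dates)),
                 index=dates)

# 10-day mean per dekad, analogous to the 10-day averages delivered in the product.
dekadal_mean = lswt.groupby([dekad_start(d) for d in lswt.index]).mean()
print(dekadal_mean)
```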
The CCI Open Data Portal has been developed as part of the European Space Agency (ESA) Climate Change Initiative (CCI) programme, to provide a central point of access to the wealth of data produced across the CCI programme. It is an open-access portal for data discovery, which supports faceted search and multiple download routes for all the key CCI datasets. The CCI ODP can be accessed at https://climate.esa.int/data.
The CCI Open Data Portal has been in operation since 2015, and since its inception, has provided access to over 450 datasets and has had more than 50 million file accesses. It consists of two front end access routes for data discovery: a CCI dashboard, which shows the breadth of CCI products available and the time ranges which are covered and can be drilled down to select the appropriate datasets; and a faceted search index, which allows users to search for data over a wider range of characteristics. These are supported at the back end by a range of services provided by the Centre for Environmental Data Analysis (CEDA), which includes the data storage and archival, catalogue and search services, and download servers supporting multiple access routes (FTP, HTTP, OPeNDAP, OGC WMS and WCS). Direct access to the discovery metadata is also available, and can be used by downstream tools to build other interfaces on top of these components e.g., the CCI Toolbox uses the search and OPeNDAP access services to include direct access to data.
In the initial phase of the CCI Open Data Portal, a combination of Earth System Grid Federation (ESGF) search and CEDA’s Catalog Service for Web (CSW) was used to provide the portal search functionality. However, the combination of the two services and the specialised requirements of ESGF added complexity and increased the effort needed to publish data, so the portal was redeveloped in 2019 under the CCI Knowledge Exchange project. In this new phase, the Open Data Portal combines search and data cataloguing using OpenSearch with data-serving capacity using Nginx and THREDDS, which has simplified the publication process and allowed more flexibility when including data. A number of innovations have been made to the data-serving functionality with the adoption of containers and Kubernetes to provide a scalable data service, and with the provision of an analysis-ready data cache on JASMIN’s object store using Zarr serialisation of the source netCDF files. The latter augments the existing data service to provide access to data for the CCI Toolbox application, with data rechunked to provide optimal performance for data analysis queries. Publishing has been further streamlined through two changes. First, the servers providing data download and OPeNDAP services (Nginx and THREDDS) read directly from the file system, so data appears there as soon as it reaches the CEDA archive. Second, through the use of message-passing frameworks (RabbitMQ) and containerised processing scripts, the metadata needed for search can be generated in parallel to the files reaching the archive. In some cases, manual changes are needed to this metadata; these are fed in using configuration files and become part of an automated workflow to re-tag the affected data files, leveraging Continuous Integration pipelines.
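The Zarr-based analysis-ready cache mentioned above relies on rechunking the source netCDF data before serialisation. A minimal xarray sketch of that kind of conversion is shown below; the file name and chunk sizes are hypothetical, and this is not the ODP's actual publication workflow.

```python
import xarray as xr

# Hypothetical CCI netCDF file; chunk sizes would in practice be tuned to the
# time-series and regional-subsetting queries served to the CCI Toolbox.
ds = xr.open_dataset("esacci_example_product.nc")

# Rechunk along time and space, then serialise to a Zarr store
# (e.g. on an object store) for analysis-ready access.
ds = ds.chunk({"time": 256, "lat": 128, "lon": 128})
ds.to_zarr("esacci_example_product.zarr", mode="w")
```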
A key challenge in the operation of the CCI Open Data Portal comes from the heterogeneity of the datasets produced across the Climate Change Initiative programme, with different scientific areas and different user communities all having differing needs in terms of the format and types of data produced. To this end, the work of the CCI Open Data Portal also includes maintaining the CCI data standards. These standards aim to provide a common format for the data but necessarily still leave considerable breadth in the types of data produced. This creates challenges for providing harmonised search and access services, and solutions have been developed to ensure that every dataset can still be fully integrated into our faceted search services.
In this presentation we will describe the CCI Open Data Portal, recent developments, and the lessons that we have learnt from over six years of operations.
1. Introduction
The ESA-CCI High Resolution (HR) Land Cover (LC) project [1] has focused on the role of spatial resolution in analyzing the contribution of land cover to climate modeling. The project has designed a methodology and developed a processing chain for the production of high-resolution land-cover and land-cover-change products (10/30 m spatial resolution) using both optical multispectral images and SAR data. The HRLC Essential Climate Variable (ECV) is derived over long time series of data in the period 1990-2019, considering sub-continental and regional areas. Images acquired by the ESA Sentinel-2 and Landsat 5/7/8 multispectral sensors, and by the Sentinel-1, Envisat and ERS-1/2 SAR sensors, have been processed for the generation of the final products. Given the spatial resolution and the long time period, this results in a big-data problem characterized by a huge number of images and a very large volume of data to be processed.
This contribution presents the primary products generated by the project that consist of: (i) HR land-cover maps at subcontinental scale derived in a given target year, (ii) a long-term record of regional HR land cover maps, and (iii) land-cover change maps.
2. Generated Products
The HR land-cover maps at subcontinental level have been generated using time series of images acquired by Sentinel-1 and Sentinel-2 in 2019 at a resolution of 10 m. The processing has been organized to exploit monthly composites of images that can properly represent the seasonality of the classes. With respect to the previous ESA-CCI Land Cover (LC) project [2], the resolution is improved by more than one order of magnitude (from 300 m to 10 m). Accordingly, the legend of classes has been re-designed to reflect the capability of the most recent sensors to capture smaller objects (e.g., single trees) and their evolution over time. The legend is defined over two levels, where the second one captures the class seasonality, for a total of 20 classes (see Figure 1). The HR land-cover maps at subcontinental level serve as the static reference input to the climate models, representing the context at high resolution and high quality given the large quantity of available data.
The long-term record of regional HR land-cover maps includes five maps generated every five years over the period 1990-2015. The spatial resolution is 30 m in the regions of interest for the historical analysis (included in, but smaller than, the regions covered by the sub-continental maps). In this time span, the number of images available per year in the archives decreases dramatically, which makes the classification problem more challenging. The processing can rely on only a few images per year (in some areas there is only one image or none), which are thus organized into seasonal or yearly composites depending on data availability. Accordingly, a higher-level legend consistent with that of the static map has been adopted, which omits the seasonal class information where no seasonal information is available (see Figure 1).
Land-cover change information is computed yearly at 30 m spatial resolution and is consistent with the historical HR land-cover maps. Change information is provided as presence or absence of change, and for changed samples the year of change is provided together with the change probability. The change legend considers the most climatically relevant transitions among those possible given the LC legend.
All the products are associated with a measure of their uncertainty. The land-cover products also provide the second most probable class identified by the classifier for each pixel. This allows the complexity of the land-cover ECV provided as input to climate models to be better captured.
3. Study Areas
The above-mentioned products have been generated over three test areas identified by the Climate User Group as of particular interest for studying climate change and the related effects in terms of land cover and land-cover change. The areas are located on three continents, involve different climates (tropical, semi-arid, boreal), and feature complex surface-atmosphere interactions that have a significant impact not only on the regional climate but also on large-scale climate structures. The three regions are the Amazon basin, the Sahel band in Africa, and the northern high latitudes of Siberia, as detailed below (see Figure 2).
Amazon. This region has been selected due to large deforestation rates, fire, drought and agricultural expansion. These phenomena are potentially associated with large-scale climate impacts and agents of disturbance, including losses of carbon storage and changes in regional precipitation patterns and river discharge, with some signs of a transition to a disturbance-dominated regime. An example of LC maps for the Amazon is given in Figure 3.
Africa. This region corresponds to the Sahel band, including West and East Africa, a complex climatic region that experiences severe climatic events (droughts and floods) for which future predictions are very uncertain. In this area, the impact of HRLC can be evaluated for better modeling of the position and seasonal dynamics of the monsoons (the West African and the Indian ones) and of surface processes, and for explaining the role of El Niño in the initiation of dramatic drought events (eastern part of the Sahelian band).
Siberia. The third region is expected to be strongly affected by climate change (polar amplification). Mapping LC changes can document the northward displacement of the forest-shrubs-grasslands transition zone and the impact on the carbon stored in permafrost, which in turn will affect the long-term terrestrial carbon balance and ultimately climate change.
The generated products have been systematically validated both qualitatively and quantitatively (in terms of overall, producer and user accuracy), and an intercomparison analysis has been conducted with other land-cover products. Sample collection for the quantitative analysis has been conducted by photointerpretation of very high resolution images (of higher resolution than the 10/30 m of the products), and the intercomparison relies on other existing maps for the considered study areas. The products and the related validation will be presented at the symposium.
References
[1] L. Bruzzone et al., "CCI Essential Climate Variables: High Resolution Land Cover," ESA Living Planet Symposium, Milan, Italy, 2019.
[2] P. Defourny et al. (2017). Land Cover CCI Product User Guide Version 2.0. [online] Available at: http://maps.elie.ucl.ac.be/CCI/viewer/download/ESACCI-LC-Ph2-PUGv2_2.0.pdf
List of the other HRLC team members: M. Zanetti (FBK), C. Domingo (CREAF); K. Meshkini (FBK), C. Lamarche (UCLouvain), L. Agrimano (Planetek), G. Bratic (PoliMI), P. Peylin (LSCE), R. San Martin (LSCE), V. Bastrikov (LSCE), P. Pistillo (EGeos), I. Podsiadlo (UniTN), G. Perantoni (UniTN), F. Ronci (eGeos), D. Kolitzus (GeoVille), T. Castin (UCLouvain), L. Maggiolo (UniGE), D. Solarna (UniGE).
During the last decades, several sensors have been launched that allow the study of wildfires from space at a global scale. They provide information on active fires, the area burned, and the regeneration of vegetation after the fire event. One of the key variables for assessing the impact of wildland fires on climate, in terms of greenhouse gas and particulate matter emissions, is the area of vegetation burned during the fires.
To address this need, the ESA CCI Fire Disturbance project (FireCCI) has developed in recent years a suite of burned area (BA) products based on different sensors, creating a database spanning from 1982 to 2020. These products, apart from providing information on burned area, also include ancillary information related to the uncertainty of the detection, the land cover affected (extracted from the Land Cover CCI product), and the observational limitations of the input data. All products supply information in monthly files and are delivered at two spatial resolutions: pixel (at the original resolution of the surface reflectance input data) and grid (at a coarser resolution, specifically tailored for climate researchers).
The dataset with the longest time series is the FireCCILT11 product, based on AVHRR information obtained from the Land Long-Term Data Record (LTDR) version 5, and spanning from 1982 to 2018 at a global scale (Otón et al. 2021). The pixel product has a spatial resolution of 0.05 degrees (approx. 5 km at the Equator), and provides information on the date of the fire detection, the confidence level of that detection, the burned area in each pixel, and an ancillary layer with the number of observations available for the detection. The grid product, at a resolution of 0.25 degrees, summarizes the data of the pixel product for each grid cell, and includes layers corresponding to the sum of burned area, the standard error, and the fraction of burnable area and observed area in each cell. FireCCILT11 is the global BA product with the longest time-series to date.
Another global product, but with a higher spatial resolution, is the FireCCI51, whose algorithm uses MODIS NIR surface reflectance at 250 m spatial resolution and active fires as input (Lizundia-Loiola et al. 2020). This product has a time series of 20 years (2001 to 2020), and it is the global burned area product with the highest resolution currently available. The pixel product includes layers corresponding to the date of detection, the confidence level and the land cover burned, while the grid product, at 0.25-degree resolution, contains the same information as FireCCILT11, and also includes layers of the amount of burned area for each land cover class.
As part of our effort to extend this burned area information into the future, the FireCCI project has recently developed a new algorithm to detect BA using the SWIR bands of the Sentinel-3 SLSTR sensor, extracted from the Synergy (SYN) products developed by ESA. This product, called FireCCIS310, takes advantage of the improved BA detection capacity of the SWIR bands, which has allowed the detection of approx. 20% more burned area than the previous global datasets, with increased accuracy. FireCCIS310 is currently available for the year 2019 but will be extended into the future. It supplies the same layers as FireCCI51, but at a spatial resolution of 300 m for the pixel product.
Finally, a specific dataset has been created for sub-Saharan Africa, where more than 70% of the total global burned area occurs. This product, called the Small Fire Dataset (SFD), uses surface reflectance from the Sentinel-2 MSI sensor at 20 m spatial resolution, complemented with active fire information (Roteta et al. 2019). Version 1.1 of this dataset (FireCCISFD11) covers the year 2016 and is based on Sentinel-2A data. It includes the same pixel and grid layers as the FireCCI51 product. The newer version 2.0 (FireCCISFD20) has been processed for the year 2019 and takes advantage of the additional data provided by Sentinel-2B, doubling the amount of input data and the temporal resolution. The grid version of this product has a spatial resolution of 0.05 degrees, as suggested by the climate researchers. Due to its higher spatial resolution, this product detects 58% more BA than FireCCI51 for 2016, and 82% more in 2019. The vast majority of this additional BA is due to the improved detection of small burned patches, which are not detectable with moderate resolution sensors.
The increase in detected burned area has a direct impact on climate research, as more vegetation burned means more atmospheric emissions. Carbon emissions from FireCCISFD11, for instance, are between 31% and 101% higher than previous estimates for Africa and represent about 14% of global CO2 emissions from fossil fuels (Ramo et al. 2021). The BA algorithms and products developed by FireCCI are therefore contributing to this line of research, providing new and more accurate information to the climate community.
References:
Lizundia-Loiola, J., Otón, G., Ramo, R., Chuvieco, E. (2020) A spatio-temporal active-fire clustering approach for global burned area mapping at 250 m from MODIS data. Remote Sensing of Environment 236, 111493, https://doi.org/10.1016/j.rse.2019.111493
Otón, G., Lizundia-Loiola, J., Pettinari, M.L., Chuvieco, E. (2021) Development of a consistent global long-term burned area product (1982–2018) based on AVHRR-LTDR data. International Journal of Applied Earth Observation and Geoinformation 103, 102473. https://doi.org/10.1016/j.jag.2021.102473
Ramo, R., Roteta, E., Bistinas, I., Wees, D., Bastarrika, A., Chuvieco, E. & van de Werf, G. (2021) African burned area and fire carbon emissions are strongly impacted by small fires undetected by coarse resolution satellite data. PNAS 118 (9) e2011160118, https://doi.org/10.1073/pnas.2011160118
Roteta, E., Bastarrika, A., Padilla, M., Storm, T., Chuvieco, E. (2019) Development of a Sentinel-2 burned area algorithm: Generation of a small fire database for sub-Saharan Africa. Remote Sensing of Environment 222, 1-17, https://doi.org/10.1016/j.rse.2018.12.011
The dynamics of sea ice affect the global Earth system. Changes in polar climate have an impact across the world, affecting lives and livelihoods, and regulating climate and weather. CRiceS (Climate relevant interactions and feedback: the key role of sea ice and snow in the polar and global climate system) is a recent European project aiming to understand the role of ocean-ice/snow-atmosphere interactions in polar and global climate. The main objective of CRiceS is to deliver improved understanding of the physical, chemical, and biogeochemical interactions within the ocean/ice/atmosphere system, new knowledge of polar and global climate, and an enhanced ability of society to respond to climate change.
One of the variables that plays a key role in better understanding ocean/ice/atmosphere dynamics is Sea Surface Salinity (SSS). SSS allows monitoring changes in sea ice through the study of its positive anomalies (associated with sea ice formation and evaporation) and negative anomalies (associated with melting and precipitation). The acquisition of in situ salinity measurements in polar regions is very complicated because of the distance and the extreme weather conditions. Therefore, measurements acquired by satellites are the only way of achieving continuous and synoptic monitoring of sea surface salinity in polar regions.
Acquisitions of L-band satellite SSS in polar regions, and particularly those by the ESA SMOS mission, are hampered by the decreased sensitivity of brightness temperatures to SSS in cold waters. Recently, these difficulties have been overcome in a dedicated ESA project (Arctic+ Salinity) over the Arctic Ocean, leading to satellite SSS measurements of sufficient quality to address many scientific studies.
However, in the Southern Ocean, since the salinity variability is not as large as in the Arctic Ocean, the current quality of L-band brightness temperatures does not always allow assessing the seasonal and interannual salinity dynamics of the region. For this reason, reducing brightness temperature errors is one of the major requirements for obtaining SSS of sufficient quality to address scientific studies there.
In the framework of the ESA regional initiative SO-FRESH, new and enhanced algorithms to reduce brightness temperature errors have been applied to generate a new SMOS SSS regional product for the Southern Ocean. In this work, we use the enhanced SMOS SSS product generated in this project and present a preliminary quality assessment by: i) comparing with in situ measurements; ii) analysing the uncertainty estimates by means of correlated triple collocation analysis; iii) analysing the seasonal behaviour using harmonic analysis; and iv) assessing its effective spatial resolution with singularity and spectral analysis. Finally, we will show the capability of this product to improve the description of the ocean/ice/atmosphere system in numerical models, which is one of the main scientific objectives of CRiceS.
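As an illustration of the harmonic-analysis step mentioned above, the following minimal sketch (assuming a regularly sampled SSS time series; the variable names and chosen periods are illustrative and not the project code) fits a mean plus annual and semi-annual harmonics by least squares:

```python
import numpy as np

def fit_harmonics(t_days, sss, periods=(365.25, 182.625)):
    """Least-squares fit of a mean plus annual and semi-annual harmonics.

    t_days : sample times in days
    sss    : sea surface salinity samples (same length as t_days)
    Returns the fitted mean and a list of (period, amplitude, phase) tuples.
    """
    cols = [np.ones_like(t_days)]
    for p in periods:
        w = 2.0 * np.pi / p
        cols += [np.cos(w * t_days), np.sin(w * t_days)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, sss, rcond=None)
    harmonics = []
    for k, p in enumerate(periods):
        a, b = coef[1 + 2 * k], coef[2 + 2 * k]
        harmonics.append((p, np.hypot(a, b), np.arctan2(b, a)))  # amplitude, phase [rad]
    return coef[0], harmonics

# Example with synthetic roughly monthly data over three years
t = np.arange(0, 3 * 365, 30.0)
sss = 34.0 + 0.3 * np.cos(2 * np.pi * t / 365.25 - 1.0) + 0.05 * np.random.randn(t.size)
print(fit_harmonics(t, sss))
```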
The Copernicus Climate Change Service (C3S) is one of the six thematic information services provided by the Copernicus Earth Observation Programme of the European Union (EU). C3S, which is implemented by ECMWF on behalf of the European Commission, provides past, present and future Climate Data Records (CDRs) and information on a range of themes, freely accessible through the Climate Data Store (CDS). It benefits from a sustained network of in-situ and satellite-based observations, re-analyses of the Earth's climate and modelling scenarios based on a variety of climate projections.
Within the Land Biosphere component of C3S, satellite-based observations are used to provide the longest possible, consistent and mature products at the global scale for the following Essential Climate Variables (ECVs): Surface Albedo, Leaf Area Index (LAI), the fraction of Absorbed Photosynthetically Active Radiation (fAPAR), Land Cover (LC), Fire, Burnt Area (BA), and Fire Radiative Power (FRP). State-of-the-art algorithms that respond to GCOS requirements are used, and the product quality assurance follows the protocols, guidelines and metrics defined to be consistent with those of the Land Product Validation (LPV) group of the Committee on Earth Observation Satellites (CEOS) for the validation of satellite-derived land products.
To reach this goal, the following approach is proposed: (i) consolidate the CDRs and secure the continuation of the products by moving towards the Copernicus Sentinel-3 mission as primary data source; (ii) make an important step towards cross-CDR consistency by harmonizing the pre-processing for all CDRs (atmospheric correction and pixel classification); and (iii) apply an extensive quality assessment against other existing datasets to ensure the high quality of the delivered data.
The Belgian VITO Remote Sensing institute has been leading the consortium of eight European partners providing the C3S service since 2016. In the new phase of the service, all ECVs will use Sentinel-3 and the adaptations will be made in consistency with previous products. The surface albedo V3 dataset will be extended in time based on the Sentinel-3 OLCI/SLSTR dataset. Improving the Land Cover and Burnt Area products will be achieved by (i) extending the already existing BA and Land Cover CDRs and ICDRs from the respective projects, (ii) adapting them to the Sentinel-3 SLSTR and OLCI sensors, (iii) benefitting from the harmonised pre-processing tools (pixel identification and AC LUTs), and (iv) incorporating these into the processing chains for fully operational and agile production lines.
The Burned Area product created ad hoc for C3S, based on the algorithm developed by ESA Fire_CCI but adapted to Sentinel-3A and B, will continue to be processed. A major advancement will be achieved by switching from MODIS-based active fire maps to active fires from Sentinel-3, once these are available from the Copernicus ground segment, expected in early 2022. The Active Fire and Fire Radiative Power products will be continued in the service using only Sentinel-3 night-time data as input. ESA has announced that the daytime fire products will be available from the Copernicus ground segment from late 2021/early 2022. Once these data are available, an update of the C3S Level 2/3 products is planned. The availability of daytime active fires and fire radiative power is highly needed; it will, for example, enable the Fire BA products to switch from ageing MODIS data to in-house Sentinel-3 auxiliary data.
The high quality and maturity of the generated ECV datasets make them reliable indicators for long-term climate predictions, and they will contribute information to the annual European State of the Climate report. More detailed information about the ongoing activities and results of the Lot 5 C3S project will be shared at the Living Planet Symposium.
The Orbiting Carbon Observatory 3 (OCO-3) was installed on the International Space Station (ISS) Japanese Experiment Module – External Facility (JEM-EF) in May 2019. From that vantage point, it is using the flight spare instrument from OCO-2 to collect observations of reflected sunlight that are analyzed to return additional estimates of the CO2 dry air mole fraction, XCO2, and solar-induced chlorophyll fluorescence (SIF). The ISS JEM-EF is a highly sought-after resource, so missions installed there are planned for a limited lifetime of typically three years. OCO-3 began routine operations in August 2019 and has an operating extension beyond the nominal three years to at least January 2023. Here, we will present the mission status, including instrument performance, key mission events, data collection statistics and highlights of the science findings of the mission to date. To prepare for the end of the mission, the team will develop a final data product, Version 11, and complete the mission documentation. Details of the end of mission plans, how they fit with the OCO-2 mission and how the data collected is advancing monitoring of urban/local emissions will be discussed.
Concentrations of atmospheric methane (CH4), the second most important greenhouse gas, continue to grow. In recent years this growth rate has increased further (2020: +14.7 ppb), and its cause remains largely unknown. Accurate estimates of CH4 emissions are key to better understand these observed trends and help implement efficient climate change mitigation policies. New methane observations from the TROPOMI instrument provide unprecedented spatiotemporal constraints on these emissions. Here, we present preliminary results from a new inversion system based on the ECMWF Integrated Forecasting System (IFS), which assimilates observations within a cycled 24-hour-window 4D-variational algorithm. Specificities of this system include the use of a high-resolution transport model (~9 km) combined with online data assimilation (i.e., joint optimization of meteorological and atmospheric composition variables), which provides consistent treatment of atmospheric transport errors. The performance of the system is illustrated by comparing posterior atmospheric concentrations with independent observations, as well as by evaluating posterior emission estimates for regional and point source case studies previously analyzed in the literature. The largest national disagreement found between prior (63.1 Tg yr-1) and posterior (59.8 Tg yr-1) CH4 emissions is for China, mainly attributed to the energy sector. Emissions estimated from our global system agree well with previous basin-wide regional studies and point-source-specific studies. Emission events (leaks/blowouts) >10 t hr-1 were detected but, without accurate prior uncertainty information, were not well quantified. Our results suggest that global anthropogenic CH4 emissions for 2020 were 5.7 Tg yr-1 (+1.6%) higher than for 2019, mainly attributed to the energy and agricultural sectors. Regionally, the largest increases were seen for China (+2.6 Tg yr-1, 4.3%), with smaller increases for India (+0.8 Tg yr-1, 2.2%) and Indonesia (+0.3 Tg yr-1, 2.6%). Plans to further develop the global IFS inversion system and to extend the 4D-Var window length using a hybrid ensemble-variational method will also be presented.
Methane (CH4) is the second most important greenhouse gas, more than 60% of which is released through human activities. Satellite observations of CH4 provide an efficient way to analyze its variations and emissions. The TROPOspheric Monitoring Instrument (TROPOMI) onboard the Sentinel-5 Precursor (S5-P) satellite measures CH4 at a high horizontal resolution of 7 × 7 km2, showing the capability of identifying and quantifying sources at local to regional scales. The Middle East is one of the strongest CH4 hotspot regions in the world. However, it is difficult to estimate emissions there because several sources are located near the coast or in places with complex topography, where the satellite observations are often of reduced quality. We use the WFM-DOAS XCH4 v1.5 product, which has good spatial coverage over the ocean and mountains, to better estimate the emissions in the Middle East.
The divergence method of Liu et al. (2021) has been proven to be a fast and efficient way to estimate CH4 emissions from satellite observations. We have improved the method by comparing the fluxes in different directions to obtain better background corrections over areas with complicated topography. The performance of the updated algorithm was tested by comparing the emissions estimated from a 1-month WRF-CMAQ model simulation with its known emission inventory over the Middle East. The CH4 emissions based on TROPOMI XCH4 are then derived on a 0.25° grid for 2019 and 2020. With the WFM-DOAS product, sources from oil/gas platforms over the Persian Gulf and sources on the west coast of Turkmenistan become clearly visible in the emission maps. Sources in the mountainous areas of Iran are also identified by our updated divergence method. The locations of fossil-fuel-related NOx emissions usually overlap with CH4 emissions, as can be seen in the CAMS bottom-up inventory. Therefore, we have compared our CH4 emission inventory with the emissions derived from TROPOMI-observed NO2, in order to gain more insight into the sources of the emissions, especially concerning the oil/gas industry in the region.
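To illustrate the flux-divergence idea underlying the method of Liu et al. (2021), a minimal sketch is given below. It assumes gridded CH4 column enhancements and effective transport winds on a regular grid; the variable names, the uniform grid spacing and the omission of the directional background correction described above are simplifications for illustration only.

```python
import numpy as np

def divergence_emissions(delta_xch4, u_wind, v_wind, dx, dy):
    """Estimate a CH4 emission field as the divergence of the horizontal flux.

    delta_xch4 : 2-D array of column enhancements above background [kg m^-2]
    u_wind, v_wind : 2-D arrays of effective transport winds [m s^-1]
    dx, dy : grid spacing in metres (east-west, north-south)
    Returns the emission estimate in kg m^-2 s^-1 (positive values indicate sources).
    """
    flux_x = delta_xch4 * u_wind          # eastward flux of the enhancement
    flux_y = delta_xch4 * v_wind          # northward flux of the enhancement
    # Centred differences; np.gradient returns derivatives along axis 0 (y) then axis 1 (x)
    dfy_dy, _ = np.gradient(flux_y, dy, dx)
    _, dfx_dx = np.gradient(flux_x, dy, dx)
    return dfx_dx + dfy_dy

# Tiny synthetic example: a crude plume advected by a uniform eastward wind
ny, nx = 20, 30
field = np.zeros((ny, nx)); field[10, 5:20] = 1e-4   # column enhancement [kg m^-2]
u = np.full((ny, nx), 5.0); v = np.zeros((ny, nx))    # 5 m/s eastward wind
emis = divergence_emissions(field, u, v, dx=7000.0, dy=7000.0)
print(emis.max())
```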
The Copernicus Anthropogenic Carbon Dioxide Monitoring (CO2M) mission is the first operational space-based system aimed at collecting data in support of systems for the global monitoring and verification of CO2 emissions. This will require sampling major emission areas (including plumes from point sources and cities) with high coverage and sufficiently high accuracy, including regions with enhanced aerosol loadings.
CO2M has been designed to meet these objectives by carrying an imaging spectrometer for CO2 measurements (CO2I) together with a multi-angle polarimeter (MAP) for co-located aerosol information. The underlying assumption is that the MAP instrument can provide a detailed aerosol characterization for the CO2 retrieval and thus makes it possible to reduce critical aerosol-related uncertainties.
Making use of aerosol information from the MAP instrument requires the development of new approaches for the CO2 retrieval. We have developed a sequential approach in which aerosol properties are first retrieved from the MAP measurements and then used as input for the CO2 retrieval from the CO2I observations. This new retrieval brings together the Generalized Retrieval of Aerosol and Surface Properties (GRASP) for the MAP retrieval with the University of Leicester (UoL) full-physics retrieval for the CO2 retrieval.
In this presentation, we will give a description of the sequential MAP-CO2 retrieval for CO2M and present a characterisation of the retrieval approach based on global simulations of realistic atmospheric scenarios. The presentation will conclude with an outlook towards further development needs.
Climate information is essential for monitoring the success of our efforts to reduce greenhouse gas emissions that contribute to climate change, as well as for promoting efforts to increase energy efficiency and to transition to a carbon-neutral economy. The WMO Integrated Global Observing System (WIGOS) promotes network integration and partnership outreach, and engages the regional and national actors essential for successful integration of these systems. The WIGOS Vision for 2040 outlines the ground- and space-based capabilities that will be required in 2040 to deliver the necessary observations. These data and observations rely on the Global Climate Observing System (GCOS), which maintains the requirements for Essential Climate Variables (ECVs) and supports additional observational needs required to systematically observe Earth's changing climate, and as such underpins climate research, services and adaptation measures.
The 2021 Extraordinary World Meteorological Congress approved the new WMO Unified Data Policy, along with two other sweeping initiatives – the Global Basic Observing Network (GBON) and the Systematic Observations Financing Facility (SOFF) – to dramatically strengthen the world’s weather and climate services through a systematic increase in much-needed observational data and data products from across the globe. Approval of the Unified Data Policy provides a comprehensive update of the policies guiding the international exchange of weather, climate and related Earth system data between the 193 Member states and territories of WMO. The new policy reaffirms the commitment to the free and unrestricted exchange of data, which has been the bedrock of WMO since it was established more than 70 years ago.
The Global Basic Observing Network (GBON) is a landmark agreement offering a new approach in which the basic surface-based observing network is designed, defined and monitored at the global level. It paves the way for a radical overhaul of the international exchange of observational data, which underpin all weather, climate and water services. This becomes increasingly important also for climate and greenhouse gas monitoring, when the ground-based and space-based components are used in an integrated fashion. Data from programmes like the WMO Global Atmospheric Watch (GAW) and Integrated Global Greenhouse Gas Information System are key for a comprehensive analysis and monitoring of greenhouse gases and climate and will play an increasingly important role supporting satellite observing systems providing ground-truth and much required data for satellite calibration and validation activities. The new WMO Data policy, and GBON, provide the tools and mechanisms to further evolve these systems to meet future needs for a comprehensive climate, greenhouse gas and carbon monitoring system.
This presentation will give an overview of the above elements and of how WMO and GCOS support greenhouse gas and climate monitoring activities and facilitate and leverage access to ground-based observations in response to global needs.
In the early 1990s, a European consortium led by French and Greek universities and geophysical observatories established a long-term observation facility in the western Gulf of Corinth, Greece, named the Corinth Rift Laboratory (CRL, http://crlab.eu). Its principal aim is to better understand the physics of earthquakes, their impact and their connection to other related phenomena such as tsunamis or landslides.
The Corinth Rift is one of the narrowest and fastest-extending continental regions worldwide. Its western termination was selected as the study area because of its high seismicity and strain rate. The cities of Patras and Aigio, as well as other towns, have been destroyed several times since antiquity by earthquakes and, in some cases, by earthquake-induced tsunamis. The historical earthquake catalogue of the area reports five to ten events of magnitude larger than 6 per century, and episodic seismic sequences are frequent. Over the past two decades, a dense array of permanent sensors has been established in the CRL, gathering more than 80 instruments, the majority of which are acquired in real time.
The CRL is nowadays one of the Near Fault Observatories (NFOs) of the European Plate Observing System (EPOS, https://www.epos-eu.org/tcs/near-fault-observatories) and the only one with international governance.
With the development of synthetic aperture radar interferometry (InSAR) and high-resolution optical imagery space missions, remote sensing occupies an increasingly important place in the observatory. Space observations, especially those from InSAR, contain unique, dense and global information that cannot be obtained through field observations. Although low Earth orbit satellites cannot provide continuous real-time observations, the time lag can be sufficiently short for the space products to be useful for monitoring needs.
For observations over the CRL, the European Space Agency's Geohazards Exploitation Platform (GEP) gathers, in a well-organized manner, products routinely generated by different services, with a double benefit for the observatory: (1) computational resources and algorithms hosted and maintained by the service provider and (2) the capability to elaborate solutions with different services for greater confidence and robustness.
An additional advantage is the didactic and user-friendly design of the GEP, which is also exploited in a summer school (the CRL-School) for students and secondary education teachers. This experiential summer school is tailored to teach, in this natural laboratory and in the field, the major components and theoretical background of the observations performed in the NFO. Space observations occupy an important role in the school, with the presence of experts from space agencies and the GEP consortium. The participants have the opportunity to analyze the space data directly in the field, in front of the in-situ instruments as well as in front of geological and other objects of interest. The CRL-School is particularly relevant to the activities of ESA's European Space Education Resource Office (ESERO) network of currently twenty offices in the ESA member states, focusing on strengthening Science, Technology, Engineering, and Mathematics (STEM) and Space Education in primary and secondary education.
Carbon emissions related to fossil fuels tend to come from localised sources, with urban areas in particular contributing more than 70% of global emissions. In the future, the proportion of the world's population living in cities is expected to continue to rise, resulting in an even greater share of fossil-fuel-related emissions originating from urban areas. Cities are also the focal point of many political decisions on the mitigation and stabilisation of carbon emissions, often setting more ambitious targets than national governments (e.g. through the C40 group of cities around the world). For example, the Mayor of London has set the ambitious target for London to be a zero-carbon city by 2050. If we want to devise robust, well-informed climate change mitigation policies, we need a much better understanding of the carbon budget of cities and the nature of the diverse emission sources within them, underpinned by new approaches that allow verification and optimisation of city carbon emissions and their trends. New satellite observations of CO2 from missions such as OCO-3, MicroCarb and CO2M, especially when used in conjunction with ground-based sensor networks, provide a powerful and novel capability for evaluating and eventually improving existing CO2 emission inventories.
In April 2021 we set up a ground-based measurement network comprising three sites, located upwind, downwind and in the centre of London, using portable greenhouse gas (CO2, CH4, CO) column sensors (Bruker EM27/SUN spectrometers) together with UV/VIS MAX-DOAS spectrometers (NO2). The instruments have so far operated continuously over the course of one year, which we have achieved by automating the sensors and housing them inside weatherproof enclosures. The data we have acquired from the network will not only allow us to critically assess the quality of satellite observations over urban environments, but also to derive data-driven emission estimates using a measurement-modelling framework. Here we will show and discuss findings from our first year of greenhouse gas column observations over London.
Limiting global warming to below 2 degrees Celsius as agreed upon in the Paris agreement requires substantial reductions in fossil fuel emissions. The transparency framework for anthropogenic carbon dioxide (CO2) emissions of the Paris Agreement is based on inventory-based national greenhouse gas emission reports, which are complemented by independent estimates derived from atmospheric CO2 measurements combined with inverse modelling. Such a Monitoring and Verification Support (MVS) capacity is planned to be implemented as part of the EU’s Copernicus programme, however, its ability to constrain fossil fuel emissions to a sufficient extent has not yet been assessed. The CO2 Monitoring (CO2M) mission, planned as a constellation of satellites measuring column-integrated atmospheric CO2 concentration (XCO2), is expected to become a key component of an MVS capacity.
Here we provide an assessment of the potential of a Carbon Cycle Fossil Fuel Data Assimilation System using synthetic XCO2 and other observations to constrain national fossil fuel CO2 emissions for an exemplary 1-week period in 2008 at global scale. We find that the system can provide useful weekly estimates of country-scale fossil fuel emissions independent of national inventories. When extrapolated from the weekly to the annual scale, uncertainties in emissions are comparable to uncertainties in inventories, so that estimates from inventories and from the MVS capacity can be used for mutual verification.
We further demonstrate an alternative, synergistic mode of operation, which delivers a best emission estimate through assimilation of the inventory information as an additional data stream. We show the sensitivity of the results to the setup of the CCFFDAS and to various aspects of the data streams that are assimilated, including assessments of surface networks, the number of CO2M satellites flying in constellation, and the assumed uncertainties in the XCO2 measurements. We also assess the impact of additional observational data streams such as radiocarbon in CCFFDAS on constraining fossil fuel emissions.
Anthropogenic emissions of well-mixed greenhouse gases are currently the main drivers of tropospheric warming. Among the well-mixed greenhouse gases, methane (CH4) and carbon dioxide (CO2) are the most important contributors. To limit global warming, emissions of CH4 and CO2 must be reduced, and reduction claims need to be monitored. Additionally, knowledge of CH4 emission sources in particular, such as landfills and oil, gas and coal production, has to be expanded. During the last few years several different satellite sensors have demonstrated anthropogenic greenhouse gas emission detection and/or quantification at various spatial scales and spatial resolutions, but there is a lack of airborne systems for emission characterization as well as for validation and verification of the new satellite data. In this context, the University of Bremen started the development of a new generation of airborne imaging spectrometer systems for accurate mapping of atmospheric greenhouse gas concentrations (CO2, CH4), based on more than ten years of experience operating the MAMAP airborne system. The first sensor in a series of three is the MAMAP2D-Light (M2DL) instrument. M2DL is a relatively lightweight (~42 kg) single-channel imaging spectrometer covering absorption bands of CO2 and CH4 between ~1575 and ~1700 nm with a spectral resolution of ~1.1 nm. The instrument is designed to fit into the under-wing pod of a motor glider aircraft (Diamond HK36 TTC-ECO) of the Jade University of Applied Sciences in Wilhelmshaven. At a typical flight altitude of ~1500 m the instrument samples 28 ground scenes across the ~600 m wide swath with a single ground sampling size of approximately 20 m across x 3 m along the flight track. Successful test flights were performed in 2021. While designed to detect and quantify CO2 and CH4 emissions from point sources, it additionally serves as a precursor and demonstrator for the larger two-channel imaging spectrometer MAMAP2D, which is currently being built, as well as for the planned ESA CO2M airborne demonstrator.
MAMAP2D (M2D) – currently under construction – is a two channel imaging spectrometer covering the O2A band and the absorption bands of CO2 and CH4 between ~1590 and ~1690 nm with a spectral resolution of < 0.4 nm. The instrument is designed to fit into the cabin of different types of aircraft (pressurised and non-pressurised). At a flight altitude of ~1500 m the instrument samples 37 ground scenes across the ~ 670 m wide swath with a single ground sampling size of approximately 18 m across x 7 m along the flight track.
The third sensor in this series will be the CAMAP2D (Carbon And Methane mAPper 2D), which emerged from ESA’s CO2M airborne demonstrator activities. CAMAP2D will be built for ESA by adding a 2 µm channel to MAMAP2D and further modifications of MAMAP2D to reach the ambitious performance goals for CO2 monitoring.
In this presentation we summarise the status and perspectives of the new generation of airborne GHG imaging systems. This will include performance estimates, data analysis strategies as well as initial results from an M2DL measurement flight targeting the CO2 emission plume of the coal-fired power plant Jänschwalde in Germany in June 2021. Future applications for emission characterization, satellite data validation and airborne-data-driven science studies in support of satellite data products from S5P, S5 and CO2M as well as from hyperspectral (PRISMA, EnMAP, CHIME) and very high spatial resolution imagery (WV3, Sentinel-2) will be discussed.
The Imaging and Rapid-scanning mass spectrometer (IRM) onboard Swarm-E frequently measures enhanced minor ionospheric species (N+, NO+, N2+, O2+) at auroral latitudes during both storm and quiet times. With their occurrence frequency peaking in the pre-midnight sector, these ions are thought to be the product of both auroral electron impact ionization and thermospheric expansion. These ions have been measured in ion upflows and downflows and could therefore impact the overall vertical transport and coupling processes in the auroral ionosphere. The dissociative recombination of the measured molecular ions likely constitutes a non-negligible source of hot oxygen atoms, affecting the thermospheric mass density and temperature. Furthermore, the different energy dependence of charge exchange with H between these species could impact the dynamics of storm and substorm recovery. We present new Swarm-E ionospheric composition and velocity measurements and discuss their possible implications in the context of upcoming missions.
The Low Frequency Array (LOFAR) is designed to observe the early universe at radio wavelengths. When radio waves from a distant astronomical source traverse the ionosphere, structures in this plasma affect the signal. The high temporal resolution available (~100 ms), the large range of frequencies observed (10-80 MHz & 120-240 MHz) and the large number of receiving stations (currently 52 across Europe) mean that LOFAR can observe the effects of the midlatitude ionosphere in a level of detail never seen before.
On the 14th July 2018 LOFAR stations across the Netherlands observed Cygnus A between 17:00 UT and 18:00 UT. At approximately 17:40 UT a deep fade in the intensity of the received signal was observed, lasting some 15 minutes. Immediately before and after this deep fade rapid variations of signal strength were observed, lasting less than five minutes. This structure was observed by multiple receiving stations across the Netherlands. It evolved in time and in space. It also exhibited frequency dependent behaviour.
The geomagnetic conditions at the time of the observation were quiet, as were the solar conditions. It is suggested that this structure is driven by a source within the Earth system. Observations from lower in the atmosphere are used to identify possible drivers.
The NanoMagSat mission, currently under development in the context of the ESA Scout missions, will consist of a constellation of three nanosatellites combining two 60° inclined orbits and one polar orbit, all at an initial altitude of about 570 km. The mission will target investigations of the Earth's magnetic field and ionospheric environment. Each satellite will carry identical payloads, including a miniaturized High Frequency Magnetometer (HFM) providing three-component measurements of the magnetic field at a cadence of 2,000 samples per second. Here, we investigate the possibility of taking advantage of these future measurements, also using modern analysis techniques, to investigate the polarization and propagation properties of two important classes of electromagnetic waves.
Equatorial noise is a natural electromagnetic emission generated by instability of ion distributions in the magnetosphere. These waves, which can also interact with energetic electrons in the Van Allen radiation belts, have been shown to propagate radially downward to the low-Earth orbit, thanks to previous measurements from the DEMETER spacecraft. Such waves have been observed at frequencies both below and above the local proton cyclotron frequency as a superposition of spectral lines from different distant sources. Changes in the local ion composition encountered by the waves during their inward propagation cause well identifiable cutoffs in the wave spectra, which can provide valuable information on the ionospheric plasma.
A second class of electromagnetic waves, also worthy of investigation, are nonlinear whistler mode chorus and chorus-like emissions, known for their ability to locally accelerate electrons in the outer radiation belt to relativistic energies and to cause losses of electrons from the radiation belts through their precipitation into the atmosphere. A divergent propagation pattern of waves at chorus frequencies has previously been reported at subauroral latitudes. The waves propagated with downward directed wave vectors, which were slightly equatorward inclined at lower magnetic latitudes and slightly poleward inclined at higher latitudes. Reverse ray tracing indicated a possible source region near the geomagnetic equator at a radial distance between 5 and 7 Earth radii. Detailed measurements by the Cluster spacecraft have already shown chorus propagating outward from this source region. The time-frequency structure and frequencies of chorus observed by Cluster along the reverse ray paths suggest that low-altitude observations could indeed be made by NanoMagSat, which would correspond to a manifestation of natural magnetospheric emissions of whistler mode chorus.
Introduction
HYDROCOASTAL is a two year project funded by ESA, with the objective to maximise exploitation of SAR and SARin altimeter measurements in the coastal zone and inland waters, by evaluating and implementing new approaches to process SAR and SARin data from CryoSat-2, and SAR altimeter data from Sentinel-3A and Sentinel-3B. Optical data from Sentinel-2 MSI and Sentinel-3 OLCI instruments will also be used in generating River Discharge products.
New SAR and SARin processing algorithms for the coastal zone and inland waters will be developed and implemented and evaluated through an initial Test Data Set for selected regions. From the results of this evaluation a processing scheme will be implemented to generate global coastal zone and river discharge data sets.
A series of case studies will assess these products in terms of their scientific impacts.
All the produced data sets will be available on request to external researchers, and full descriptions of the processing algorithms will be provided.
Objectives
The scientific objectives of HYDROCOASTAL are to enhance our understanding of the interactions between inland waters and the coastal zone, between the coastal zone and the open ocean, and of the small-scale processes that govern these interactions. The project also aims to improve our capability to characterize the variation at different time scales of inland water storage, exchanges with the ocean and the impact on regional sea-level changes.
The technical objectives are to develop and evaluate new SAR and SARin altimetry processing techniques in support of the scientific objectives, including stack processing, filtering and retracking. An improved Wet Troposphere Correction will also be developed and evaluated.
Presentation
The presentation will describe the different SAR altimeter processing algorithms that are being evaluated in the first phase of the project, and present results from the evaluation of the initial test data set. It will focus particularly on the performance of the new algorithms over inland water.
Soil Moisture derivation from Satellite Radar Altimetry has been pursued over the past ten years with a view to augmenting the observability in terms of space-time sampling, resolution and dynamic range. The basis of this technique involves crafting DRy EArth Models (DREAMs), which model the response of a completely dry surface to nadir illumination at Ku band. Initially developed over desert and semi-arid terrain, where DREAM hydrological content was primarily restricted to salars and dry river courses, DREAM crafting is now being extended to wetter areas.
This paper addresses the following questions:
1) Under what conditions can radar altimeters measure surface soil moisture? Can DREAMs be crafted over river basins?
2) What hydrology information is encoded in river DREAMs?
3) What can Sentinel-3 tell us about deployment of the new generation of satellite radar altimeters in recovery of soil moisture signals?
4) With the spatial and temporal sampling constraints of current and past altimeters, where are these data valuable?
Data from Sentinel-3A, CryoSat-2, EnviSat, Jason 1/2 and ERS1/2, together with a database of over 86000 graded River and Lake time series, are analysed to investigate the feasibility of DREAM crafting over river basins.
In this paper, results are presented over 15 regions where DREAMs have been constructed. DREAMs are crafted from multi-mission satellite altimeter data and imaging data, informed by ground truth. Current DREAMs have a spatial resolution of 10 arc seconds and a typical dynamic range of order 50 dB. They are configured such that a 10 dB increase in one pixel corresponds to the change from desiccated to fully saturated surface. Scaling altimeter backscatter for each mission to the DREAMs allows direct estimation of surface soil moisture.
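As a minimal sketch of this scaling (illustrative only; the actual DREAM crafting and per-mission calibration are more involved), observed backscatter can be referenced to the dry-earth value so that a 10 dB enhancement maps to full saturation:

```python
import numpy as np

def soil_moisture_from_backscatter(sigma0_db, dream_db, saturation_range_db=10.0):
    """Convert altimeter backscatter to a relative surface soil moisture index.

    sigma0_db : observed Ku-band backscatter for a pixel [dB]
    dream_db  : DRy EArth Model backscatter for the same pixel [dB]
    A 10 dB enhancement over the dry-earth value is taken to represent the change
    from desiccated to fully saturated surface; the result is clipped to [0, 1].
    """
    enhancement = np.asarray(sigma0_db) - np.asarray(dream_db)
    return np.clip(enhancement / saturation_range_db, 0.0, 1.0)

# Example: a pixel observed 4 dB above its DREAM value sits ~40% of the way
# from desiccated to fully saturated
print(soil_moisture_from_backscatter(16.0, 12.0))   # 0.4
```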
In desert DREAM areas, small seasonal soil moisture signals were successfully retrieved. The first DREAM with significant hydrological content was developed over the Kalahari desert. Altimeter derived soil moisture estimates were generated and compared with external validation data, including the ESA CCI dataset (Dorigo et al., 2017). Good agreement was obtained.
To progress this approach, it was decided to trial the DREAM methodology to craft first generation models over the Congo and Amazon basins.
For these models, the first requirement was to mask off areas of permanent or seasonal inundation. The first test models were created over both targets using multi-mission Ku-band altimetry and other satellite data as the primary data sources. Using an augmented version of the method used to identify salars, criteria were established to identify and mask river pixels. A further distinction was made to identify wetland/seasonally inundated regions, and detailed masks were produced for areas to exclude from soil moisture work. Comparing the Congo basin DREAM and its mask with independent data (Dargie et al., 2017) revealed the wealth of surface hydrology information encoded in the beta DREAM model. For the Congo beta test DREAM, 13% of the DREAM pixels are identified as river surfaces and 34% as wetland/seasonally flooded areas. It is noted that many smaller tributaries are below the current spatial resolution of the DREAM and are classified with their surrounding terrain as wetland pixels. For the Amazon beta test DREAM, the corresponding statistics are 23% rivers and 36% wetlands.
These figures show the proportion of the models masked from soil moisture determination. Over what proportion of this surface are data retrieved by Ku band altimeters? To determine this, the masks were tested with multi-mission altimeter data. A waveform analysis system was utilised to assess echo shapes, scan for complex waveforms and flag echoes from water surfaces. Waveform shapes are classified using a system which identifies fourteen classes of echo shape corresponding to known surface types. The system is tuned for each instrument and observing mode using calibration areas of known characteristics. Multi-mission statistics show highest data retrieval over rivers and wetlands, lower over unmasked DREAM pixels. This is an expected outcome, as excluding rivers and wetlands selects for rougher topography. Varying proportions of waveforms were flagged by the system as returns affected by pools of still water throughout the model areas, with the highest proportions from the Amazon basin.
Backscatter data from all instruments show excellent agreement with the DREAMs, with cross-correlation coefficients with data from dry terrain better than 0.9. Altimeter soil moisture datasets are shown to demonstrate good agreement with external validation data. Small soil moisture signals are successfully recovered from desert regions, where other techniques encounter difficulties.
The ability of nadir-pointing altimeters to penetrate vegetation canopy gives a unique perspective in rainforest areas. Over the Amazon and Congo basins, the DREAM masking process creates detailed maps of river and wetland extents, with over 60% of the Amazon and 50% of the Congo DREAM areas identified as rivers, wetlands and seasonally flooded regions. The clear implication is that, to monitor surface water optimally in these rainforests (within the constraints of satellite orbit and repeat period), satellite altimeters should retrieve data from the majority of the underlying surface. Fortunately, analysis of past altimeter performance shows that this goal was largely achieved for the Congo and Amazon basins, particularly by ERS2 and EnviSat. Waveform analysis is found to be essential to exclude returns affected by pools of water within the altimeter footprint. Surface soil moisture time series can then be derived, and are shown to correlate with adjacent river height time series.
Very limited data acquisition from Sentinel-3A, due to the current OLTC mask, critically constrains the scope of SRAL DREAMing over all DREAMs, but results are consistent both with CryoSat-2 SAR and LRM mode data and with results from prior missions.
In conclusion, satellite radar altimetry can provide soil surface moisture estimates wherever a DREAM can be crafted. Altimeter soil moisture estimates contribute to the datastore over river basins, providing an independent assessment of soil moisture data from other sources.
Waveform classification and soil moisture retrieval works for SRAL altimeters, with good results from Sentinel-3A, where data are available.
Data are currently being analysed to craft DREAMs over further river systems.
References
Dorigo, W.A., Wagner, W., Albergel, C., Albrecht, F., Balsamo, G., Brocca, L., Chung, D., Ertl, M., Forkel, M., Gruber, A., Haas, E., Hamer, D. P. Hirschi, M., Ikonen, J., De Jeu, R. Kidd, R. Lahoz, W., Liu, Y.Y., Miralles, D., Lecomte, P. (2017). ESA CCI Soil Moisture for improved Earth system understanding: State-of-the art and future directions. In Remote Sensing of Environment, 2017, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2017.07.001.
Dargie, GC; Lewis, SL; Lawson, IT; Mitchard, ET; Page, SE; Bocko, YE; Ifo, SA; (2017) Age, extent and carbon storage of the central Congo Basin peatland complex. Nature , 542 pp. 86-90. 10.1038/nature21048.
As the severity and occurrence of flood events tend to intensify worldwide with climate change, the need for high-fidelity flood forecasting capability increases. However, this capability remains limited due to a large number of uncertainties in models and observed data. In this regard, the Flood Detection, Alert and rapid Mapping (FloodDAM) project, funded by Space Climate Observatory (SCO) initiatives, was set up to develop pre-operational tools dedicated to enabling quick responses in selected flood-prone areas, as well as improving the resolution, reactivity and predictive capability of existing decision support systems.
Hydraulic numerical models are used in hindcast mode to improve knowledge of flood dynamics, assess flood-related damage and design flood protection infrastructure. They are also used in forecast mode by civil security agencies in charge of decision support systems, for flood monitoring, alert and management. These numerical models are developed to simulate and predict water surface elevation (WSE) and velocity with lead times ranging from a couple of hours to several days. For instance, Telemac2D (www.opentelemac.org) solves the Shallow Water Equations with an explicit first-order time integration scheme, a finite element scheme and an iterative conjugate gradient method. However, such models remain imperfect because of the uncertainties in their inputs, which translate into uncertainties in the model outputs. These uncertainties are related, for instance, to the simplified equations, the numerical solver, the forcing and boundary conditions, or to model parameters resulting from batch calibration, such as friction coefficients.
Data Assimilation (DA) makes it possible to reduce these uncertainties by sequentially combining the numerical model outputs with observations as they become available, taking into account their respective uncertainties. These techniques are widely used in geosciences and have proven to be effective in river hydrodynamics and flood forecasting. The Ensemble Kalman Filter (EnKF) is implemented here to reduce uncertainties in the upstream time-varying inflow discharge to the river catchment as well as in spatially distributed friction coefficients, with the assimilation of in-situ WSE data at observing stations. The optimality of the EnKF depends on the ensemble size over which covariances are stochastically estimated and on the observing network, especially in terms of its spatial and temporal density. The use of remote-sensing (RS) data makes it possible to overcome the limits due to the lack and decline of in-situ river gauge stations, especially in flood plains. In recent years, Synthetic Aperture Radar (SAR) data have been widely used for operational flood management due to their ability to map flood extents over large areas in near real time and their all-weather, day-and-night image acquisition capability. Water bodies and flooded areas typically exhibit low backscatter intensity in SAR images, since most of the radar pulses are specularly reflected away upon arrival at the water surface. Therefore, these areas can be detected relatively straightforwardly from SAR images, with exceptions in built-up environments and vegetated areas. In the present work, RS-derived flood extents are obtained by a Random Forest (RF) algorithm applied to Sentinel-1 images. The RF was trained on a database that gathers manually delineated flood maps from Copernicus Emergency Management Service Rapid Mapping products from past flood events. It also takes into account the MERIT DEM to improve the flood detection precision and recall.
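As a rough illustration of such a Random Forest flood classification (assuming per-pixel features such as Sentinel-1 VV/VH backscatter and MERIT DEM elevation have already been extracted; the feature values and labels below are purely illustrative and not those of the actual training database):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative training set: one row per labelled pixel with
# [VV backscatter (dB), VH backscatter (dB), MERIT DEM elevation (m)]
X_train = np.array([[-22.0, -30.0, 12.0],    # open water / flooded
                    [-21.5, -29.0, 10.0],
                    [ -8.0, -14.0, 55.0],    # dry land
                    [ -9.5, -15.5, 80.0]])
y_train = np.array([1, 1, 0, 0])             # 1 = flooded, 0 = dry

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# Classify new pixels; in practice the predictions are reshaped back
# into a flood-extent map covering the Sentinel-1 scene
X_scene = np.array([[-20.0, -28.5, 11.0], [-10.0, -16.0, 60.0]])
print(rf.predict(X_scene))   # e.g. [1 0]
```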
This work highlights the merits of assimilating RS-derived flood extents along with in-situ data, which are usually confined to the river bed, in order to improve the representation of the flood plain dynamics. Here, the RS-derived flood extents are post-processed to express the number of wet and dry pixels over selected regions of interest (ROIs) in the floodplain. These pixel-count observations are assimilated along with in-situ WSE observations to account for errors in friction and upstream forcing. They provide spatially distributed information on the river and flood plain but with a limited temporal resolution that depends on the satellite overpass times; for instance, Sentinel-1 has a revisit frequency of several days (at most six days), while in-situ observations are available every 5 to 15 minutes at observing stations of the VigiCrue network (https://www.vigicrues.gouv.fr/).
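The sketch below shows a generic stochastic EnKF analysis step of the kind described above (textbook form only; the coupling with Telemac2D, the actual observation operator and the error settings used in this work are not reproduced):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, obs_operator):
    """Stochastic Ensemble Kalman Filter analysis step.

    ensemble     : (n_members, n_state) array of control vectors
                   (e.g. friction coefficients and inflow corrections)
    obs          : (n_obs,) observation vector (e.g. WSE and wet-pixel counts)
    obs_err_std  : (n_obs,) observation error standard deviations
    obs_operator : function mapping one state vector to observation space
                   (in practice a hydrodynamic model run; here any callable)
    """
    n_members, _ = ensemble.shape
    hx = np.array([obs_operator(m) for m in ensemble])        # (n_members, n_obs)
    X = ensemble - ensemble.mean(axis=0)                       # state anomalies
    Y = hx - hx.mean(axis=0)                                   # predicted-obs anomalies
    R = np.diag(obs_err_std ** 2)
    Pxy = X.T @ Y / (n_members - 1)                            # state-obs covariance
    Pyy = Y.T @ Y / (n_members - 1) + R                        # obs covariance
    K = Pxy @ np.linalg.inv(Pyy)                               # Kalman gain
    obs_pert = obs + np.random.randn(n_members, obs.size) * obs_err_std
    return ensemble + (obs_pert - hx) @ K.T                    # analysed ensemble

# Toy usage: 20 members, 2 state variables, identity observation operator
ens = np.random.randn(20, 2) + 1.0
analysed = enkf_update(ens, obs=np.array([1.2, 0.8]),
                       obs_err_std=np.array([0.1, 0.1]),
                       obs_operator=lambda x: x)
print(analysed.mean(axis=0))
```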
The study area is the Garonne Marmandaise catchment (south-west France), which extends over a 50-km reach of the Garonne river between Tonneins and La Réole. The control vector for the EnKF-DA is composed of seven friction coefficients (six on the main channel and one for the floodplain) and three corrective parameters for the inflow discharge. Results are shown for a flood event that occurred in January-February 2021, with forecast lead times up to +24 hours. It was shown that the assimilation of both RS and in-situ data outperforms the assimilation of in-situ data only, especially in terms of 2D dynamics in the flood plains. Quantitative performance assessments have been carried out by comparing the simulated and observed water level time series at in-situ observing stations and by computing 2D metrics between the simulated flood extent maps and the SAR-derived maps (i.e. Critical Success Index and F1-score, based on the contingency table). This work paves the way toward a cost-effective and reliable solution for flood forecasting and flood risk assessment over poorly gauged or even ungauged catchments. Once generalized, such developments could potentially contribute to hydrology-related disaster risk mitigation in other regions. Future progress built upon this work will extend to other catchments and to the assimilation of other flood observations.
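The two skill scores can be computed directly from the contingency table between the simulated and SAR-derived flood masks, for example (a minimal sketch; array names are illustrative):

```python
import numpy as np

def flood_map_scores(simulated, observed):
    """Critical Success Index and F1-score from two binary flood masks."""
    simulated = np.asarray(simulated, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    hits = np.sum(simulated & observed)            # true positives
    false_alarms = np.sum(simulated & ~observed)   # false positives
    misses = np.sum(~simulated & observed)         # false negatives
    csi = hits / (hits + false_alarms + misses)
    f1 = 2 * hits / (2 * hits + false_alarms + misses)
    return csi, f1

sim = np.array([[1, 1, 0], [0, 1, 0]])
obs = np.array([[1, 0, 0], [0, 1, 1]])
print(flood_map_scores(sim, obs))   # (0.5, ~0.667)
```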
For more than two decades, satellite altimetry has demonstrated the potential to derive water level time series of inland waters. Nowadays, accuracies of a few centimeters for large lakes and a few decimeters for smaller lakes and rivers can be achieved. However, there is still potential for quality improvements when optimizing the processing strategy, for example with respect to retracking algorithms, off-nadir effects, or outlier rejection.
In 2015, DGFI-TUM published the first DAHITI approach, which is based on an extended outlier rejection and a Kalman filter approach. In this poster, we present an updated DAHITI approach, which considers the following aspects for deriving highly accurate water level time series for small inland waters. First, a detailed analysis of the altimeter sub-waveforms is performed in order to detect the part of the radar echo that can be assigned to the water body of interest. Additionally, off-nadir reflections are analyzed and taken into account in order to derive reliable error information for the water level time series. This step is also the first step of the outlier rejection, which is extended by applying further criteria, for example a detection of ice coverage. In order to achieve long-term consistent and homogeneous water level time series, the latest geophysical corrections and models are applied and a multi-mission crossover analysis is performed for all altimeter missions.
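To illustrate the Kalman-filtering idea of combining heights from individual overflights into a water level time series, a generic scalar sketch is given below (illustrative noise settings and a simple random-walk model; not the actual DAHITI implementation):

```python
def kalman_water_level(observations, obs_var, process_var=0.01,
                       initial_level=0.0, initial_var=1.0):
    """Scalar Kalman filter for a water level time series.

    observations : water level measurements [m], one per epoch
    obs_var      : measurement variances [m^2], one per epoch
    process_var  : assumed variance of the level change between epochs [m^2]
    Returns lists of filtered levels and their variances.
    """
    level, var = initial_level, initial_var
    levels, variances = [], []
    for z, r in zip(observations, obs_var):
        # Prediction step: random-walk model for the lake/river level
        var = var + process_var
        # Update step with the new altimetric height
        gain = var / (var + r)
        level = level + gain * (z - level)
        var = (1.0 - gain) * var
        levels.append(level)
        variances.append(var)
    return levels, variances

obs = [102.10, 102.35, 102.20, 101.95]
obs_var = [0.05, 0.20, 0.05, 0.05]        # a noisier epoch gets less weight
print(kalman_water_level(obs, obs_var, initial_level=102.0))
```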
We present preliminary results for selected inland waters, validated using in-situ data. The results of the new DAHITI approach show a significant improvement in the accuracy of the water level time series and their error estimates.
Climate change increases the likelihood of catastrophic flood events, resulting in destruction of cropland and infrastructure, thereby threatening food security and exacerbating epidemics. These dangerous impacts highlight the need for rapid monitoring of inundation, which is necessary to estimate the dimensions of the disaster. Accurate satellite-based flood mapping can support the risk management cycle, from near-real-time rescue and response to post-event analysis. Current remote sensing techniques allow cheap, quick, and accurate flood classifications using freely accessible satellite data, for instance from the Copernicus Sentinel satellites. Indeed, the Synthetic Aperture Radar (SAR) sensor on board Sentinel-1 (S1) is uniquely suited to flood mapping due to its 24-hour, weather-independent imaging technology, and is widely used globally due to the open data availability. Binary classifications are widely used to extract flood inundation from SAR data, but due to the large discrepancy in the prevalence of flood and non-flood classes in an S1 tile, finding appropriate labelled samples to train classifiers is extremely challenging as well as time consuming. Furthermore, the process of training data collection is non-trivial due to a variety of uncertainties in SAR data originating from the underlying land use, and incorrect labelling could lead to gross misclassifications. For example, if the training data do not sufficiently represent the diversity of flood surface roughness, large inundated tracts could be missed by the classifier. Consequently, training a binary classifier can be expensive, slow, and compromise on accuracy, since precise labels for both classes are required despite only one class being of interest.
One-class classifiers address this issue, by using only samples of the class of interest, i.e. the true positives, making them the perfect choice for flood classification. Even though one-class classifiers have outperformed classical binary classifiers for a variety of use-cases, surprisingly they have not been widely used so far in flood mapping literature. Accordingly, this study provides the first assessment of one-class classifiers for flood extent delineation from SAR data.
The study area is the coastal part of Beira, Mozambique, where Cyclone Idai made landfall on 15 March 2019. Idai was one of the deadliest cyclones on record in the Southern Hemisphere, affecting over 850,000 people and leading to a cholera outbreak. S1 SAR data were used to classify the inundated area using Support Vector Machine (SVM) and Random Forest (RF) for the binary classification and one-class SVM (OCSVM) for the one-class classification. The data inputs and training data for both flood classifications were the same. For validation, concurrent cloud-free Sentinel-2 (S2) optical data were used.
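To make the classifier setup concrete, the sketch below contrasts a one-class SVM trained only on flood samples with a binary Random Forest that additionally needs non-flood labels. It is a schematic illustration with synthetic backscatter features, not the actual processing chain or parameters of this study.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import RandomForestClassifier

# Synthetic VV/VH backscatter features (dB); real inputs would come from S1 tiles.
rng = np.random.default_rng(0)
X_flood = rng.normal(loc=[-18.0, -25.0], scale=1.5, size=(500, 2))   # low backscatter over water
X_dry = rng.normal(loc=[-8.0, -14.0], scale=2.0, size=(500, 2))      # land
X_scene = np.vstack([X_flood[:100], X_dry[:100]])                    # pixels to classify

# One-class SVM: trained on flood samples only (the true positives)
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_flood)
flood_oc = ocsvm.predict(X_scene) == 1          # +1 = inlier = flood

# Binary Random Forest: needs labelled samples of both classes
X_train = np.vstack([X_flood, X_dry])
y_train = np.hstack([np.ones(len(X_flood)), np.zeros(len(X_dry))])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
flood_rf = rf.predict(X_scene) == 1

print(flood_oc.sum(), flood_rf.sum())           # pixels flagged as flood by each classifier
```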
Preliminary results suggest that one-class classifiers can perform equivalently to or better than standard classifiers for flood detection from SAR images given a similar volume of training data. Moreover, one-class classifiers offer the advantage of using limited training data and thus result in lower classifier training and processing times, without compromising detection accuracy. Based on the results obtained in this first benchmarking study, the use of one-class classifiers for flood mapping should be further explored for a robust performance assessment across different underlying land uses and geographical regions.
Changes in the runoff and in the alluvial outflow lead to changes in the slope, depth, meandering, width of the river bottom and vegetation. The bed load and the suspended load can change the morphology of the river bed as a result of high runoff. This has a direct impact on the determination of the fairway in navigable rivers. That is why providing instruments to predict the modifications in river morphology that may affect the fairway is of great importance for the maintenance of navigable rivers. Achieving this also contributes to understanding the freshwater cycle and to developing our knowledge of the Earth. To address this problem it is necessary to forecast sediment deposition amounts and river runoff and to determine how they will change the river morphology. Predicting sediment deposition potential depends on a variety of meteorological and environmental factors such as turbidity, surface reflectance, precipitation, snow cover, soil moisture and vegetation indices. Satellite data offer a rich variety of datasets supplying this information. We adopt deep learning to address some specifics of Earth observation data, such as their inconsistency, and generate missing data in the time series with generative adversarial networks (GANs). We then apply the resulting consistent Earth observation data, along with in-situ measurements, to other deep learning architectures (convolutional neural networks, CNNs, and LSTMs) to generate forecasts for river runoff, water level and sediment deposition, using historic satellite data of the meteorological features listed above and in-situ measurements of water level, runoff and turbidity. Thus, we employ Earth observation data to develop AI-based solutions, which translates as EO4AI. Further, we report on a series of prediction models and experiments carried out on data from the downstream Danube and from the Arda that show forecasts with minimal deviation from real measurements. To leverage the applicability of the forecasts on river morphology in integrated models, we calibrate hydrodynamic models using Telemac, and we demonstrate how the fusion of a complex EO4AI method and geometry mapping produces a solution for a real user need: being aware of upcoming changes in the river fairway of the downstream Danube. The satellite data are provided by ADAM via the NoR service of ESA. ADAM provides access to satellite datasets from different satellites with semantic relevance for the construction of the sediment transport and deposition forecast model discussed above. Finally, we demonstrate a visualization of the forecasted fairway on a GIS component using the ESRI ArcGIS server.
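For readers unfamiliar with the forecasting setup, a toy LSTM regressor of the kind referred to above could look as follows (PyTorch; the feature set, window length and layer sizes are placeholders and not the configuration used in this work).

```python
import torch
import torch.nn as nn

class RunoffLSTM(nn.Module):
    """Toy LSTM regressor mapping a window of daily features (precipitation,
    snow cover, turbidity, ...) to next-day runoff. Feature names, window
    length and layer sizes are placeholders, not the study configuration."""
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # forecast from the last time step

model = RunoffLSTM()
x = torch.randn(8, 30, 6)                  # 8 samples, 30-day feature windows
print(model(x).shape)                      # torch.Size([8, 1]) forecasted runoff
```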
Acknowledgement
This work has been carried out within ESA Contract No 4000133836/21/NL/SC
Monitoring water resources from space is a rapidly developing area of application for radar altimetry. Recent progress in instrumentation (development of SAR and SARIn altimeter sensors) and radar signal treatment has allowed us for the first time to include medium and small continental water objects in the scope of application of altimetry. In the Republic of Ireland only lakes with an area of more than 10 km2 are included in the in-situ water level observational network. This represents only 30% of the total lake area. The island dimensions (~84 400 km2) limit the development of long fluvial networks. As a result, the width of even the largest river channels does not exceed 250 m and ranges mainly within 20-100 m. The potential of radar altimetry for monitoring lakes of 2-4 km2 area and rivers of 80-200 m width has already been demonstrated in prior studies. We explore the capacity of the most recent generation of satellite altimetry missions (Jason-3, CryoSat-2, and Sentinel-3) for monitoring water bodies, water courses and the water regime of peatlands on the entire territory of the Republic of Ireland. In the framework of the ESA HYDROCOASTAL Project we 1) investigated the performance of SAR (Sentinel-3) and conventional (Jason-2,-3) altimetry to retrieve water level time series, 2) evaluated the advantage of the enhanced (80 Hz) Sentinel-3 sampling rate processing provided by the ESA G-POD/SARvatore online and on-demand SAR Altimetry Processing Service and 3) examined the gain from the combination of measurements of repeat-orbit (Sentinel-3) and geodetic-orbit (CryoSat-2) satellite missions. We also investigated the effect of river width and the configuration of the fluvial virtual station on the accuracy of the water height retrievals, as well as assessed the impacts of the surrounding relief on the performance of satellites to produce high-quality altimetric water level time series that may be of value to a broad variety of users.
At high latitudes, ice cover on lakes and rivers is a key factor in local and regional climatic, environmental and socioeconomic systems. It modulates heat and mass exchange with the atmosphere, reshapes riparian ecosystems, and may induce hazardous flooding. In many remote regions of the Arctic, freshwater (river and lake) ice is a crucial actor for the socioeconomic resilience of local communities. It provides: 1) a unique infrastructure for the transport of goods and people via winter ice roads; 2) access to fishing and hunting grounds; and 3) drinking water supplies. Each year hundreds of kilometres of roads are built on lake and river ice in Canada, Alaska, Russia, Norway, Finland, and Switzerland by regional/local authorities or by local residents. For the safe usage of ice roads, a variety of information on ice parameters (initial freeze-up, structure, thickness and growth history, fracturing, metamorphism, etc.) is needed. During the last few years, several European (ESA CCI+ Lakes, ESA LIAM, CNES TOSCA) and Russian (RFBR "Arctic") projects have funded research dedicated to the investigation of freshwater ice from space. In this presentation, we provide several examples of the use of satellite observations for the study of lake and river ice parameters and discuss results in the context of their potential application for the safe use of ice cover on Lake Baikal and on the Ob River (Siberia).
On Lake Baikal, intra-thermocline eddies often form prior to ice formation and continue to develop under the ice cover. These eddies weaken and melt the ice. Several areas of frequent eddy appearance are located in sections of the lake where ice roads are used by local people and not monitored operationally. The combination of the different optical, imaging SAR and radar altimetry missions helps to monitor and understand the spatial distribution of eddies and the transformation of ice cover by their presence. On the Ob River, radar altimetry observations were used for retrieving ice phenology dates and ice thickness along a 400-km river reach. The retrievals demonstrated a good potential for the forecasting of the ice road operation in Salekhard City. In situ observations are needed for adequate interpretation of satellite observations in the context of changing ice properties. Radiative transfer modelling can also be helpful and, in the near future, may allow for the estimation of the main freshwater ice parameter of interest - ice thickness. Here, we present the first results of the application of the Snow Microwave Radiative Transfer (SMRT) model for the simulation of radar altimeter backscatter and emissivity of Lake Baikal ice during winter 2018-2019.
Satellite remote sensing is an effective approach to monitor floods over large areas. Ground-based gauges remain a vital instrument for monitoring water levels or streamflow, but they cannot capture the spatial extent of a water body or flood. Numerical models can be an excellent source of such information, but are not readily available in all regions and can be costly to set up. Satellites already orbit and monitor nearly all regions of the globe and can thus provide relevant information where other sources are lacking. However, while earth observation has many advantages, there are also data gaps and challenges, which can be different for each specific sensor.
Flood mapping studies and applications often use imagery from optical sensors, e.g. MODIS, Landsat, Sentinel-2, and/or synthetic aperture radar (SAR) sensors, e.g. ALOS, Sentinel-1. SAR's cloud-penetrating capability is especially important for flood mapping, as clouds are often present over (inland) floods, because these are triggered by rainfall originating from clouds. ESA's Sentinel-1 constellation has for the first time in history made it possible to provide reliable flood mapping services on a large (even global) scale. The synergistic use of optical imagery can help overcome some of SAR's known issues regarding flood mapping (such as signals resembling those of water over sandy soils and/or agricultural fallows), as well as help provide more timely flood maps, essential for disaster response and relief efforts. Still, current satellite-derived flood maps are not perfect, and under- and overestimations of flood waters are to be expected. This is especially true for areas under thick vegetation canopies, which neither optical nor (most) SAR sensors can penetrate, and for urban areas, where signals can be distorted and data from the freely available satellites mentioned here do not possess the spatial resolution required to accurately map water between or within urban features.
The HYDrologic Remote sensing Analysis for Floods (HYDRAFloods) tool is already using multiple sensors for the improved capabilities mentioned above, with current research focusing on data fusion of optical and SAR imagery as well as the inclusion of hydrologic information. Hydrology plays an important role in the general water cycle, influences floods and can also be used to constrain or improve satellite-derived flood maps. Low soil moisture values in sandy soils and areas of agricultural fallows can be used to prevent false positives derived from SAR imagery. Hydrologically-relevant topography information can be used in a similar fashion, but also to identify potentially flooded areas that are otherwise obscured from satellite imagery, such as under forest canopies. For this, we link the flood maps to hydrologically connected surface water flow paths.
HYDRAFloods is under active development in the SERVIR-Mekong program, covering a large part of Southeast Asia, by ADPC, SIG, SEI and Deltares, supported by NASA and USAID. It is used operationally by the United Nations World Food Programme (WFP) in Cambodia, being made available in their Platform for Real-time Impact and Situation Monitoring (PRISM), and was field tested during the severe floods that hit the country in October 2020. HYDRAFloods embraces open science and combines relevant algorithms from literature with our own custom developments, which are published in open access journals. It runs on the Google Earth Engine platform to facilitate easy data access and running at scale across the entire South East Asia region. The code itself is hosted on an online repository with open source license, including up-to-date documentation.
HYDRAFloods has been described in general at other conferences, so we will only give a brief overview and instead focus on recent research on including hydrologically relevant information in the processing chain to obtain more accurate flood maps. We hope this can lead to a fruitful discussion on the underlying techniques and assumptions, as well as contribute to a broader discussion on combining data from various sources (e.g. in-situ, models, EO) and its best practices.
The Sentinel-6 mission, launched in November 2020, carries the first radar altimeter operating in open burst with a PRF high enough (~9 kHz) to perform the focussing of the whole set of echoes from a target observation in a fully coherent way, with practically negligible impact from along-track replicas. Furthermore, this feature allows the along-track resolution to be improved down to the theoretical limit of around 0.5 m when processing the data with a Fully-Focussed SAR (FFSAR) algorithm. This resolution improvement represents a revolutionary step with respect to the ~300 m along-track resolution provided by current operational processors based on Unfocussed SAR algorithms, commonly used in previous radar altimeters with a closed-burst chronogram, such as CryoSat-2 and Sentinel-3. In this contribution, we explore new applications over inland water surfaces derived from such new Sentinel-6 FFSAR products. Indeed, the FFSAR Ground Prototype Processor (GPP), developed by isardSAT and based on the backprojection algorithm [1], has been used to process data over different types of inland water targets with the following objectives: (1) validate range measurements with in-situ water height data in the case of nadir targets and (2) monitor water extent for off-nadir targets located within certain observation constraints. As a main outcome, we present a methodology to estimate the water extent of small targets located at unambiguous across-track positions. We have analysed targets that present strong seasonal variability in terms of area, and validated the method by comparing water extent measurements derived from Sentinel-6 with those derived from optical and SAR imagery and in-situ observations. The overall work is part of the VALERIA (Validating Algorithms Levels 1A and 2 in Ebre RIver Area) project developed within the Sentinel-6 Validation Team using data from the satellite commissioning phase.
[1] Egido, Alejandro and Walter H. F. Smith. “Fully Focused SAR Altimetry: Theory and Applications.” IEEE Transactions on Geoscience and Remote Sensing 55 (2017): 392-406.
Global hydrological models simulate the water storages and fluxes of the water cycle, which is important for e.g. water management decisions and drought/flood predictions. However, models are plagued by uncertainties due to model input errors (e.g. climate forcing data), model parameters, and model structure, resulting in disagreements with observations. To reduce these uncertainties, models are often calibrated against in-situ streamflow observations or compared against total water storage anomalies (TWSA) derived from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. In recent years, TWSA data have been integrated into some models via data assimilation.
In this study, we present our framework for jointly assimilating satellite and in-situ observations into the WaterGAP Global Hydrological Model (WGHM). For the first time, we assimilate three data sets:
(a) GRACE-derived TWSA,
(b) in-situ streamflow observations from gauge stations; this is in preparation for the Surface Water and Ocean Topography (SWOT) satellite, which will be launched in 2022 and is expected to allow the derivation of streamflow observations globally for rivers wider than 50-100 m, and
(c) Global SnowPack snow coverage data derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), which is installed on NASA's Earth Observing System satellites.
GRACE assimilation strongly improves the TWSA simulations within the Mississippi River Basin, e.g. the correlation increases to 91%, which is consistent with previous studies. However, we find in this case that the streamflow simulation deteriorates; for example, the correlation reduces from 92% to 61% at the most downstream gauge station. In contrast, jointly assimilating GRACE data and streamflow observations from GRDC gauge stations improves the streamflow simulations by up to 33% in terms of e.g. RMSE and correlation while maintaining the good TWSA simulations. We use the snow coverage data first to independently validate the impact of TWSA and streamflow assimilation on the snow simulation, and then, for the first time, assimilate the snow coverage data into the WGHM. We expect that this will not only further enhance the streamflow simulations but also the simulations of single WGHM water storages such as the snow storage.
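For orientation, the analysis step of a generic stochastic ensemble Kalman filter, the kind of update that underlies such joint assimilation frameworks, can be sketched as follows (a simplified linear-observation version, not the WGHM assimilation code itself).

```python
import numpy as np

def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
    """Analysis step of a stochastic ensemble Kalman filter (generic sketch).
    X : (n_state, n_ens) ensemble of model states (e.g. water storages)
    y : (n_obs,)         observations (e.g. gridded TWSA, gauged streamflow)
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs)   observation error covariance
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)             # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                         # forecast error covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    # perturb observations so the analysis ensemble keeps the correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - H @ X)                        # analysis ensemble

# tiny example: 3 state variables, 20 ensemble members, 1 observation
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 20))
H = np.array([[1.0, 1.0, 1.0]])                       # observe the storage sum
Xa = enkf_update(X, y=np.array([0.5]), H=H, R=np.eye(1) * 0.1)
print(Xa.shape)                                       # (3, 20)
```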
Water volumes available in natural and artificial lakes are of prime interest, both for water management purposes and for water cycle understanding. However, less than 1% of global lakes are monitored. To this end, remote sensing has been a useful tool providing continuous and global information for more than 30 years.
Combining water elevation information from altimetry with water surface extent from optical and SAR images can provide relative volume variations through the creation of a height-surface-volume relationship (hypsometric curve). This method is currently limited by the altimetry data coverage, which is not global (less than 3% worldwide).
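The hypsometric approach can be illustrated with a few lines of Python: fit an area-height relationship from paired altimetry and imagery observations and integrate it between two levels to obtain a relative volume change (all values below are invented for illustration).

```python
import numpy as np

# paired observations: altimetric water level h (m) and water area A (km²)
# from optical/SAR images acquired close in time (illustrative values)
h = np.array([102.1, 103.4, 104.8, 106.0, 107.5])
A = np.array([12.0, 14.5, 17.3, 20.1, 23.6])

coef = np.polyfit(h, A, deg=2)        # hypsometric curve A(h)
area_of = np.poly1d(coef)

def volume_change(h1, h2, n=200):
    """Relative volume variation between two levels by integrating A(h) dh."""
    hh = np.linspace(h1, h2, n)
    return np.trapz(area_of(hh), hh) * 1e6   # km²·m -> m³ (1 km² = 1e6 m²)

print(volume_change(103.0, 106.5))    # stored volume gained between the two levels
```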
Even though the future wide-swath altimeter SWOT will provide the first global survey of water bodies, the estimation of bathymetry and the corresponding hypsometric curve remains a challenge for estimating water volume. A contextual approach can be considered and even trained to approximate a reservoir bathymetry from a “filled” DEM. We used such a contextual approach to develop a deep learning algorithm that recreates the reservoir's bathymetry.
The first step consisted of using Digital Elevation Models (DEMs) cropped to the sub-basins provided by the Hydrobasin shapefile database. This led to the creation of an artificial database of DEM patches with their associated pseudo-water basins and associated 20 m high reservoirs. This approach was applied to relatively “dry” but mountainous or hilly countries such as Chile, Turkey and Morocco, among others. Specific attention was given to avoiding planar DEM areas from already existing water dams, with water heights varying from 5 to 20 m. We also checked for each sub-basin that the created virtual reservoir was realistic, for instance in terms of dam length or related water surface. In this way, we created around 9000 DEM/water surface patches to train a U-Net deep learning algorithm. We also used data augmentation, data refining and cross-validation over the simulated reservoirs to obtain a realistic model. The recreated bathymetry led to an error lower than 10% on volume estimation, which is still improving at this time.
Further advances could be applied not only to reservoirs, but also to lakes and rivers. This would improve global water volume estimations, but also discharge estimates, thanks to DEM datasets of ever-improving precision and resolution, such as those from the future CO3D mission (CNES).
Flooding is one of the most damaging natural hazards, causing economic losses and threats to livelihoods and human health. Predicting flooding patterns in river systems is a challenge in large and remote areas. Climate change has intensified the occurrence of severe flood events, and accurate predictive models are required for flood risk management. However, the available in-situ data in remote areas and ungauged river basins are scarce and often not available in the public domain. The accuracy of hydrodynamic models is limited by the quality of the available observations, which are essential to calibrate unobserved or unobservable model parameters.
Accurate topographic elevation measurements are essential to replicate 1D/2D flood processes. Satellite-based DEMs have the advantage of providing large coverage in remote areas. However, high-resolution DEMs are not always available and therefore it is necessary to use lower-resolution products. Missions such as the Shuttle Radar Topography Mission (SRTM) or ALOS PALSAR offer freely available DEMs down to 1 arcsec resolution. When used as input to hydraulic models, such DEMs can lead to large errors in simulated water surface elevation and surface water extent, due to the complex topography of floodplains that is normally hard to map precisely. To better integrate these data in the model, a finer-resolution product is needed to map the floodplain topography and river bathymetry. The novel altimetry mission ICESat-2, operating since 2018, offers large spatial coverage, with an along-track resolution down to 70 cm in its photon cloud product ATL03. These data have shown great potential for mapping river topography, identifying narrow river structures, and also where multi-channel rivers and braided structures are present. ATL03 can be used as a control point dataset to correct biases and refine DEMs.
In this study we use a 1D hydraulic model derived from ICESat-2 data to characterize the river bed geometry of the main channel. We use the ATL03 product to map the topography of the river channel, providing accurate data on river bed geometry. We calibrate depth and Manning roughness against ATL13 water surface elevation observations, the inland water product from ICESat-2. The water surface elevation is simulated with decimeter-level accuracy, providing a precise characterization of the river bathymetry. To study the water surface elevation in the floodplain, we combine the SRTM DEM at 1 arcsec resolution with ATL03 cross-sections to reduce elevation errors. With the refined DEM, we run the Mike Flood 1D/2D inundation module and validate the simulated inundation areas with flood maps from the Global Surface Water Explorer.
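As an illustration of how ATL03-derived heights can serve as control points, a simple constant-bias correction of a coarse DEM could look like the sketch below; the actual DEM refinement in this study may be more elaborate, and all values are invented.

```python
import numpy as np

def correct_dem_bias(dem_heights, icesat2_heights):
    """Shift a coarse DEM using ICESat-2 ATL03-derived ground heights sampled
    at the same locations (a simple constant-bias correction sketch)."""
    bias = np.nanmedian(dem_heights - icesat2_heights)
    return dem_heights - bias, bias

dem_pts = np.array([105.2, 104.8, 106.1, 103.9])     # DEM heights at control points (m)
atl03_pts = np.array([103.6, 103.3, 104.4, 102.5])   # photon-derived heights (m)
corrected, bias = correct_dem_bias(dem_pts, atl03_pts)
print(bias)   # ~1.55 m systematic offset removed
```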
The developed workflow is demonstrated for sections of the Amur River, which flows through China and Russia. This river is characterized by large floodplains and a braided structure, making it a suitable study case to demonstrate our methodology.
Mediterranean regions are strongly affected by intense and violent rainfall events causing floods. The vulnerability to flooding in the Moroccan High Atlas, especially in the Tensift basin, has been increasing over the last decades. Rainfall-runoff models can be very useful for flash flood forecasting. However, event-based models require a reduction of their uncertainties related to the estimation of initial moisture conditions before a flood event. Soil moisture may strongly modulate the magnitude of floods and is thus a critical parameter to be considered in flood modeling.
The aim of this study is to compare daily soil moisture measurements obtained by time domain reflectometry (TDR) at the Sidi Rahal station with satellite soil moisture products (European Space Agency Climate Change Initiative, ESA-CCI) in order to estimate the initial soil moisture conditions for each event. Rainfall-runoff observations of 30 sample flood events from 2011 to 2018 in the Ghdat basin were extracted and modeled with an event-based rainfall-runoff model (HEC-HMS), based on the Soil Conservation Service curve number (SCS-CN) loss model and a Clark unit hydrograph, developed for the simulation and calibration of the 10-minute rainfall-runoff.
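For reference, the SCS-CN loss model underlying the HEC-HMS setup reduces to a short formula; the sketch below computes event runoff from rainfall for an assumed curve number (the CN value and rainfall depth are illustrative, not calibrated values from this study).

```python
def scs_cn_runoff(P, CN, lambda_ia=0.2):
    """SCS Curve Number direct runoff (mm) for event rainfall P (mm).
    S is the potential maximum retention; Ia = lambda_ia * S is the
    initial abstraction (standard value 0.2)."""
    S = 25400.0 / CN - 254.0          # mm
    Ia = lambda_ia * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

print(scs_cn_runoff(P=45.0, CN=80))   # ~10.9 mm of direct runoff
```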
These data were used in the validation process of the event modeling part and indicate that soil moisture could help to improve the initial conditions of event-based models in small basins and thus the quality of flood forecasting. The rationale is that a better representation of the catchment states leads to a better streamflow estimation. By exploiting the strong physical connection between soil moisture dynamics and rainfall, this methodology is very satisfactory for reproducing rainfall-runoff events in this small Mediterranean mountainous watershed, since the validation Nash coefficients range from 0.76 to 0.89; the same approach could be implemented in other watersheds of this region. The results of this study indicate that remote sensing data are theoretically useful for estimating soil moisture conditions in data-sparse watersheds in arid Mediterranean regions.
Keywords: Soil moisture; Floods; Remote sensing; Hydrological modeling; CN method; Mediterranean basin.
In semi-arid regions and especially in the Sahel, water bodies such as small reservoirs, small lakes, and ponds are vital resources for people. Most studies on inland waters in Africa focus on large lakes, such as Lake Chad, but the numerous lakes and ponds found near almost every village in the Sahel are poorly known. These small water bodies (SWB) are critical in terms of water resources and important for greenhouse gases and biodiversity. SWB have probably increased in number and surface area recently, due to changes in land surface properties after the big Sahelian drought of the late 20th century (Gal et al. 2017, doi 10.5194/hess-21-4591-2017) and to dam building, as for instance in Burkina Faso. For a more detailed assessment of changes in water resources, it is necessary to quantify the water volume variability and hydrological regime of these SWB at the regional scale.
The objectives of this work are to develop methods to monitor water quantity of SWB by combining optical and radar remote sensing. This study is carried out over 3 countries (Niger, Mali and Burkina Faso) and addresses the water regime of 40 water bodies over the 2016-2021 period.
Water surface is derived from Sentinel-2 optical data. Algorithms for water detection generally face two issues in this region: i) the high number of vegetated water bodies (floating vegetation, grasses or trees), and ii) the extremely high and unusual reflectance of Sahelian waters. It turned out that a threshold on the MNDWI index, chosen ad hoc for each lake and implemented in Google Earth Engine, is a fast and efficient method to estimate water areas. Water levels are derived from Sentinel-3 altimetry data processed with the ALTIS software (Frappart et al. 2021, doi 10.3390/rs13112196). Careful extraction is required for water bodies in close proximity, such as the Tanvi reservoirs in Burkina Faso, since multiple signals coming from neighbouring water bodies may mix in the radar data.
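A minimal Google Earth Engine (Python API) sketch of the MNDWI thresholding step could look as follows; the Sentinel-2 collection ID, the lake location and the threshold value are placeholders, since the threshold is tuned ad hoc for each lake in this work.

```python
import ee
ee.Initialize()   # requires prior Earth Engine authentication

aoi = ee.Geometry.Point([1.55, 12.45]).buffer(3000)   # placeholder lake location
threshold = 0.1                                       # ad hoc, tuned per lake

s2 = ee.Image(
    ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
    .filterBounds(aoi)
    .filterDate('2020-10-01', '2020-11-01')
    .sort('CLOUDY_PIXEL_PERCENTAGE')
    .first())

mndwi = s2.normalizedDifference(['B3', 'B11'])        # (green - SWIR) / (green + SWIR)
water = mndwi.gt(threshold).selfMask()

area_m2 = (water.multiply(ee.Image.pixelArea())
           .reduceRegion(reducer=ee.Reducer.sum(), geometry=aoi, scale=20))
print(area_m2.getInfo())                              # water area within the AOI
```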
Water levels and matching water areas are combined to derive surface-height curves. This allows estimating water levels from any Sentinel-2 acquisition and therefore densifies the water level time series derived from Sentinel-3 altimetry data alone. The time series of water levels are then used to estimate the water level decrease during the dry season (generally from November to June), which is compared to the evaporation loss from each SWB estimated using Penman's method and ERA5 data.
Given that water inflow from precipitation is null during the dry season, differences between the water level decrease and evaporation are due to water losses or gains from anthropogenic activities or exchanges with groundwater or river networks. For the 40 SWB studied, evaporation averages about 7 mm/d during the dry season, whereas water losses vary significantly across water bodies. Water bodies exposed to intensive pumping exhibit water loss rates significantly higher than the evaporation rate, reaching a minimum water balance value of around -12.5 mm/d. Other water bodies display the opposite situation, for example lakes in the inner Niger Delta, where the flood extends into the dry season and water is supplied by groundwater or the river network, with a water balance of around 5.7 mm/d.
The results show the potential of the water balance approach in poorly observed semi-arid regions to better understand hydrological processes, including human management of reservoirs. This is particularly relevant for the forthcoming SWOT mission, which will enable this approach to be applied at the global scale.
Continental and global hydrological models are the primary means to simulate surface/sub-surface water storage, water flux, and surface water inundation variables, which are required for hazard mitigation and policy support plans. However, establishing these large-scale models is challenging since the complicated physical processes that govern large-scale hydrology cannot be fully resolved by the simplified equations in these schemes. Besides, it is well known that the model parameters are insufficient to account for the intensification of the water cycle caused by climate change and anthropogenic modifications. Another issue is that most hydrological and hydraulic models are at best only calibrated against river discharge or similar data, but these calibrated parameters may have limited influence on the estimation of water storage and water volume changes in large-scale basins. In this study, we demonstrate the extent to which Terrestrial Water Storage (TWS; a vertical summation of surface and sub-surface water storage) data from the Gravity Recovery And Climate Experiment (GRACE) and its follow-on mission (GRACE-FO), as well as remotely sensed soil moisture data, can improve the estimation of river discharge, water extent and water storage during episodic droughts and floods. For this, we present the structure of our in-house ensemble Kalman filter based calibration and data assimilation (C/DA) as well as Bayesian model-data merging frameworks to integrate freely available satellite data into the in-house modified W3RA water balance model forced by ERA5 data. The results are demonstrated through simulations of water storage, river discharge, drought characteristics and floods in West Africa and Europe.
The number of active gauges with an open-data policy for discharge monitoring along rivers has decreased over the last decades. Therefore, we cannot properly answer crucial questions about the amount of freshwater available in a certain river basin, the spatial and temporal dynamics of freshwater resources, or the distribution of the world's freshwater resources in the future. Recent breakthroughs in spaceborne geodetic techniques enable us to overcome the lack of comprehensive measurements of freshwater resources and allow us to understand the hydrological water cycle more realistically. Among the different techniques for estimating river discharge from space, developing a rating curve between ground-based discharge and spaceborne river water level or width is the most straightforward one. However, this does not always lead to promising results, since power-law rating curves describe a river section with a regular geometry. Such an assumption may cause a large modeling error. Moreover, rating curves do not deliver a proper estimation of discharge uncertainty as a result of the mismodelling and the coarse assumptions made for the uncertainty of the inputs.
Here, we propose a nonparametric model for estimating river discharge and its uncertainty from spaceborne river width measurements. The model employs a stochastic quantile mapping function scheme by, iteratively: 1) generating realizations of river discharge and width time series using Monte Carlo simulation, 2) obtaining a collection of quantile mapping functions by matching all possible permutations of simulated river discharge and width quantile functions, 3) adjusting the measurement uncertainties according to the point cloud scatter. The algorithm’s estimates are improved in each iteration by updating the measurement uncertainties according to the difference between the measured and estimated values.
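The core idea of matching quantile functions can be sketched in a few lines; the snippet below shows one deterministic realisation of the mapping from river width to discharge, omitting the Monte Carlo simulation and the iterative uncertainty update described above (all inputs are synthetic).

```python
import numpy as np

def width_to_discharge(width_obs, width_hist, q_prior):
    """Map a measured river width to discharge by matching empirical quantile
    functions (one realisation of the stochastic quantile mapping scheme).
    q_prior is any sample believed to follow the reach's discharge distribution,
    e.g. from a global model."""
    # empirical non-exceedance probability of the measured width
    p = np.searchsorted(np.sort(width_hist), width_obs) / len(width_hist)
    # discharge with the same non-exceedance probability
    return np.quantile(q_prior, np.clip(p, 0.0, 1.0))

rng = np.random.default_rng(0)
widths = rng.gamma(shape=4.0, scale=40.0, size=300)       # synthetic river widths (m)
q_model = rng.lognormal(mean=6.0, sigma=0.6, size=300)    # synthetic discharge (m3/s)
print(width_to_discharge(180.0, widths, q_model))
```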
We validate the proposed algorithm over 14 river reaches along the Niger, Congo, Po and Mississippi rivers. Our results show that the proposed algorithm can mitigate the effect of measurement noise and also possible mismodelling. Moreover, the proposed algorithm delivers a meaningful discharge uncertainty. Evaluating the discharge estimates via the stochastic nonparametric quantile mapping function and the rating curve technique shows that the performance of the proposed algorithm is superior to the rating curve technique especially in challenging cases.
With an along-track resolution of around 300 m, ESA's CryoSat-2 (CS2) brought along a whole new range of monitoring possibilities for inland water bodies. The introduction of Synthetic Aperture Radar (SAR) altimetry enabled the study of rivers and lakes that were not visible with conventional Low Resolution Mode (LRM) altimeters. However, the 300 m resolution is still a challenge for the smallest water bodies, for which sometimes no or only a single observation is available.
Over some selected water bodies, the CS2 altimeter operates in SAR Interferometric (SARIn) mode, using both antennas on board. The phase difference between the two returns can be used to locate the across-track origin of the echo. While, traditionally, retracking methods are used to retrieve a single surface height estimate from waveforms over inland water bodies, in this study we apply a swath approach in which multiple peaks of single SARIn waveforms are retracked and geolocated across track using the phase difference information.
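The phase-to-location step can be illustrated with the standard flat-Earth relation between the interferometric phase difference and the across-track look angle; the sketch below uses nominal CryoSat-2 constants and ignores baseline roll and Earth curvature, so it is only indicative of the geometry, not of the actual swath processor.

```python
import numpy as np

WAVELENGTH = 0.0221   # m, Ku band (nominal)
BASELINE = 1.17       # m, interferometric baseline (nominal)
ALTITUDE = 720e3      # m, approximate orbit altitude

def across_track_offset(phase_diff):
    """Across-track ground offset of a retracked waveform peak inferred from
    the SARIn phase difference (flat-Earth sketch)."""
    look_angle = np.arcsin(WAVELENGTH * phase_diff / (2.0 * np.pi * BASELINE))
    return ALTITUDE * np.tan(look_angle)    # metres from nadir

# phase differences (rad) of several retracked peaks in one waveform
print(across_track_offset(np.array([-0.6, 0.2, 0.8])))   # ~[-1300, 430, 1730] m
```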
We show that this method can be used to retrieve a large number of valid water level estimates (WLE) for each SARIn waveform, even from water bodies that are not immediately located at the satellite nadir. We investigate the potential of this technique over rivers and lakes by looking at the increase in spatial coverage as well as at the impact on the precision of the measurements when compared with conventional nadir altimetry and in-situ hydrometric data.
Increasing the number of WLE is of great importance especially for small water bodies, where the number of available valid measurements from altimeters is generally very limited. The results presented in this work are additionally relevant for the future Copernicus Polar Ice and Snow Topography Altimeter mission (CRISTAL), which will also fly an interferometric altimeter.
Restricted access to freshwater and crop failure lead to disastrous consequences, for example economic losses, hunger and death. Thus, ensuring food production and a sufficient water supply for crop production (or agriculture) is a highly relevant topic for populations all over the world. Soil moisture is the main driver for providing water resources for agriculture and vegetation, but in semi-arid or arid regions it is becoming more important to derive water from surface water bodies or from groundwater storage. These surface and subsurface water storages are either monitored with in-situ data, which have a long record history, or simulated in models, which provide global simulations with a good spatial resolution (~50 km). However, the in-situ data are not spatially explicit and very sparse, and thus cannot cover each climate regime, while the models encounter problems with uncertainty in the forcing data and model assumptions.
In the last decades, the use of remotely sensed data has enabled observation of water from space. GRACE (Gravity Recovery And Climate Experiment) and its successor GRACE-FO were and are so far the only satellite missions that observe the sum of surface and subsurface water globally. However, GRACE(-FO) has a coarse spatial resolution (~300 km) and only senses the vertically aggregated sum, the so-called total water storage anomalies (TWSA); hence a further separation into the different water compartments is needed. Therefore, we integrate GRACE into a hydrological model via assimilation to improve the model's realism while spatially downscaling and vertically disaggregating GRACE.
In this study, we assess signatures and sub-signals found in models using observation-based storages (via assimilation) and vegetation measures (via remote sensing) derived from MODIS (Moderate Resolution Imaging Spectroradiometer). In a case study, we interrogate two main processes (measured at peak times) in South Africa for 2003 to 2016: 1) the precipitation-storage dynamics, i.e. the dynamics of the pathway from precipitation to replenished soil moisture, surface water and groundwater, and 2) the storage-vegetation dynamics, i.e. the pathway from the corresponding storage to vegetation growth (using the Leaf Area Index and actual evapotranspiration).
Generally, we found that the amount of water that refills the storages is often overestimated in the modeling and that the duration of this process is often shorter compared to the observations. For example, we found in the modeling that the annual peak of groundwater generally lags the annual precipitation peak by 3 months, while the observations identify a 4-month lag. For the storage-vegetation dynamics we also notice an overestimation of the amount of water that contributes to vegetation growth, while an over- or underestimation of the duration of this process strongly depends on the considered storage. Our study concluded that the model did not correctly capture the precipitation-storage-vegetation dynamics, and that this could not have been concluded from GRACE TWSA data alone, without data assimilation. Our findings are highly relevant for modelers and can be used to improve model structures in the future.
Agricultural systems are the main consumers of freshwater resources at the global scale, using 60% to 90% of the total available water. While the growing demand for agricultural products and the resulting intensification of their production will increase the dependency on available freshwater resources, this sector will become even more vulnerable because of the intensifying impacts of climate change. Detailed knowledge about soil moisture, a key parameter in the agricultural sector, can help to mitigate these effects. Nevertheless, spatially and temporally high-resolution surface soil moisture data for regional and local monitoring (down to the precision farming level) are still challenging to obtain. By using current as well as future Synthetic Aperture Radar (SAR) satellite missions (e.g. Sentinel-1, ALOS-2, NISAR, ROSE-L), this knowledge gap can be filled. Providing cloud- and weather-independent monitoring of the Earth's surface, SAR observations are suitable for regional and local soil moisture estimation, but with a global extent. While the increasing resolution and total number of SAR acquisitions will contribute to an improvement of the estimation in general, the computational cost as well as the local memory capacity become limiting factors in processing the vast load of data. Here, on-demand cloud-based processing services are one way to overcome this challenge. This is especially interesting as most of the severely affected regions have limited access to computational resources.
Using both VV and VH polarization for vegetation detrending as well as low-pass filtering, we developed an automated workflow for estimating soil moisture from temporally and spatially high-resolution Sentinel-1 time series, based on the alpha approximation approach of Balenzano et al. 2011. The workflow is established within the cloud processing platform Google Earth Engine (GEE), providing a fast and applicable way for on-demand computation of soil moisture for individual time periods and areas of interest around the globe. The algorithm was tested and validated over the Rur catchment, located in the federal state of North Rhine-Westphalia in the west of Germany. With an area of 2,354 km², it comprises a great diversity in agricultural cropping structure as well as topography. A total of 711 individual Sentinel-1A and Sentinel-1B dual-polarized (VV + VH) scenes in Interferometric Wide-Swath Mode (IW) and Ground Range Detected High Resolution (GRDH) format are used for the analysis from January 2018 to December 2020. Using all available orbits (both ascending and descending), a temporal resolution of one to two days could be achieved with a spatial resolution of 200 m. The workflow includes multiple steps: speckle filtering, incidence angle normalization, vegetation detrending and low-pass filtering. The results were validated against eight Cosmic-Ray Neutron Stations (CRNS), which are evenly distributed over the catchment, covering various types of land cover. In total, the method achieves an unbiased RMSE (uRMSE) of 5.84% with an R² of 0.46. Looking at individual months, the highest correlation is achieved in April and October, with R² values ranging between 0.65 and 0.7, while the lowest correlation is observed in July and January, with R² values ranging between 0.15 and 0.2. Looking at individual land uses, the method achieves the best results for pastures, with an uRMSE of 0.42 and an R² value of 0.63.
Estonia is known for its large riverside areas that are seasonally (in spring) flooded. However, extremely warm winters in Estonia during the last five years have also caused large floods during winter. Changes in inundation extent, depth, and duration can alter phenological patterns and animal migration routes and affect forest management, resulting in economic losses. Therefore, the need to assess the inter-annual variability of inundation along riverside areas has become of interest to both public and private sectors.
At the European scale, two flood-monitoring services are provided: The (1) Copernicus Emergency Management Service provides a free-of-charge mapping service in cases of natural disasters, man-made emergencies, and humanitarian crises throughout the world. This service can be triggered by request in the case of an emergency. The (2) Copernicus Land Monitoring Service provides a pan-European high-resolution product, Water and Wetness. This product shows the occurrence of water and wet surfaces over the 2015-2018 period.
However, these services cannot be used for the inter-annual identification of flooded areas. Therefore, an automatic processing scheme of Sentinel-1 data was set up for the mapping of open-water flood (OWF) and flood under vegetation (FUV). The methodology was applied for water mapping from Sentinel-1 (S1) and a flood extent analysis of the three largest floodplains in Estonia in 2019/2020. The extremely mild winter of 2019/2020 resulted in several large floods at floodplains that were detected from S1 imagery, with a maximal OWF extent of up to 5000 ha and a maximal FUV extent of up to 4500 ha. A significant correlation (r2 > 0.6) between OWF extent and the closest gauge data was obtained for inland riverbank floodplains. This enabled us to define the critical water level at which water exceeds the shoreline and flooding starts. However, for a coastal river delta floodplain, a lower correlation (r2 < 0.34) with gauge data was obtained and the exceedance of the river shoreline could not be related to a certain water level. At inland riverbank floodplains, the extent of FUV was three times larger compared to that of OWF. The correlation between the water level and FUV was < 0.51, indicating that the river water level at these test sites can be used as a proxy for forest floods.
The analysis of the extent and frequency of wintertime floods can form the basis for various economic analyses, for example evaluations of revenue in the forest industry affected by mild winters and evaluations of stress to northern boreal alluvial meadows. Relating conventional gauge data to S1 time series contributes to the implementation of flood risk assessment and management directives in Estonia.
Monitoring water levels can help hydrological modeling, predict hydrological responses to climatic and anthropogenic changes, and ultimately contribute to environmental protection and restoration. However, measuring lake water levels is easier said than done. The conventional ground-based gauges are now scarce due to limited accessibility, high cost, the labor needed for continuous maintenance, and required security and oversight of equipment. Although satellite altimetry is a standard tool for water level change detection in lakes worldwide, the newest sensors still have limitations regarding coarse temporal and spatial resolution and re-tracking errors from the backscattered signal from non-water surfaces. Changes in water levels can also be retrieved from Differential Interferometric Synthetic Aperture Radar (DInSAR) by measuring the phase change between two Satellite Radar images, but these changes are relative in space and difficult to unwrap.
Here, we develop a new methodology to estimate absolute water level changes in the only 30 small northern-latitude lakes that are gauged in Sweden. Sweden has more than 100,000 lakes covering 9% of the country's surface area. We aim to evaluate the capability of InSAR in estimating absolute water level changes of lakes at latitudes beyond 55 degrees without the need to unwrap the phase component, as is usually done for InSAR studies over water surfaces. With the constraint of a very short temporal baseline (6 days) between paired Sentinel-1 SAR images, we deal with the phase jump in interferograms resulting from sudden changes in water level, and instead of unwrapping each interferogram, we accumulate the phase change of successive image pairs across nine months in 2019. We chose only pixels inside the lakes' surface area that exhibit a steady, coherent behavior across all interferograms and identified the pixels where the DInSAR and gauged estimates of water level change show high linear correlation coefficients (R2 > 0.8). We found lakes with many pixels showing a high correlation, suggesting the capability of DInSAR to determine the direction of water level change in these lakes. The highest correlation between the accumulated phase change and the gauged water level was observed in a pixel on Lake Båven, southeast Sweden (R2 > 0.97), and the lowest correlation was observed in a pixel on Lake Lillglän in the west of the country (R2 > 0.26). The pixels with a high correlation between the accumulated phase change and the gauged water level were located along the lake shorelines, surrounded by forest and wetland land covers. Surprisingly, features on these shores can still enable the double bounce of the radar signal necessary for the interferometric technique, allowing the retrieval of water level change. The high correlation in these pixels shows that the accumulated phase change from the Sentinel-1 twin satellites can help detect trends of water level change in high-latitude lakes surrounded by marsh-dominated wetlands and forests or other shoreline features.
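For context, the conversion from an accumulated differential phase to a vertical water level change over double-bounce pixels follows the usual InSAR relation; the sketch below is a simplified illustration (wavelength, incidence angle and phase values are placeholders, and sign conventions are glossed over).

```python
import numpy as np

WAVELENGTH = 0.0555            # m, Sentinel-1 C band
INCIDENCE = np.deg2rad(39.0)   # local incidence angle (placeholder)

def dinsar_level_change(phase_change):
    """Vertical water level change implied by a differential interferometric
    phase change over double-bounce pixels (simplified geometry)."""
    return phase_change * WAVELENGTH / (4.0 * np.pi * np.cos(INCIDENCE))

# accumulate the phase of successive 6-day interferograms for one coherent pixel
phase_series = np.array([0.4, -0.2, 0.9, 0.3])            # rad, illustrative
cumulative_dh = np.cumsum(dinsar_level_change(phase_series))
print(cumulative_dh * 100, "cm")                          # accumulated level change
```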
The SAR-mode processing in altimetry as it is currently operated in ground segments does not exploit the full capabilities of the SAR system in terms of spatial resolution. The so-called unfocused SAR altimeter (UFSAR) processing performs the coherent summation of pulses over a limited number of successive pulses (64-pulse bursts of a few milliseconds in length), limiting the along-track resolution to about 300 m. Recently [2], the concept of coherent summation has been extended to the whole illumination time of the surface (typically more than 2 seconds), allowing the along-track resolution to be increased up to the theoretical limit (approximately 0.5 m) and thus improving the SAR-mode capability for imaging reflective surfaces of small size. The benefits of the fully-focused coherent processing have already been demonstrated on various surfaces, to differentiate targets of heterogeneous surfaces (such as sea ice, inland water and coastal zones) and to achieve the maximum effective number of looks available from SAR altimetry on homogeneous surfaces (such as the ocean) [2].
The limitations of FF-SAR in closed-burst mode have already been reported in [2]: the lacunary chronogram creates very harmful artificial side lobes in the along-track dimension. It is extremely challenging to separate the real signal from its replicas when they are superimposed, considering that every reflecting focalization point on the ground creates its own replicas. Both the Sentinel-3 and CryoSat-2 SAR altimetry missions have been designed with a lacunary chronogram; one exception is the quasi-continuous pulse transmission of the Sentinel-6 interleaved mode. Over heterogeneous targets, replica interference creates a pattern of peaks and troughs, with overflow of power outside the water body boundaries and destruction of power inside them. This clearly jeopardizes confidence in the data and their use for large water body detection, such as lead detection, a major goal in SAR altimetry sea-ice applications.
At level 2, the impact of replicas on the estimated geophysical parameters is not yet completely understood. Even at crossing points between Sentinel-3 and Sentinel-6, it may be tricky to compare the results due to non-identical footprints and overflight angles, but also due to altimeter differences apart from the chronogram (such as the sampling frequency, deramping/matched filtering and SNR). A new methodology of comparison has been developed and implemented at CLS, taking Sentinel-6 data and emulating the sparse closed-burst chronogram of Sentinel-3 by removing pulses. Thus, on the same acquisition points, open-burst and closed-burst data can be compared with each other, isolating only the replica effect. More than 700 hydrological targets (including narrow rivers, larger rivers, lakes and dams) have already been processed. First results showed, as expected, global differences in amplitude but, more surprisingly, a higher range variability of 1.5 cm in closed-burst mode compared to open-burst mode.
The next step is replica removal, a very important topic if we expect to exploit the full potential of FF-SAR processing with Sentinel-3 and CryoSat-2 data. We propose a deconvolution technique to recover the open-burst radargram using an optimization method starting from a Wiener-filtered first guess [3] and a model that takes replicas into account. A new model of the multi-scatterer FF-SAR impulse response function, based on the LRM inland-water model approach in [1], has been developed and validated over diverse river data acquisitions. This model supposes a priori knowledge of water presence, which may be relevant for inland water (by exploiting water surface masks) but turns out to be completely irrelevant for sea-ice lead targets that are permanently in motion. To tackle this problem, the replica model is first optimized to determine the position and specularity of water presence that best fit the real data. Once the model is fixed, the deconvolution is validated by comparing reference Sentinel-6 open-burst geophysical parameters with deconvolved degraded Sentinel-6 closed-burst data. Different surfaces captured by Sentinel-6 will be deconvolved, including rivers, lakes, leads and open ocean.
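To make the deconvolution starting point concrete, a classical frequency-domain Wiener filter applied to an along-track profile contaminated by replicas can be sketched as below; this is only the first-guess step analogous to [3], with a toy impulse response, not the CLS optimization that refines it.

```python
import numpy as np

def wiener_first_guess(radargram_col, replica_ir, noise_to_signal=0.05):
    """Wiener-filtered first guess of the open-burst along-track profile,
    given a closed-burst column of the radargram and a model of the
    closed-burst impulse response (with its replicas)."""
    n = len(radargram_col)
    H = np.fft.fft(replica_ir, n)
    Y = np.fft.fft(radargram_col, n)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # Wiener filter
    return np.real(np.fft.ifft(G * Y))

# toy test: a point target observed through a 3-replica impulse response
ir = np.zeros(256); ir[[0, 64, 128]] = [1.0, 0.4, 0.4]
truth = np.zeros(256); truth[40] = 1.0
observed = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(ir)))
recovered = wiener_first_guess(observed, ir)
print(np.argmax(recovered))   # expected near 40, i.e. the true target position
```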
Keywords: FFSAR, closed-burst, replica, deconvolution
References
[1] R. Abileah, A. Scozzari, and S. Vignudelli. Envisat RA-2 Individual Echoes: A Unique Dataset for a Better Understanding of Inland Water Altimetry Potentialities. Remote Sensing, 9(6):605, June 2017.
[2] A. Egido and W. H. F. Smith. Fully Focused SAR Altimetry: Theory and Applications. IEEE Transactions on Geoscience and Remote Sensing, 55(1):392-406, Jan. 2017.
[3] A. Monti-Guarnieri. Adaptive removal of azimuth ambiguities in SAR images. IEEE Transactions on Geoscience and Remote Sensing, 43:625-633, Apr. 2005.
Fluvial and riparian ecosystems have many important ecological, social, and economic functions. Several EO-based tools and products have been developed for their monitoring at a global scale. However, the spatial resolution of these global-level products is often too coarse for monitoring narrow rivers and especially their upper sections. These are the areas where rivers are most dynamic and where frequent and accurate monitoring is particularly pressing. To overcome spatial resolution limitations, we developed a method for river monitoring using fraction maps produced with linear spectral signal unmixing.
We developed and tested the method on the Soča and Sava rivers in Slovenia and the Vjosa river in Albania. We mapped three land cover classes of interest – surface water, vegetation, and gravel. The use of spectral bands in combination with the NDVI, MSAVI2, NDWI, and MNDWI indices produced the best results. We achieved similar accuracies with endmembers selected manually and endmembers selected automatically with the N-FINDR algorithm. The optimal total number of endmembers used for spectral signal mixture analysis was found to be between three and five. A larger number of endmembers led to clustering of spectral signatures and thus redundant information. Tests showed that the inclusion of shade as a separate endmember did not improve fraction map accuracy. Furthermore, we found that endmembers selected manually or automatically on one satellite image can be successfully transferred to analyse another image acquired in a comparable geographic region and at a similar phenophase.
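A compact way to obtain per-pixel fractions from a set of endmembers is constrained linear unmixing; the sketch below uses non-negative least squares with an appended sum-to-one row (a common trick), with toy spectra for water, vegetation and gravel rather than the endmembers derived in this study.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=1e3):
    """Fraction values for one pixel by constrained linear unmixing:
    non-negative fractions that approximately sum to one (the sum constraint
    is imposed by an appended, heavily weighted row).
    endmembers: (n_bands, n_classes), e.g. water, vegetation, gravel spectra."""
    n_classes = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones((1, n_classes))])
    b = np.concatenate([pixel, [weight]])
    fractions, _ = nnls(A, b)
    return fractions

# toy endmembers (3 bands x 3 classes: water, vegetation, gravel)
E = np.array([[0.02, 0.04, 0.20],
              [0.03, 0.30, 0.25],
              [0.01, 0.15, 0.30]])
mixed = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixed pixel
print(unmix(mixed, E))                                  # ~[0.6, 0.3, 0.1]
```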
Results of the soft classification were compared to hard classification using the Spectral Angle Mapper with the same endmembers. Fraction maps were more accurate than maps based on hard classification both for Sentinel-2 and Landsat images. Water presence detected on the fraction maps was correlated with in situ measured water level and river discharge with Pearson’s r > 0.6 (p < 0.0001). We examined the ability of fraction maps to detect changes in river morphology. By looking at three different timestamps (13 October 2017, 3 July 2019, 5 September 2020), the results showed that fraction map differencing could distinguish changes in gravel deposition down to 400 m2 in extent. We found that change detection accuracy was best on pixel level when changes amounted to at least 30%. Finally, we tested the possibility of detecting river morphology changes from a time series of land cover presence based on fraction maps. The extents of water and gravel can vary following changes in water level. However, we found that a decrease of gravel bar size within two standard deviations of the mean indicated regular variations while a larger decrease pointed to gravel bar removal.
The developed method can be used for monitoring fluvial and riparian environments in highly heterogeneous areas. The main limitations of the method are associated with cloud obstruction and terrain shadow that are known problems of optical images. An interesting line of future investigations is to test the possible contribution of using SAR data for fluvial morphology monitoring.
Title: Quality Flag and Uncertainties over Inland Waters
Inland waters are essential for environmental, societal and economic services such as transport, drinking water, agriculture or power generation. But inland waters are also one of the resources most affected by climate change and human population growth.
Altimetry, which has been used since 1992 for oceanography, has also proven to be a useful tool to estimate inland water surfaces such as rivers and lakes, which are considered Essential Climate Variables (ECVs). However, the heterogeneity of target sizes, surface roughness, etc., and the environment surrounding the water targets make the interpretation of the measurements more complex. In addition, the availability of a measurement must be complemented by the confidence that it can be attributed to the estimation of the water surface height, and providing the uncertainty associated with this measurement will be useful for assimilation and downstream products.
The aim of this presentation is, first, to describe the use of a waveform classification method, based on neural network algorithms, on level 2 data in order to identify reliable measurements on water body targets. This classification can be used as a metric for data quality and is therefore incorporated in the data processing to define a quality flag in the inland product. The quality flag is being implemented in two ESA projects reprocessing data from several missions: FDR4ALT with data from the ENVISAT, ERS-2 and ERS-1 missions, and CryoTempo with data from CryoSat-2. Secondly, it presents the methodology for estimating the uncertainty of the estimated water level.
The seasonal snow cover in mountains is crucial for ecosystems and human activities. Developing methods to map snow depth at high resolution (< 10 m) is an active field of snow studies, as snow depth is a key variable for water resource and avalanche risk assessment. Most methods rely on close-range remote sensing, combining lidar or photogrammetry with an airplane or a drone. However, drone acquisitions are limited to small areas (< 10 km²) and airborne campaigns are logistically difficult to set up in many mountains of the world. Satellite photogrammetry is an innovative method for monitoring the seasonal snowpack in mountains and could help address the challenge of estimating the distribution of snow anywhere in the world. Accurate snow depth maps at high spatial resolution (~ 3 m) are calculated by differencing digital elevation models with and without snow derived from satellite stereoscopic images.
Here we present a collection of snow depth maps calculated from 50 cm Pléiades stereoscopic images in the central Andes, the Alps, the Pyrenees, the Sierra Nevada (USA) and Svalbard. The comparison with a reference snow depth map measured with airborne lidar in the Sierra Nevada provides a robust estimation of the Pléiades snow depth error. At the 3 m pixel scale, the standard error is about 0.7 m. The error decreases to 0.3 m when the snow depth maps are averaged over areas greater than 10³ m². Specific challenges arose in some sites due to the lack of snow-free terrain or due to artefacts inherent to satellite images. However, Pléiades snow depth maps are sufficiently accurate to allow the observation of snow redistribution patterns due to wind transport and avalanches, or the precise determination of the snow volume in a 100 km² catchment. Assimilated in a distributed snowpack model, Pléiades snow depth maps improve the modelled spatial variability of the snow depth and compensate for missing processes in the model or biases in the meteorological forcings. The available collection of Pléiades snow depth maps provides the opportunity to characterize, with a consistent method, the snow cover in an unprecedented variety of sites, such as the Arctic, alpine mountains and subtropical regions.
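A minimal sketch of the DEM-differencing idea, assuming two co-registered snow-on and snow-off DEMs on the same 3 m grid and a snow mask; the outlier thresholds and the bias-removal helper are illustrative assumptions rather than the authors' exact workflow.

```python
import numpy as np

def snow_depth_map(dem_snow_on, dem_snow_off, snow_mask, min_depth=-2.0, max_depth=20.0):
    """Snow depth (m) as the difference of two co-registered DEMs.

    dem_snow_on, dem_snow_off : 2D elevation arrays on the same grid (e.g. 3 m Pléiades DEMs)
    snow_mask                 : boolean array, True where snow is present
    min_depth, max_depth      : plausible depth range; values outside it (artefacts,
                                clouds, correlation failures) are set to NaN
    """
    hs = dem_snow_on - dem_snow_off
    valid = snow_mask & (hs >= min_depth) & (hs <= max_depth)
    return np.where(valid, hs, np.nan)

def vertical_bias(dem_snow_on, dem_snow_off, snow_free_mask):
    """Median elevation difference over snow-free terrain, usable to remove a
    co-registration bias before differencing (an assumed, simplified correction)."""
    diff = np.where(snow_free_mask, dem_snow_on - dem_snow_off, np.nan)
    return np.nanmedian(diff)
```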
Flooding is the most frequent natural hazard on Earth and affects an increasing number of people. Major events are responsible for huge loss of life and substantial destruction of infrastructure. Detailed information about the location, time, or extent of present and historic floods helps in improving emergency response or planning of prevention actions. For this purpose, the new Global Flood Monitoring (GFM, https://gfm.portal.geoville.com) service provides satellite-based flood mapping information derived from Sentinel-1 Synthetic Aperture Radar (SAR) data in near-real time (NRT) on a global scale to the user community (Salamon et al., 2021). This service is part of the Copernicus Emergency Management Service (CEMS), and is available in its beta version through the Global Flood Awareness System (GloFAS, https://www.globalfloods.eu/). In order to improve the overall reliability of the flood mapping, three independent Sentinel-1-based algorithms are combined within one ensemble product.
As basis for all activities within the GFM service, a global Sentinel-1 datacube has been created (Wagner et al., 2021). In the initial phase, more than 1.6 million Sentinel-1 scenes from 2015–2020 were preprocessed using the new 30 m Copernicus DEM for terrain correction. The observations were resampled to a spatial gridding of 20 m and are provided in a tiled and stacked image structure based on the Equi7Grid (https://github.com/TUW-GEO/Equi7Grid). This setup allows for an efficient extraction of spatiotemporal subsets. The Sentinel-1 datacube is updated in NRT to enable continuous flood monitoring.
One of the algorithms going into the ensemble product is the algorithm developed by the Technische Universität Wien (TU Wien, https://www.geo.tuwien.ac.at/). The algorithm performs a pixel-wise decision between flooded and non-flooded conditions. The historic Sentinel-1 measurements of the datacube and derived temporal parameters allow the backscatter signature of both states to be described statistically. Water backscatter differs significantly from that of non-flooded land due to the specular reflection of the impinging radiation and the side-looking geometry of the SAR system. Contrary to water surfaces, the backscatter signals over non-flooded land are much more heterogeneous and mostly show strong seasonal variations. This seasonality is caused by variable factors within the signal such as soil moisture or vegetation conditions. To parametrise the backscatter under non-flooded conditions, and to account for the backscatter's seasonality, a harmonic regression model was found to be best suited (in particular for NRT operations). The model's parameters were computed for each pixel of the Sentinel-1 datacube by a least-squares estimation which made use of measurements from 2019-2020. Based on the resulting global parameter database and the underlying model, one is able to estimate the non-flooded backscatter for every day of the year. Using Bayes inference, the incoming Sentinel-1 scene is compared pixel-wise to the modelled backscatter signatures of flooded and non-flooded conditions, and the more probable condition is then chosen.
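The following sketch illustrates the two ingredients described above: a per-pixel harmonic regression of the seasonal backscatter and a Bayesian decision between flooded and non-flooded states. The water-backscatter distribution, the prior and the number of harmonics are hypothetical placeholders, not the operational GFM/TU Wien settings.

```python
import numpy as np
from scipy.stats import norm

def fit_harmonic(doy, sigma0_db, n_harmonics=2):
    """Least-squares fit of a harmonic (seasonal) model to a pixel's backscatter history.
    doy: day of year of each measurement, sigma0_db: backscatter in dB."""
    doy = np.asarray(doy, float)
    sigma0_db = np.asarray(sigma0_db, float)
    t = 2.0 * np.pi * doy / 365.25
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, sigma0_db, rcond=None)
    resid_std = np.std(sigma0_db - A @ coeffs)  # spread of the non-flooded state
    return coeffs, resid_std

def expected_land_sigma0(doy, coeffs):
    """Modelled non-flooded backscatter for a given day of year."""
    t = 2.0 * np.pi * doy / 365.25
    terms = [1.0]
    for k in range(1, (len(coeffs) - 1) // 2 + 1):
        terms += [np.cos(k * t), np.sin(k * t)]
    return float(np.dot(coeffs, terms))

def flood_probability(sigma0_obs, mu_land, sd_land,
                      mu_water=-20.0, sd_water=2.0, prior_flood=0.5):
    """Pixel-wise Bayes decision between flooded and non-flooded conditions.
    The water distribution parameters and the prior are hypothetical placeholders."""
    p_water = norm.pdf(sigma0_obs, mu_water, sd_water) * prior_flood
    p_land = norm.pdf(sigma0_obs, mu_land, sd_land) * (1.0 - prior_flood)
    return p_water / (p_water + p_land)
```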
When working with SAR data, water look-alikes such as deserts, radar shadows, or tarmac can easily be confused with inundated areas. Additionally, one is limited to areas where the Sentinel-1 signal is able to reach the ground undisturbed in order to distinguish between flooded and non-flooded situations. Consequently, densely vegetated or built-up areas need to be excluded, as well as areas which permanently feature low backscatter. Therefore, we utilise exclusion layers that are derived from temporal parameters of the Sentinel-1 datacube. By masking the flood mapping results with the exclusion layers, potential uncertainties are avoided and the algorithm's robustness is increased.
In this contribution, we present the TU Wien Sentinel-1 flood mapping algorithm, which exploits the historic measurements of a dedicated Sentinel-1 2015-2020 datacube, and which is already integrated within the GFM ensemble approach. We evaluate the globally operated algorithm in representative sites of a set of world regions, highlighting its strengths and caveats. Additionally, we focus on the suitability of the Sentinel-1 signal history to exclude areas that show low sensitivity for flood mapping or could potentially be classified wrongly as flooded.
Salamon et al. (2021) The New, Systematic Global Flood Monitoring Product of the Copernicus Emergency Management Service. In 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE, pp. 1053-1056.
Wagner, W., et al. (2021) A Sentinel-1 Backscatter Datacube for Global Land Monitoring Applications. Remote Sensing, 13(22), 4622.
Earth Observations (EO) have become popular in hydrology because they provide information in locations where direct measurements are either unavailable or prohibitively expensive to make. Recent scientific advances have enabled the assimilation of EO data into hydrological models to improve the estimation of initial states and fluxes, which can further lead to improved forecasting of different variables. When assimilated, the data exert additional controls on the quality of the forecasts; it is hence important to apportion the effects according to model forcings and the assimilated data. Here, we investigate the hydrological response and seasonal predictions over the snowmelt-driven Umeälven catchment in northern Sweden. The HYPE hydrological model is driven by two meteorological forcing datasets: (i) a downscaled GCM product based on the bias-adjusted ECMWF SEAS5 seasonal forecasts, and (ii) historical meteorological data based on the Ensemble Streamflow Prediction (ESP) technique. Six datasets are assimilated, consisting of four EO products (fractional snow cover, snow water equivalent, and the actual and potential evapotranspiration) and two in-situ measurements (discharge and reservoir inflow). We finally assess the impacts of the meteorological forcing data and the assimilated data on the quality of streamflow and reservoir inflow seasonal forecasting skill for the period 2001-2015. The results show that all assimilations generally improve the skill, but the improvements vary depending on the season and the assimilated data. The lead times until which the data assimilation influences the forecast quality also differ between datasets and seasons; as an example, the impact from assimilating snow water equivalent persists for more than 20 weeks during spring. We finally show that the assimilated datasets exert more control on the forecasting skill than the meteorological forcing data, highlighting the importance of initial hydrological conditions for this snow-dominated river system.
In the last couple of decades, active remote sensing technologies, such as radar- or LiDAR-based sensors, have become an essential source of information for the monitoring of inland water body levels. This is due to their validated high accuracies [1] and their ability to fill in for the ever-decreasing number of water-level gauge stations reported worldwide [2,3].
In this study, we are interested in evaluating the accuracy of, and correcting, water level estimates from the recently launched Global Ecosystem Dynamics Investigation (GEDI) full waveform (FW) LiDAR sensor on board the International Space Station (ISS). GEDI, which became operational in 2019, is equipped with three 1064 nm lasers with a pulse repetition frequency (PRF) of 242 Hz. The power of one of the lasers is split in two, while the remaining two operate at full power. The resulting beams are dithered by beam dithering units (BDUs) that rapidly deflect the light by 1.5 mrad in order to produce eight tracks of data. The acquired footprints along the eight tracks are separated by 600 m across track and 60 m along track, with a footprint diameter of 25 m.
Since the launch of GEDI, a few studies have assessed its accuracy for the estimation of inland water levels [4–6]. The first study, conducted by Fayad et al. [4], used the first two months of GEDI acquisitions (mid-April to mid-June 2019) to assess the accuracy of GEDI altimetry over eight lakes in Switzerland. For these two months, they reported a mean difference between GEDI and in situ gauge water elevations (bias) ranging from -13.8 cm (underestimation) to +9.8 cm (overestimation), with a standard deviation (SD) of the bias ranging from 14.5 to 31.6 cm. The study conducted by Xiang et al. [6] over the five Great Lakes of North America (Superior, Michigan, Huron, Erie and Ontario) using five months of GEDI acquisitions (April to August 2019) found a bias ranging from -32 cm (underestimation) to 11 cm (overestimation) with an SD that ranged from 15 to 34 cm. Finally, the study of Frappart et al. [5], which assessed the accuracy of GEDI data over ten Swiss lakes using acquisitions spread over seven months (April to October 2019), found a bias that ranged from -15 cm (underestimation) to +21 cm (overestimation) with an SD ranging from 10 cm to 30 cm.
The factors influencing the physical shape of the waveform, and therefore the accuracy of LiDAR altimetric capabilities, can be grouped into three categories: (1) instrumental factors (e.g. viewing angle, signal-to-noise ratio), (2) water surface variation factors (e.g. wave height and period, wave type), and (3) atmospheric factors (e.g. cloud presence and cloud composition). For example, the viewing angle at acquisition time was demonstrated to increase elevation errors for ICESat-1 GLAS when it deviates from nadir, due to precision attitude determination [7]. Water specular reflection is another potential source of errors due to the saturation of the detector [8]. Finally, clouds and their composition are major factors that affect the quality of LiDAR acquisitions [9,10]. Indeed, while opaque clouds attenuate the LiDAR signal such that the receiver only captures noise, less opaque clouds allow the LiDAR to make a full round trip but can increase the photon path length due to forward scattering (atmospheric path delay), thus resulting in biases in elevation measurements [11]. Moreover, GEDI's return signal strength varies greatly between cloud-free shots and clouded acquisitions [9].
The objective of this study is therefore twofold. First, the performance of GEDI's altimetric capabilities was assessed using filtered GEDI waveforms (i.e. after removal of noisy acquisitions) across the five Great Lakes (Lakes Erie, Huron, Ontario, Michigan, and Superior). Next, a random forest regression model was trained to estimate the difference between GEDI acquisitions and in situ water level records, using the instrumental, water surface variation and atmospheric variables as predictors. The output of this model, namely the estimated difference between each GEDI acquisition and its corresponding in situ reference, was subtracted from each GEDI acquisition's elevation in order to produce corrected elevation estimates.
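A minimal sketch of this correction step using scikit-learn; the feature set, hyperparameters and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_error_model(X, y):
    """X: (n_shots, n_features) matrix of instrumental, water-surface and atmospheric
    predictors; y: GEDI elevation minus in situ water level for each shot.
    Hyperparameters are illustrative, not tuned values."""
    model = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, random_state=0)
    model.fit(X, y)
    return model

def correct_elevations(model, X_new, gedi_elevation):
    """Subtract the predicted GEDI-minus-gauge difference from the raw GEDI elevation."""
    return gedi_elevation - model.predict(X_new)
```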
Results showed that uncorrected GEDI estimates have on average a bias of 0.3 m (ranging between 0.25 and 0.42 m) and a root mean squared error (RMSE) of 0.58 m (ranging between 0.54 and 0.67 m). After the application of our model, the bias was mostly eliminated (ranging between -0.07 and 0.01 m), and the average RMSE decreased to 0.17 m (ranging between 0.14 and 0.21 m).
References
1. Birkett, C.; Reynolds, C.; Beckley, B.; Doorn, B. From Research to Operations: The USDA Global Reservoir and Lake Monitor. In Coastal Altimetry; Vignudelli, S., Kostianoy, A.G., Cipollini, P., Benveniste, J., Eds.; Springer Berlin Heidelberg: Berlin, Heidelberg, 2011; pp. 19–50 ISBN 978-3-642-12795-3.
2. Shiklomanov, A.I.; Lammers, R.B.; Vörösmarty, C.J. Widespread Decline in Hydrological Monitoring Threatens Pan-Arctic Research. Eos Trans. AGU 2002, 83, 13, doi:10.1029/2002EO000007.
3. Hannah, D.M.; Demuth, S.; van Lanen, H.A.J.; Looser, U.; Prudhomme, C.; Rees, G.; Stahl, K.; Tallaksen, L.M. Large-Scale River Flow Archives: Importance, Current Status and Future Needs. Hydrol. Process. 2011, 25, 1191–1200, doi:10.1002/hyp.7794.
4. Fayad, I.; Baghdadi, N.; Bailly, J.S.; Frappart, F.; Zribi, M. Analysis of GEDI Elevation Data Accuracy for Inland Waterbodies Altimetry. Remote Sensing 2020, 12, 2714, doi:10.3390/rs12172714.
5. Frappart, F.; Blarel, F.; Fayad, I.; Bergé-Nguyen, M.; Crétaux, J.-F.; Shu, S.; Schregenberger, J.; Baghdadi, N. Evaluation of the Performances of Radar and Lidar Altimetry Missions for Water Level Retrievals in Mountainous Environment: The Case of the Swiss Lakes. Remote Sensing 2021, 13, 2196, doi:10.3390/rs13112196.
6. Xiang, J.; Li, H.; Zhao, J.; Cai, X.; Li, P. Inland Water Level Measurement from Spaceborne Laser Altimetry: Validation and Comparison of Three Missions over the Great Lakes and Lower Mississippi River. Journal of Hydrology 2021, 597, 126312, doi:10.1016/j.jhydrol.2021.126312.
7. Urban, T.J.; Schutz, B.E.; Neuenschwander, A.L. A Survey of ICESat Coastal Altimetry Applications: Continental Coast, Open Ocean Island, and Inland River. Terrestrial Atmospheric and Oceanic Sciences 2008, 19, 1–19.
8. Lehner, B.; Döll, P. Development and Validation of a Global Database of Lakes, Reservoirs and Wetlands. Journal of hydrology 2004, 296, 1–22.
9. Fayad, I.; Baghdadi, N.; Riedi, J. Quality Assessment of Acquired GEDI Waveforms: Case Study over France, Tunisia and French Guiana. Remote Sensing 2021, 13, 3144, doi:10.3390/rs13163144.
10. Shu, S.; Liu, H.; Frappart, F.; Kang, E.L.; Yang, B.; Xu, M.; Huang, Y.; Wu, B.; Yu, B.; Wang, S.; et al. Improving Satellite Waveform Altimetry Measurements With a Probabilistic Relaxation Algorithm. IEEE Trans. Geosci. Remote Sensing 2021, 59, 4733–4748, doi:10.1109/TGRS.2020.3010184.
11. Yang, Y.; Marshak, A.; Palm, S.P.; Varnai, T.; Wiscombe, W.J. Cloud Impact on Surface Altimetry From a Spaceborne 532-nm Micropulse Photon-Counting Lidar: System Modeling for Cloudy and Clear Atmospheres. IEEE Trans. Geosci. Remote Sensing 2011, 49, 4910–4919, doi:10.1109/TGRS.2011.2153860.
This study developed a method to derive field-specific SM information (as opposed to the existing large-footprint products) in near-real time by leveraging synergies of hydrological models and Earth observation (EO) data, both from SAR and optical sensors. The two components are further complemented by EO near-real time information on meteorological fields for the drivers of precipitation and evapotranspiration. While the strength of the soil hydrological models lies in a physically based description of the rain infiltration and percolation processes, the satellite-based data permit deriving vegetation canopy properties at the field scale, and obtaining forcing variables such as precipitation and potential evapotranspiration to feed the models.
For several fields in the COSMOS UK soil moisture monitoring network, we retrieved time series of Sentinel-2 NDVI and Sentinel-1 backscattering values. We used the Hydrus-1D modelling tool for the simulation of surface and in-depth SM at the study fields at daily time steps during periods of low vegetation (NDVI < 0.25). The model’s upper boundary conditions were given by the time series of satellite-based estimates of precipitation and evapotranspiration. The lower boundary condition was set as free drainage, assuming that the water table is deeper than the root zone and the soil is well drained.
For C-band SAR and for the Sentinel-1 range of incidence angles, the literature reports an approximately linear relationship between the average VV backscattering coefficient of uniform, bare-soil crop fields and the surface soil moisture. For individual fields during the bare-soil stage, it can be assumed that the main cause of changes in Sentinel-1 VV backscattering is surface moisture, which varies more rapidly than the surface roughness. The fields' soil hydraulic conductivity and other infiltration descriptors were therefore obtained by fitting the temporal trends of the modelled surface moisture to the temporal trends observed in the Sentinel-1 VV backscattering during low vegetation periods.
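The calibration idea can be sketched as follows; run_model stands in for an external Hydrus-1D simulation returning modelled surface moisture for a candidate hydraulic conductivity, and maximising the Pearson correlation with the VV backscatter time series is an assumed, simplified objective rather than the exact procedure used in the study.

```python
import numpy as np
from scipy.stats import pearsonr

def trend_agreement(sigma0_vv_db, sm_modelled):
    """Temporal agreement between Sentinel-1 VV backscatter (dB) and modelled surface
    soil moisture over bare-soil dates, used here as a simple calibration objective."""
    r, _ = pearsonr(np.asarray(sigma0_vv_db, float), np.asarray(sm_modelled, float))
    return r

def calibrate_ks(candidate_ks, run_model, sigma0_vv_db):
    """run_model(ks) is a placeholder for an external Hydrus-1D run returning the
    modelled surface moisture time series for saturated hydraulic conductivity ks."""
    scores = {ks: trend_agreement(sigma0_vv_db, run_model(ks)) for ks in candidate_ks}
    return max(scores, key=scores.get)  # ks with the best backscatter agreement
```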
The soil moisture simulated in this way was compared to the moisture measured at the COSMOS fields at different depths. Our method achieved an excellent accuracy, only drifting away from the measured values at the end of the cropping cycle, after harvesting.
This work was carried out in collaboration with Mantle Labs Ltd. and received funding from the UK Research and Innovation SPace Research & Innovation Network for Technology (SPRINT) programme. By deriving valuable soil moisture information at the field level, Mantle Labs intends to offer enhanced drought-related insurance products which can be made available to smallholder farmers. This index insurance will protect farmers against crop loss occurring due to extreme weather events.
One of the less understood feedbacks is the role of burrowing animals in soil hydrology. Burrowing animals have been shown to increase soil macroporosity and affect vegetation distribution, both of which have huge impacts on infiltration, preferential flow, surface runoff, water storage and field capacity. However, the specific role of burrowing animals in these variables is to date poorly understood, and the presence of burrowing animals has largely not been included in erosion models. A suitable approach that enables studying their impacts at the catchment scale and comparing them across several climate zones is missing, but is needed to fully understand the feedbacks between the pedosphere and biosphere.
To close this research gap, we combined in-situ measurements of soil properties and burrow distribution, high resolution remote sensing data and machine-learning methods with numerical modelling.
For this, we first conducted field surveys on the presence and absence of animal burrows along predefined tracks along the hillside. We extracted 160 soil samples along the catena of the study hillsides, as well as 316 soil samples from animal burrow areas and control areas without an animal burrow. We analysed them for several physical and chemical properties needed for model parametrisation and also estimated the difference between samples extracted from burrow areas and control areas. We studied the daily surface processes at the burrow scale and measured the volume of sediment excavated by the animals and the sediment redistribution processes within the burrow area during rainfall events using laser scanners over a period of 7 months.
Then, we combined the in-situ measured soil properties and the burrow distribution with remote sensing and machine learning and upscaled the soil properties and the presence of animal burrows to each catchment at a resolution of 0.5 m. We conducted a land cover classification to estimate the vegetation cover and combined LiDAR data with the digital terrain model to estimate the vegetation height.
We implemented the upscaled soil properties, burrow locations and vegetation parameters into the Morgan-Morgan-Finney model and parametrised one model per catchment. For this, we adjusted the input parameters at the burrow locations according to the measured soil properties, vegetation cover and height, estimated microtopography changes, burrowing behaviour, and sediment excavation and redistribution within the burrow area. We validated the model using sediment traps installed in situ.
To estimate the impacts of burrowing animals, we ran the model with and without animal burrows included. We estimated the daily and yearly impacts of the presence of the burrows on soil erosion, infiltration, preferential flow, surface runoff, water storage and field capacity.
We present a parametrised model, which includes the presence of animal burrows in its calculation and the modelled impacts of burrowing animals on soil erosion, infiltration, preferential flow, surface runoff, water storage and field capacity on the catchment scale at a 0.5 m resolution. We compare the short-term and long-term impacts on the soil hydrological properties at the burrow and catchment scale along the climate gradient.
The numerical model achieved an accuracy of R^2 = 0.70. The presence of burrows had a positive impact on sediment erosion, infiltration and water storage, and a negative impact on surface runoff and field capacity. At the daily and burrow scale, these effects were most pronounced in the semi-arid and Mediterranean climate zones. In the semi-arid climate zone, the burrows heavily affected the already sparse vegetation, which in turn affected the surface infiltration and runoff. In the Mediterranean climate zone, the burrow size and entrance diameter in particular had an impact on the preferential flow. At the catchment and yearly scale, the effects were most pronounced in the humid zone. Although the density of burrows was low there, the regularly occurring rainfall events meant that the burrowing animals cumulatively contributed the most to all hydrological processes. In the arid zone, the impact of burrowing animals was detectable during sporadically occurring heavy rains.
Our study thus shows the potential of including burrowing animals in numerical models, as well as the importance of doing so, as our results show strong impacts of the presence of the burrows on hydrological processes in all climate zones at various temporal and spatial scales.
Inland water monitoring is crucial to estimate the volume of flow in the channel and to quantify the water resources available to supply human needs. This has an essential role for both society and the environment and, owing to the numerous issues related to ground hydro-monitoring networks, it represents a political and economic challenge. A valuable alternative for deriving surface water information on a global scale involves satellite Earth observations, and over the last decades satellite altimetry has proven to be a well-established method for providing water level measurements.
In the context of the ESA-funded FDR4ALT (Fundamental Data Records for Altimetry) project, innovative Earth system data records called Fundamental Data Records are used to produce the Inland Water Thematic Data Record, based on the exploitation of measurements acquired by the altimeters onboard ERS-1, ERS-2 and ENVISAT.
In this work, we present the first results of the project, showing the analysis of the different retrackers (Ice1, Ice3, MLE4, TFMRA, Adaptive) on different water bodies, such as rivers and lakes of different sizes and environments, during different periods of the year. A Round Robin analysis is carried out to evaluate the performance of each retracker, with the final goal of identifying the retracker best able to describe inland water flow and to be implemented at the global level. The performance is evaluated against reliable ground-based measurements and against datasets from other altimetry sources freely available on the web (Theia Hydroweb, Dahiti, HydroSat).
Rivers play an important role in regulating and distributing inland water resources within the hydrological cycle of the Earth, and are an important factor for the steady development of regional economies and for climate change. The river width, water level and flow velocity are important parameters to characterize changes in river discharge. With the rapid development of remote sensing technology and hydrological models, the width, velocity and slope of rivers can be effectively estimated. However, high-precision monitoring of river water levels is still not effective, especially for small and medium-sized river basins, due to low spatial and temporal resolution and a lack of measurement precision. We develop a new retracker to process inland water altimetry waveforms, called AMPDR-PF (Automatic Multiscale-based Peak Detection Retracker using Physically-based model Fitting). We compare the water level estimated by AMPDR-PF with water levels from official altimeter products on the River Rhine. We finally use it to estimate river discharge in the Rhine river.
A mix of quantitative and qualitative methods (AMPDR-PF) is used to retrack the inland water altimetry waveforms in order to improve the accuracy of the river level at different spatial scales. The point of departure is to combine the advantages of the AMPDR and SAMOSA+ methods. Moreover, the new method allows for sensitivity analyses with different altimeter data, such as Sentinel-3A/3B and Sentinel-6, and for accuracy validation, e.g. the standard deviations of overpasses and the root-mean-squared errors (RMSEs) with respect to tide gauges at different spatial scales. Time series of Water Surface Elevation (WSE) from multiple virtual stations are built after correcting for the mean river slope. Additionally, time series of river width and river slope are generated from Sentinel-1 and Sentinel-2 images and DEM data using Google Earth Engine. The river discharge is then estimated by means of a rating curve, evaluated using standard methods and compared with other products.
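A common way to implement the rating-curve step is a power law fitted to coincident water levels and discharge; the functional form, initial guesses and clipping below h0 are assumptions for illustration, not necessarily the exact formulation used here.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Power-law rating curve Q = a * (h - h0)**b (h0: level of zero flow)."""
    return a * np.clip(h - h0, 1e-6, None) ** b

def fit_rating_curve(wse, discharge):
    """Fit the curve to coincident altimetric water levels and gauged discharge."""
    p0 = (1.0, np.min(wse) - 0.5, 1.5)  # rough initial guess
    popt, _ = curve_fit(rating_curve, np.asarray(wse, float),
                        np.asarray(discharge, float), p0=p0, maxfev=10000)
    return popt  # (a, h0, b)

# Discharge for new altimetric levels:
# a, h0, b = fit_rating_curve(wse_train, q_train)
# q_estimated = rating_curve(wse_new, a, h0, b)
```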
The increased availability and accuracy of recent remote sensing data accelerate the development of data products for hydrological modelling. Most hydrological models rely on the accurate representation of the Earth’s terrestrial surface, including all waterways from small mountain streams to great lowland rivers, in order to compute discharge. In light of this, the creation of the HydroSHEDS-X database, which is currently developed in an international collaborative project between the German Aerospace Center (DLR), McGill University, Confluvio Consulting, and the World Wildlife Fund, represents a new source of global digital hydrographic information. HydroSHEDS-X is the second version of the well-established HydroSHEDS database, which is freely available at https://hydrosheds.org. While the first version was derived from the digital elevation model (DEM) of the Shuttle Radar Topography Mission (SRTM), the foundation of HydroSHEDS-X is the elevation data of the TanDEM-X mission (TerraSAR-X add-on for Digital Elevation Measurement), which was created in partnership between the German Aerospace Center (DLR) and Airbus. HydroSHEDS-X benefits from the higher resolution of the underlying TanDEM-X DEM, given its resolution of 0.4 arc-seconds worldwide, including regions with latitudes higher than 60° North, which are not covered by the SRTM DEM. Details of this high-resolution DEM are preserved in the HydroSHEDS-X dataset by applying enhanced pre-processing techniques. This pre-processing of the elevation data comprises DEM infills for invalid and unreliable areas, an automatic coastline delineation with manual quality control, the generation of an open water mask, and the reduction of elevation distortions caused by vegetation and settlements. The pre-processed DEM is further treated at a resolution of 3 arc-seconds to obtain a hydrologically conditioned DEM. Derived from this hydrologically conditioned version of the DEM, the HydroSHEDS-X core products comprise flow direction and flow accumulation maps as gridded datasets. The core products are complemented with secondary information on river networks, lake shorelines, catchment boundaries, and their hydro-environmental attributes in vector format. Finally, the database is completed with associated products. Available in standardized spatial units and at multiple scales starting from a resolution of 3 arc-seconds, HydroSHEDS-X is fully compatible with its original version and thus provides a consistent and easy-to-use database for hydrological applications from local to global scale. The main release of HydroSHEDS-X is scheduled for 2022 under a free license.
River plumes are clearly visible on satellite imagery as sharp fronts (optical and thermal IR), resulting from the different properties of river versus ocean water bodies (e.g. temperature, salinity, sediment concentration). They serve as the link between river and ocean, transporting freshwater, (fine) sediments, nutrients and human waste (Halpern et al., 2008; Dagg et al., 2004; Joordens et al., 2001). Understanding the river plume dynamics will help to understand the transport of these substances. Moreover, it can provide valuable information on the river system.

River plumes are formed by river freshwater entering the ocean, creating buoyant bodies of brackish water overlying saltier sea water. The local buoyancy input of river freshwater interacts with tides, wind and waves. When the freshwater buoyancy is dominant, the system is stratified. This can be detected on sea surface temperature (SST) images during summer as a sharp front, as the top layer of freshwater warms faster than the surrounding sea water (Pietrzak et al., 2011). Conversely, when the mixing processes (tides, wind, waves) are dominant, the system will be well mixed. This results in more gradual changes in temperature and salinity. For tide-dominated systems, freshwater pulses enter the ocean during ebb, resulting in multiple strong fronts on optical images. Wind and tides govern the propagation speed of these fronts, their thickness and the direction in which they move (Rijnsburger et al., 2018). Consequently, fronts and their propagation can give information about the dynamics of the system and which processes are dominant. As water properties like salinity, temperature and sediment concentration determine the water colour, we can detect fronts on optical images.

In this research, we study the potential relationship between fronts and river plume characteristics. First, we are developing the algorithms to detect fronts on satellite images. We hypothesize that the scale of the river plume can be related to the river discharge. We investigate this relationship using discharge data and the detected fronts as a measure for the scale of the river plume. Next, we will investigate methods for coupling fronts detected on satellite images to fronts in a 3D numerical model of a river plume. We hypothesize that model performance can be improved by using the information retrieved from satellite images. Information-sparse river systems can profit in particular, where the satellite images can provide a valuable source of information to improve the modelling and understanding of the river plume dynamics. In this research we make a first step towards this goal, by investigating possibilities to retrieve (quantitative) information from satellite images and methods to couple the information to model output.
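As a sketch of what such a front-detection step could look like, the following gradient-based example flags the strongest gradients in a single-band image (e.g. SST or turbidity); the smoothing scale and percentile threshold are hypothetical choices, not the algorithm under development.

```python
import numpy as np
from scipy import ndimage

def detect_fronts(image, sigma=2.0, percentile=95.0):
    """Flag candidate front pixels as the strongest gradients of a smoothed image.

    image      : 2D array (e.g. SST or turbidity), NaN over land/clouds
    sigma      : Gaussian smoothing scale in pixels (assumed value)
    percentile : gradient-magnitude threshold (assumed value)
    """
    smoothed = ndimage.gaussian_filter(np.asarray(image, float), sigma)
    gy, gx = np.gradient(smoothed)
    grad_mag = np.hypot(gx, gy)
    threshold = np.nanpercentile(grad_mag, percentile)
    return grad_mag > threshold  # boolean mask of candidate front pixels
```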
References
Dagg, M., R. Benner, S. Lohrenz, and D. Lawrence (2004), Transformation of dissolved and particulate materials on continental shelves influenced by large rivers: plume processes, Continental Shelf Research, 24, 833–858.
Halpern, B. S., S. Walbridge, K. A. Selkoe, C. V. Kappel, F. Micheli, C. D’Agrosa, J. F. Bruno, K. S. Casey, C. Ebert, H. E. Fox, R. Fujita, D. Heinemann, H. S. Lenihan, E. M. P. Madin, M. T. Perry, E. R. Selig, M. Spalding, R. Steneck, and R. Watson (2008), A global map of human impact on marine ecosystems, Science, 319, 948–953.
Joordens, J. C. A., A. J. Souza, and A. Visser (2001), The influence of tidal straining and wind on suspended matter and phytoplankton distribution in the Rhine outflow region, Continental Shelf Research, 21, 301–325.
Pietrzak, J. D., de Boer, G. J., & Eleveld, M. A. (2011). Mechanisms controlling the intra-annual mesoscale variability of SST and SPM in the southern North Sea. Continental Shelf Research, 31(6), 594–610. https://doi.org/10.1016/j.csr.2010.12.014
Rijnsburger, S., Flores, R. P., Pietrzak, J. D., Horner-Devine, A. R., & Souza, A. J. (2018). The Influence of Tide and Wind on the Propagation of Fronts in a Shallow River Plume. Journal of Geophysical Research: Oceans, 123(8), 5426–5442. https://doi.org/10.1029/2017JC013422
Rijnsburger, S. (2021). On the dynamics of tidal plume fronts in the Rhine Region of Freshwater Influence. https://doi.org/10.4233/uuid:279260a6-b79e-4334-9040-e130e54b9360
Validation, including the determination of measurement uncertainties, is a key component of a satellite mission. Without adequate validation of the geophysical retrieval methods, processing algorithms and corrections, the computed geophysical parameters derived from satellite measurements cannot be used with confidence and the return on investment for the satellite mission is reduced. In this context, and in anticipation of the operational delivery of dedicated inland water products based on Copernicus Sentinel-3 measurements, the St3TART ESA project (Sentinel-3 Topography mission Assessment through Reference Techniques) aims at preparing a roadmap and providing a preliminary proof of concept for the operational provision of Fiducial Reference Measurements (FRM) in support of the validation activities of the Copernicus Sentinel-3 (S3) radar altimeter over land surfaces of interest (inland water bodies, sea ice and land ice).
In the framework of this project, the activities related to hydrology include a review of existing methodologies and associated ground instrumentation for validating and monitoring the performance and stability of the Sentinel-3 altimeter measurement via FRM. Methodologies and procedures are defined considering the errors and uncertainties coming from the point information of in-situ sensors, satellite measurements and the environment of the validation site. Based on these protocols and procedures, a roadmap is prepared in view of the operational provision of FRM to support the validation activities and foster exploitation of the Sentinel-3 SAR altimeter Land data products, over inland waters.
Field campaigns will then be implemented and carried out as a demonstrator, based on the defined procedures and protocols and on the roadmap.
In this ongoing project, a comprehensive review of altimeter uncertainties over inland water bodies was carried out on the basis of a literature review, leading to the identification of the different sources of error with their associated uncertainty level. A full review of all the sensors that have been used for many years for Cal/Val activities over inland waters has also been performed, combined with an analysis of innovative sensors that can fulfil the needs and potentially be used in the framework of the St3TART project. Cal/Val “super-sites” have been selected as demonstrators of the roadmap for operational FRM provision. We propose here to present the status and first results of these hydrology activities.
The Fully-Focused SAR (FF-SAR) processing, introduced by Egido and Smith (2017), allows obtaining a maximum resolution of 0.5 m in the along-track direction. It provides significant benefits for inland water altimetry investigations, allowing the successful investigation of very small rivers and canals (Kleinherenbrink et al., 2020) that are typically harder to analyse using unfocused Delay-Doppler SAR (DD-SAR) data (about 300 m resolution in the along-track direction).
In its development, two major limitations were associated with the FF-SAR processing: 1) the presence of evenly spaced high sidelobes in the Point Target Response (PTR) due to the closed-loop burst mode implemented in the Sentinel-3 and CryoSat-2 altimeter payloads used for the initial FF-SAR investigations, and 2) the heavy computational burden with respect to the unfocused DD-SAR processing.
The first limitation can be overcome by designing the radar system differently adopting an open-loop transmission scheme as, for instance, the one implemented in the altimeter payload of the Sentinel-6 Michael Freilich mission, launched on 21 November 2020 and operating since 21 June 2021.
The second limitation has been addressed in research works following Egido and Smith (2017), indicating that an improvement in terms of computational burden can be achieved by adopting algorithms in the frequency domain (Guccione et al., 2018).
With the role of FF-SAR for future inland water altimetry well understood, and with the possibility of seeing it implemented with reduced drawbacks during the Sentinel-6 Michael Freilich mission, a collaboration has started between the ESA Altimetry Team, which already hosts the successful SARvatore services portfolio for unfocused SAR and SARIn altimetry, and Aresys.
Aresys has developed a generic FF-SAR prototype processor that is able to process data acquisitions from different instruments and that exploits the frequency-domain Omega-K algorithm (Guccione et al., 2018; Scagliola et al., 2018). The Aresys FF-SAR prototype processor for CryoSat-2 allows users to process, online and on demand, low-level CryoSat FBR products in SAR mode up to FF-SAR Level-1 products with self-customized options. Additionally, a wide set of processing parameters is configurable, allowing, for example, the selection of the along-track resolution or the generation of FF-SAR multilooked waveforms at the desired posting rate.
The collaboration led to the creation of a new service for the processing of CryoSat-2 data in FF-SAR mode. Users will be able to select the following options: 1) range oversampling factor, 2) bandwidth factor (responsible for the along-track resolution value) and 3) multilook posting rate (1 Hz to 500 Hz). Geophysical corrections and L2 estimates from both a threshold peak retracker and an ALES-like subwaveform retracker are part of the output package. In preliminary open ocean analyses, very good results on SSH noise have been obtained with the ALES-like subwaveform retracker.
In this presentation, the Aresys FF-SAR prototype processor is described and the outcome of some preliminary validation activities, performed by a group of altimetry researchers, is reported.
The service, to be soon extended to allow the processing of Sentinel-3 and Sentinel-6 data, will be made available to the altimetry community in early 2022 as part of the Altimetry Virtual Lab, a community space for simplified services access and knowledge-sharing. It will be hosted on EarthConsole (https://earthconsole.eu), a powerful EO data processing platform now also on the ESA Network of Resources (info at altimetry.info@esa.int).
References
Egido A., Smith W. H. F., “Fully Focused SAR Altimetry: Theory and Applications”, IEEE Transactions on Geoscience and Remote Sensing, Volume 55, Issue 1, Jan. 2017, doi: 10.1109/TGRS.2016.2607122.
Kleinherenbrink M., Marc Naeije, Cornelis Slobbe, Alejandro Egido, Walter Smith, The performance of CryoSat-2 fully-focussed SAR for inland water-level estimation, Remote Sensing of Environment, Volume 237, 2020, 111589, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2019.111589.
Guccione P., Scagliola M., and Giudici D., “2D Frequency Domain Fully Focused SAR Processing for High PRF Radar Altimeters”, Remote Sens. 2018, 10, 1943; doi:10.3390/rs10121943.
Scagliola M., Guccione P., “A trade-off analysis of Fully Focused SAR processing algorithms for high PRF altimeters”, 2018 Ocean Surface Topography Science Team (OSTST) Meeting. https://meetings.aviso.altimetry.fr/programs/complete-program.html.
In times of an ever-decreasing amount of in-situ data for hydrology, satellite altimetry has become key to providing global and continuous datasets of water surface height. Indeed, studying the water levels of lakes, reservoirs and rivers at global scale is of prime importance for the hydrology community to assess the Earth’s global resources of fresh water.
Much progress has been made in the altimeters’ capability to acquire quality measurements over inland waters. In particular, the Open-Loop Tracking Command (OLTC) now represents an essential feature of the tracking function. This tracking mode’s efficiency has been proven on past missions and it is now the operational mode for the current Sentinel-3 and Sentinel-6 missions. It has benefited from iterative improvements brought repeatedly to the onboard table contents since 2017.
In 2022, new updates will be performed on the onboard OLTC tables of the Sentinel-3A and Sentinel-3B missions, as well as Sentinel-6A and Jason-3 following their successful Tandem Phase.
The number of hydrological targets used to define the tracking command has now reached an unprecedented level of almost 100,000 targets for each Sentinel-3 satellite and about 30,000 for Sentinel-6A. We expect to define a similar number of targets on the interleaved orbit of Jason-3, previously flown by Jason-2, although it was mostly operated in Closed-Loop Mode.
These major improvements over the last few years have been made possible by the analysis and merging of the most up-to-date digital elevation models (SRTM, MERIT and ALOS/PalSAR) and water bodies databases (HydroLakes, GRaND v1.3, SWBD, GSW, SWORD). In addition, special effort is put into introducing the most recent reservoir databases. This methodology ensures coherency and consistent standards between all nadir altimetry missions and types of hydrological targets.
Finally, additional efforts have been carried out to define a relevant tracking command outside of hydrological areas, in order to keep track of the continental surface and enable other potential land applications, while optimizing the OLTC onboard memory.
The OLTC function of nadir altimeters constitutes a great asset for building a valuable and continuous record of the water surface height of worldwide lakes, rivers, reservoirs, wetlands and even a few continental glaciers.
This work is essential at institutional and scientific levels, to make the most of current altimeters coverage over land and to prepare for the upcoming calibration and validation of the Surface Water and Ocean Topography (SWOT) mission. In this context, we will present an overview of OLTC achievements and perspectives for future altimetry missions.
In a global warming and climate change context, populations all over the world are impacted by an increasing number of hydrological crises (flood events, droughts, ...), mainly related to the lack of knowledge and monitoring of the surrounding water bodies. In Europe, flood risk accounts for 46% of the extreme hazards recorded over the last 5 years, and current events confirm these figures for France and Europe. Although the main rivers are properly monitored, a wide set of small rivers contributing to flood events are not monitored at all. There is a clear lack of river basin monitoring with regard to the rapid increase of extreme events. In France, 20,000 km of regulatory rivers are monitored in real time while 120,000 km would be required. Moreover, hydrological surveys are currently ensured by heterogeneous means from one country to another, and even within a country from one region to another. This results in a high cost for deploying robust, relevant and efficient monitoring of all watercourses at risk. There is therefore a real need for affordable, flexible and innovative solutions for measuring and monitoring hydrological areas in order to address climate change and flood risk within the overall water cycle.
vorteX.io offers an innovative and intelligent service for monitoring hydrological surfaces, using easy-to-install, fixed remote sensing in-situ instruments based on a compact, lightweight altimeter inspired by satellite technology: the micro-stations. It provides, in real time and with high accuracy, hydro-meteorological parameters (water surface height, water surface velocity, images and videos) of the observed watercourses. The combination of these in-situ data with satellite measurements is thus optimal for downstream services related to water resources management and the assessment of flood and drought risks. Thanks to the development of the innovative micro-station and onboard processing using artificial intelligence algorithms, the vorteX.io solution will provide an anytime/anywhere real-time hydro-meteorological database to protect communities from flood risks and to secure goods anywhere at any time. The solution thus aims to cover the whole of Europe through a non-binding turnkey service to ensure the resilience of territories to climate change and guarantee the safety of people and goods.
It is worth mentioning that the vorteX.io solution can also address the need for in-situ measurements (Fiducial Reference Measurements) for Cal/Val activities on inland water bodies. Indeed, the vorteX.io micro-station is able to automatically wake up and perform measurements at the exact moment of the satellite overflight thanks to satellite ephemerides. With this feature, there is no time delay between the in-situ measurements and the satellite overflight. Water heights are provided with respect to the ellipsoid or local geoid. All required geophysical corrections can be applied on the fly. Different hydrological variables are measured (water surface height, the associated uncertainty, water surface speed) and new ones are planned to be added in the near future (water surface temperature, turbidity, ...). The vorteX.io solution has already been used in various CNES and ESA projects, will be implemented in the ESA St3TART project, and will be used for Cal/Val activities of the future SWOT mission on the Garonne river.
Water level time series based on satellite altimetry over rivers are to a large degree limited to solutions at virtual stations, the locations where the ground tracks repeatedly intersect the river. Such a paradigm has prevented the community from exploiting satellites in geodetic orbits, like CryoSat-2 and SARAL/AltiKa, to their full potential. Additionally, we are in a unique situation with an unprecedented number of missions that together give a much more detailed picture of the water level in space and time than can be achieved at virtual stations.
An alternative to the virtual stations approach is the so-called reach-based method, where the water level is reconstructed based on available data within a river reach. A reach-based approach has the advantage that the water levels are seen in a context, as a river elevation profile can be formed. This makes it possible to detect blunders, which is more challenging at virtual stations without prior knowledge, where only a few observations may be available. Additionally, a reach-based approach is not affected by tracks that intersect the river at a small/large angle, which typically will degrade the result at a virtual station.
However, combining noisy water levels at different locations, at different times, and acquired from different missions is indeed challenging. Here we present a new reach-based method to reconstruct the river water level time series. We model the observations, given in 1D space and time, as a Gaussian Markov Random Field. In the model, we account for inter-mission biases and satellite-dependent noise, and we use an increasing spline function to represent a time-independent water level along the reach.
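The following simplified sketch captures two ingredients of such a model, a non-decreasing along-reach profile and per-mission biases, as a bounded least-squares problem; it is only a stand-in for the full space-time Gaussian Markov Random Field (no time component, no satellite-dependent noise weighting), and all names and knot choices are illustrative.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_reach_profile(s, h, mission_ids, s_knots):
    """Non-decreasing water-level profile along the reach plus one bias per mission.

    s           : along-reach coordinate of each observation (increasing upstream,
                  assumed to lie within the knot range)
    h           : observed water levels
    mission_ids : integer mission index per observation (0 = reference mission)
    s_knots     : knot positions along the reach
    """
    s, h = np.asarray(s, float), np.asarray(h, float)
    mission_ids = np.asarray(mission_ids, int)
    n_obs, n_knots = len(s), len(s_knots)
    n_missions = mission_ids.max() + 1

    # Profile parameterised as the level at the first knot plus non-negative increments,
    # which enforces monotonicity (a simple stand-in for an increasing spline).
    A_prof = np.zeros((n_obs, n_knots))
    seg = np.searchsorted(s_knots, s, side="right") - 1
    for i, k in enumerate(seg):
        A_prof[i, : k + 1] = 1.0

    A_bias = np.zeros((n_obs, n_missions))
    A_bias[np.arange(n_obs), mission_ids] = 1.0

    A = np.hstack([A_prof, A_bias[:, 1:]])  # reference mission carries no bias term
    lb = np.r_[-np.inf, np.zeros(n_knots - 1), np.full(n_missions - 1, -np.inf)]
    ub = np.full(A.shape[1], np.inf)
    return lsq_linear(A, h, bounds=(lb, ub)).x  # [level, increments..., biases...]
```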
Here, we demonstrate the new method for different river reaches from, e.g., the Missouri and Mississippi Rivers, and validate the result against in situ data. We show that the model is able to reconstruct the water level for reaches with different hydrological regimes, e.g., the presence of reservoirs.
The research for this work was partly funded by the ESA Permanent Open Call STREAMRIDE and RIDESAT Projects.
Surface water level and river discharge are key observables of the water cycle and among the most sensitive indicators that integrate long-term change within a river basin. As climate change accelerates and intensifies the water cycle, streamflow monitoring allows the understanding of a broad range of science questions focused on hydrology, hydraulics, biogeochemistry, water resources management and flood protection. Streamflow change is a response to anthropogenic processes, such as deforestation, land use change and urbanization, and to natural processes, such as climate modes, climate variability and rainfall. Climate and internal drainage mechanisms affect and control not only river discharge but also lake, reservoir and mountain glacier storage. Enhanced global warming, predicted by coupled models as a consequence of anthropogenically induced greenhouse warming, is expected to accelerate the current glacier decline. Moreover, precipitation causes fluvial floods, when rivers burst their banks as a result of sustained or intense rainfall, and pluvial floods, when heavy precipitation saturates drainage systems.
Changes in the storage and release of water are important for watershed management, including the operation of hydroelectric facilities and flood forecasting, and have direct economic effects. We analyse the observability of extreme water level and discharge events (both low and high) and the long-term variability from space data for the Rhine and Elbe river catchments in central Europe. For the Rhine, we consider in particular the extreme event of July 2021.
Over the last decade, the merging of innovative space observations with in-situ data has provided a denser and more accurate two-dimensional observational field in space and time compared to the previous two decades, and allows better monitoring of the impact of water use and characterization of climate change. The new generation of spaceborne altimeters includes Delay-Doppler, laser and bistatic SAR altimeter techniques. The central hypothesis is that these new observations outperform conventional altimetry (CA) and in-situ measurements by providing (a) surface water levels and discharge of higher accuracy and resolution (both spatial and temporal), (b) new additional parameters (river slope and width) and (c) better sampling for flood event detection and long-term evolution, providing valuable new information for modelling.
In this study, radar and laser satellite altimetry and satellite images provide the space observations; the radar altimeter data are processed by the ESA GPOD service. Time-series of Water Surface Elevation (WSE) are built using two methods. In the first method, time-series are built by collecting the observations of one single virtual station (one-VS), while in the second, time-series are constructed from observations at multiple virtual stations (multi-VS) after correcting for the mean river slope. The accuracy of the time-series built with both methods is 10 cm and 30 cm for Sentinel-3. The second method applied to CryoSat-2 SARIn data produces less accurate time-series and gives a similar accuracy for unfocused and fully focused (FF-SAR) processing. The impact on the results of the chosen centerline and mean river slope is investigated using the SWORD database and the database of the national agency BfG. The river discharge is evaluated.
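The multi-VS construction relies on reducing observations taken at different along-river positions to a common reference using the mean river slope; a minimal sketch follows (the sign conventions, chainage increasing downstream and slope positive downhill, are assumptions of this illustration).

```python
import numpy as np

def reduce_to_reference(wse, chainage, ref_chainage, mean_slope):
    """Reduce water surface elevations observed at different along-river positions
    to a common reference chainage.

    wse          : observed water surface elevations (m)
    chainage     : along-river distance of each observation (km, increasing downstream)
    ref_chainage : chainage of the reference virtual station (km)
    mean_slope   : mean water surface slope (m/km, positive downhill)
    """
    return np.asarray(wse, float) + mean_slope * (np.asarray(chainage, float) - ref_chainage)
```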
In the long run, the long-term variability of the combined altimetric and in-situ water level and river discharge time-series depends on the changing climate and correlates with temperature and precipitation at basin and regional scales.
This study is part of the Collaborative Research Centre funded by the German Research Foundation (DFG): “Regional Climate Change – The Role of Land Use and Water Management”, in sub-project DETECT-B01 “Impact analysis of surface water level and discharge from the new generation altimetry observations”, which addresses two research questions: (1) How can we fully exploit the new missions to derive water level, discharge, and hydrodynamic river processes, and (2) can we separate natural variability from human water use?
Water resource management is critical in many arid environments. The understanding and modelling of hydrological systems shed light on important factors affecting scarce water resources. In this study, the Soil and Water Assessment Tool (SWAT), a semi-distributed hydrological model capable of simulating the water balance in large geographical catchments and sub-basins, was used for runoff estimation in the Okavango Omatako catchment in Namibia. The model was configured for a thirty-one-year period from 1985 to 2015 as per the availability of data for the study area. Subsequently, calibration and validation were carried out for the periods 1990-2003 (calibration) and 2004-2008 (validation) using the Sequential Uncertainty Fitting 2 (SUFI-2) algorithm. For the evaluation of the simulation of the Okavango Omatako catchment, two methods were used: (i) model prediction uncertainty and (ii) model performance indicators. Prediction uncertainty was used to quantify the goodness of fit between observed and simulated results of the model calibration, measured by the P-factor and R-factor. The P-factor reached 0.77 during calibration and 0.68 for validation. The calibration value was adequate, while the validation value was around the recommended value of 0.7. The R-factor attained 1.31 in calibration and 1.82 during validation. The calibration result was within the acceptable range, while the validation was slightly on the upper side. The following indicators were used to evaluate the model performance through calibration and validation results, respectively: Nash-Sutcliffe Efficiency (NSE) with 0.82 and 0.80, coefficient of determination (R^2) with 0.84 and 0.89, percent bias (PBIAS) achieving -20 ≤ PBIAS ≤ -1.1, and the RMSE-observations standard deviation ratio (RSR) with 0.42 and 0.44. All performance indices achieved very good ratings apart from the PBIAS validation, which was rated as satisfactory. It is therefore recommended to use SWAT for semi-arid streamflow simulations, as it demonstrated reasonable results in modelling high and low flows.
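For reference, the performance indicators quoted above can be computed from observed and simulated discharge with the commonly used definitions sketched below; these are generic formulas, not SWAT-CUP output, and the sign convention for PBIAS is an assumption of this sketch.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias (positive values indicate underestimation with this convention)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """Ratio of the RMSE to the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

def r_squared(obs, sim):
    """Coefficient of determination between observed and simulated series."""
    return np.corrcoef(np.asarray(obs, float), np.asarray(sim, float))[0, 1] ** 2
```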
Using satellite altimetry over poorly gauged basins, where in situ data are scarce, can be very beneficial for river monitoring, which is becoming more important due to increasing challenges in managing freshwater resources in a world affected by climate change and economic growth. As the resolution of satellite altimeters increases, the potential for their use grows. When CryoSat-2 was launched by ESA in 2010, the 300 m along-track resolution of the Synthetic Aperture Radar (SAR) data allowed for the study of much narrower rivers compared to what was possible for missions such as Envisat, for which only Low Resolution Mode (LRM) data were available. However, the resolution of SAR altimetry is still not high enough to monitor narrow rivers and rivers in mountainous areas.
In recent years, Fully Focused SAR (FF-SAR) processing has been used to increase the along-track resolution further, all the way down to half the antenna length (Egido et al., 2017). The FF-SAR processing can be applied to all SAR altimeter missions, i.e. CryoSat-2, Sentinel-3 and Sentinel-6/Jason-CS. It has previously been shown that FF-SAR processing can be used to obtain water levels for objects of just a few meters in width (Kleinherenbrink et al., 2020).
Satellite altimetry also includes height measurements from lidar. In 2018, NASA launched the ICESat-2 satellite carrying the Advanced Topographic Laser Altimeter System (ATLAS), which uses a green laser to estimate the distance between the satellite and the point of reflection on the ground. ATLAS detects every single photon that finds its way back to the instrument after reflection. The along-track resolution of ICESat-2 is around 0.7 m but depends on the number of detected photons: over highly specular surfaces the resolution is much higher, while over weakly reflecting surfaces it can be lower.
Here, we compare the respective pros and cons of FF-SAR Sentinel-3 and ICESat-2 altimetry over the Yellow River basin in China and other rivers that are challenging for SAR and LRM altimetry.
We present river levels derived from Sentinel-3 data using the processor provided by the SMAP FFSAR CLS/ESA/CNES project and river levels from the ATL03 and ATL13 ICESat-2 products and compare these with available in situ data.
Operational hydrologists in Czechia often need information on the position of the zero isochion (also known as the snow line) in order to correctly delineate the snow-covered area in geomorphological regions. This information is extremely helpful when determining the amount of water stored in snow during the winter season, which, in turn, helps the Czech hydrologists properly model the expected runoff or better quantify the individual components of the water balance. So far, the spatial distribution of snow cover and snow water equivalent has been estimated through spatial interpolation constrained to yield no positive values below the zero isochion, the altitude of which is calculated once a week using a combination of in-situ data collected by the Czech Hydrometeorological Institute and remotely sensed data from MODIS imagery. The advantage of the MODIS products is their temporal resolution, while their disadvantage is the spatial resolution (a pixel edge of 500 m). This disadvantage often hampers the discrimination between various landscape classes and often prevents the recognition of snow-covered areas in forests. Therefore, the purpose of this contribution is to experimentally employ another satellite product with a finer spatial resolution, namely the Copernicus Sentinel-2 data. The R package 'sen2r' was used to download the Sentinel-2 images and to further process the data into the Normalized Difference Snow Index, based on which the discrimination between snow-covered and snow-free areas was carried out for the territory of Czechia (or at least its selected regions) and for the winter season defined by the months of November through May. The task is to find out whether the Sentinel-2 data can be routinely used instead of the MODIS data when determining the position of the zero isochion in Czechia. A by-product of the analyses might be the substitution of the commercial processing software by selected open-source software.
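For readers unfamiliar with the index, the NDSI used for the snow/no-snow discrimination is a simple band ratio. The sketch below illustrates it in Python with placeholder reflectance values (the study itself works with the R package 'sen2r'); the 0.4 threshold is a commonly used default, not necessarily the one adopted here.

```python
# Illustrative sketch only: Normalized Difference Snow Index from Sentinel-2
# green (B3) and SWIR (B11) reflectance arrays, thresholded into a snow mask.
import numpy as np

def ndsi(green, swir, threshold=0.4):
    """Return the NDSI and a boolean snow mask; 0.4 is a commonly used threshold."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    index = (green - swir) / (green + swir + 1e-12)  # small term avoids division by zero
    return index, index > threshold

# Toy example with two pixels: snow-covered (bright green, dark SWIR) and bare ground
green = np.array([0.60, 0.25])
swir = np.array([0.10, 0.20])
index, snow = ndsi(green, swir)
print(index, snow)  # the first pixel is classified as snow
```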
While multimission satellite altimetry over inland waters has been known and used for more than two decades, the monitoring of lakes and reservoirs is far from fully operational. Depending on the spatiotemporal coverage offered by altimetry missions, the concept has fundamental limitations. Even for medium to large water bodies where altimetry can provide meaningful information, the multimission approach is still hampered by what is known as inter-satellite bias. Studies have been performed to quantify absolute altimetry biases at calibration sites and relative altimetry biases on a global scale. However, a thorough understanding of the biases between satellites over inland waters has not yet been achieved.
We explore the possibility of resolving the biases between satellites over lakes and reservoirs. Our solution for estimating the biases between overlapping and non-overlapping time series of water levels from different missions and tracks is to rely on the time series of surface area derived from the satellite imagery. The area estimated by the imagery acts as an anchor for the water level variations, making the area-height relationship the basis for estimating the relative biases. We estimate the relative biases by modeling the area-height relationship within a Gauss-Helmert model conditioned on an inequality constraint. For the estimation, we use the expectation maximization algorithm that provides a robust estimate by iteratively adjusting the weights of the observations.
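As a rough illustration of the idea only (not of the Gauss-Helmert/expectation-maximization estimator actually used), the sketch below fits a simple linear area-height relation with a per-mission offset by ordinary least squares, so that the imagery-derived surface area anchors the relative bias; all values are hypothetical.

```python
# Simplified sketch: a plain least-squares fit of a linear area-height relation
# plus one relative bias parameter, standing in for the robust Gauss-Helmert/EM
# estimation described above. Inputs are hypothetical.
import numpy as np

area = np.array([100.0, 102.0, 105.0, 101.0, 104.0, 103.0])      # surface area (km2)
height = np.array([10.0, 10.5, 11.2, 10.7, 11.3, 11.1])          # water level (m)
mission = np.array([0, 0, 0, 1, 1, 1])                           # 0 = reference, 1 = biased mission

# Design matrix: [area, intercept, bias of mission 1 relative to mission 0]
A = np.column_stack([area, np.ones_like(area), (mission == 1).astype(float)])
params, *_ = np.linalg.lstsq(A, height, rcond=None)
slope, intercept, relative_bias = params
print(f"estimated relative bias: {relative_bias:.2f} m")
```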
We evaluate our method on a limited number of lakes and reservoirs and validate the results against in situ water level data. Our results show the presence of inter-satellite and also inter-track biases at the decimeter level, which are different from the global bias estimates.
With increasing population pressure, inland water is an ever more stressed resource for meeting human needs as well as a societal risk for local populations. It is also a fundamental element for industry and agriculture, and therefore an economic and political stake. The monitoring of inland water levels, a proxy for freshwater stocks, conditions of navigability on inland waterways, discharge, and flood prevention, is thus an important challenge. With the decreasing number of publicly available in situ water level records, the altimetry constellation brings a powerful and complementary alternative.
The Copernicus services provide operational water level time series products, with their associated uncertainties, based on satellite altimetry data over inland waters worldwide. The Copernicus Global Land Service delivers near-real-time time series updated daily over both rivers and lakes, while the Copernicus Climate Change Service (C3S) focuses on lakes, with data updated twice a year. The number of operational products has been in constant augmentation since 2017 thanks to the combined effort of CNES (THEIA Hydroweb and SWOT Aval projects) and Copernicus projects.
Evolutions to successively integrate new missions are performed regularly: the Sentinel-3A and 3B missions made it possible to define new targets and to exploit the successive upgrades of the onboard Open-Loop Tracking Commands, which ensure that the altimeters lock onto the water targets. This yielded an operational monitoring of more than 11000 virtual stations over rivers and more than 180 lakes worldwide (as of 2021). The services will also integrate Sentinel-6A in 2022 to ensure the continuity of the long lake time series under the Topex/Jason ground track. This is of particular importance for the C3S long lake water level time series.
This presentation will detail both the processes leading to the definition of new targets and their qualification for operations, as well as the regular quality assessment of the produced water level time series. The metrics and associated results will be detailed based on both intra-satellite comparisons and in situ datasets. In particular, the benefits of recent evolutions of the services will be stressed: the data precision improvement brought by the SAR mode used onboard Sentinel-3A and 3B, and the continuity of the long-term Topex/Jason time series provided by the Sentinel-6A mission, which is essential for climate purposes. A first insight will be given into further improvements of the services' products expected with the ingestion of the new Inland Water products from the Thematic Instrument Processing Facilities (T-IPF) currently under development in the ESA Mission Performance Cluster.
Estimates of the spatio-temporal variations of Earth's gravity field based on observations from the Gravity Recovery and Climate Experiment (GRACE) mission have shed new light on large-scale water redistribution at inter-annual, seasonal and sub-seasonal timescales. As an example, it has been shown that for many large drainage basins the empirical relationship between aggregated Terrestrial Water Storage (TWS) and discharge at the outlet reveals an underlying dynamics that is approximately linear and time-invariant (see attached figure for the Amazon basin).
We built on this observation to first put forward lumped-parameter models of the TWS-discharge dynamics using a continuous-time linear state-space representation. The suggested models are calibrated against the TWS anomaly derived from GRACE data and discharge records using the prediction-error method. It is noteworthy that one of the estimated parameters can be interpreted as the total amount of drainable water stored across the basin, a quantity that cannot be observed by GRACE alone. Combined with the water mass balance equation, these models form a consistent linear representation of the basin-scale rainfall-runoff dynamics. In particular, they allow a basin-scale instantaneous unit hydrograph to be derived analytically. We illustrate and discuss the results in more detail for the Amazon basin and its sub-basins, which present relatively simple TWS-discharge dynamics that are well approximated by first-order ordinary differential equations. Finally, we briefly discuss how to refine the linear models by introducing non-linear terms to better capture delays and saturations.
With such linear and non-linear models at hand, it is possible to use classical Bayesian algorithms to filter, smooth or reconstruct the basin-aggregated TWS and/or discharge in a consistent manner. As such, we argue that these lumped models can be an alternative to more complex and spatially distributed hydrological models, in particular for the reconstruction of TWS and discharge time series. We also briefly examine the conditions under which the linear models can be used to do hydrology backwards, that is, to estimate simultaneously the TWS and the unknown input precipitation minus evapotranspiration from discharge records.
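A minimal sketch of the kind of lumped dynamics described above, assuming a single first-order linear reservoir with residence time tau (the forcing, parameter value and units below are purely illustrative, not the calibrated state-space models of the study):

```python
# First-order linear reservoir: dS/dt = u(t) - S/tau, discharge Q = S/tau,
# where u is precipitation minus evapotranspiration and tau is the lumped
# residence time. Explicit-Euler simulation with synthetic monthly forcing.
import numpy as np

def simulate(u, tau, dt=1.0, s0=0.0):
    """Return storage and discharge series for forcing u and residence time tau."""
    s = np.empty(len(u))
    q = np.empty(len(u))
    state = s0
    for k, uk in enumerate(u):
        state = state + dt * (uk - state / tau)
        s[k] = state
        q[k] = state / tau
    return s, q

t = np.arange(120)                               # 120 monthly steps
u = 1.0 + 0.8 * np.sin(2 * np.pi * t / 12)       # seasonal P minus ET (arbitrary units)
storage, discharge = simulate(u, tau=4.0)        # tau = 4 months (illustrative)
print(storage[-1], discharge[-1])
```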
Sentinel-3 is an Earth observation satellite series developed by the European Space Agency as part of the Copernicus Programme. It currently consists of two satellites, Sentinel-3A and Sentinel-3B, launched on 16 February 2016 and 25 April 2018, respectively. Among the on-board instruments, the satellites carry a radar altimeter to provide operational topography measurements of the Earth's surface. Over inland waters, the main objective of the Sentinel-3 constellation is to provide accurate measurements of the water surface height, to support the monitoring of freshwater stocks. Compared to previous missions embarking conventional pulse-limited altimeters, Sentinel-3 measures the surface topography with an enhanced spatial resolution, thanks to the on-board SAR Radar ALtimeter (SRAL), exploiting its delay-Doppler capabilities.
To further improve the performance of the Sentinel-3 Altimetry LAND products, ESA is developing dedicated and specialised delay-Doppler and Level-2 processing chains over (1) inland waters, (2) sea ice, and (3) land ice areas. These so-called Thematic Instrument Processing Facilities (T-IPF) are currently under development, with an intended deployment by mid-2022. Over inland waters the T-IPF will include new algorithms, in particular Hamming windowing and zero-padding. Thanks to the Hamming window, the waveforms measured over specular surfaces are cleaned from spurious energy spread by the azimuth impulse response. The zero-padding provides a better sampling of the radar waveforms, which is notably valuable in the case of specular energy returns.
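The two processing steps can be illustrated generically. The sketch below (random complex echoes and illustrative array sizes, not the T-IPF implementation) applies a Hamming weighting before the azimuth FFT and zero-pads the range FFT to oversample the resulting waveform:

```python
# Illustrative sketch only: Hamming weighting of a burst of complex echoes
# before the azimuth FFT (to suppress azimuth sidelobes) and zero-padding of
# the range FFT (to double the waveform sampling). `burst` is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
burst = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))  # pulses x range gates

# Hamming weighting along the pulse (azimuth) dimension
window = np.hamming(burst.shape[0])[:, None]
beams = np.fft.fft(burst * window, axis=0)

# Zero-padding along range: doubling the FFT length oversamples the waveform
padded = np.fft.fft(beams, n=2 * burst.shape[1], axis=1)
waveform = np.mean(np.abs(padded) ** 2, axis=0)   # multilooked power waveform
print(waveform.shape)                             # (256,) range samples instead of 128
```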
To ensure the mission requirements are met, ESA has set up the S3 Land Mission Performance Cluster (MPC), a consortium in charge of the qualification and monitoring of the instrument and core product performances. In this poster, the Expert Support Laboratories (ESL) of the MPC present a first performance assessment of the T-IPF Level-2 products over inland waters. The analyses presented cover a large set of lakes and rivers worldwide. Comparisons with in situ datasets, for example benefiting from the contribution of the St3TART project, will provide an estimate of the topography precision and will be discussed for rivers and lakes of various sizes. Inter-satellite comparisons are also within the scope of the studies, and the consistency of water surface height estimates between Sentinel-3 and ICESat-2 will complement this analysis.
The quality step-up provided by the hydrology thematic products, highlighted in this poster, is a major milestone. Once the dedicated processing chain is in place for inland water acquisitions, the Sentinel-3 STM Level-2 products will evolve and improve more efficiently over time to continuously satisfy new requirements from the Copernicus Services and the scientific community.
In the frame of the ESA HYDROCOASTAL project, led by Satoc Ltd, a Test Dataset (TDS) is being developed by partners, starting from altimetry L1A data products (Sentinel-3, CryoSat-2) up to L2, covering the coastal and inland water domains. Specific products then target the monitoring of inland water: L3 for river and lake water level estimation and L4 for the estimation of river discharge.
The TDS is a test-benchmark run focusing on selected regions of interest. It serves to perform extensive validation activities, with the objective of qualifying and quantifying the quality of the various output products (L2, L3 and L4). The outcomes of this activity will serve to validate the methods and algorithms to be adopted by the team before the project initiates the production of the Globally Validated Products (GVP).
This presentation focuses on the results obtained at L3 for the estimation of river water level.
The partners involved in the production of the L2 products have all developed and implemented specific waveform retracking algorithms. The outputs of each of these retrackers, limited to those implemented over the inland water domain, are processed further to produce L3 river water level estimates.
The results will describe the performance of each of the L2 retrackers from the L3 point of view. The analysis will not only focus on the vertical accuracy of the data but also on the ability of the complete L1-to-L3 chain to produce consistent water level time series, considering the effective temporal sampling as another key indicator of data quality.
The validation activity is done over the Amazon basin and involves the systematic comparison to in situ data from the ANA (Agência Nacional de Águas, Brazil).
Eventually, the results will highlight the strengths and possible weaknesses of the retracking algorithms, helping to decide which retrackers are eligible to be implemented in the production of the ultimate GVP datasets.
Across Iran, extraction of non-renewable groundwater has sparked water-related stress, increased salinisation of groundwater sources, and accelerated ground subsidence (Olen, 2021).
Both local and regional scale land-surface deformation has resulted from the decline in groundwater levels (Motagh et al., 2008). Moreover, the gap between groundwater use and renewal is so large that the resulting short-term impacts are likely to be irreversible (Olen, 2021). Quantifying the extents and rates of deformation related to groundwater extraction could therefore inform groundwater management approaches.
Here we present a catalogue of around sixty major, currently subsiding basins within the political borders of Iran. We use the COMET LiCSAR automated processing system to process seven years (2015-2021) of Sentinel-1 SAR acquisitions. The system generates short baseline networks of interferograms. We also correct for atmospheric noise using the GACOS system (Yu et al., 2018) and perform time-series analysis using open-source LiCSBAS software (Morishita et al., 2019) to estimate the cumulative deformation.
We also present vertical and horizontal velocity components of basin subsidence obtained through the decomposition of line-of-sight InSAR velocities. Subsiding basins are characterised and catalogued using the resulting interferogram time series, based upon the extents and rates of vertical motion associated with basin and agricultural areas. The LiCSBAS time series analysis reveals maximum vertical subsidence rates of thirty-six centimetres per year in basins north-west of Tehran.
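The decomposition step follows the common two-geometry least-squares approach, assuming negligible north-south motion. The sketch below uses illustrative incidence angles and sign conventions and is not the LiCSAR/LiCSBAS code:

```python
# Minimal sketch: decompose ascending and descending line-of-sight (LOS)
# velocities into vertical and east-west components per pixel, assuming
# negligible north-south motion. Angles and velocities are illustrative.
import numpy as np

def decompose(v_asc, v_dsc, inc_asc=np.radians(39), inc_dsc=np.radians(39)):
    """Return (v_up, v_east) in the same units as the LOS velocities."""
    # Up component scales with cos(incidence); the east component changes sign
    # between the (right-looking) ascending and descending passes.
    G = np.array([[np.cos(inc_asc), -np.sin(inc_asc)],
                  [np.cos(inc_dsc),  np.sin(inc_dsc)]])
    sol, *_ = np.linalg.lstsq(G, np.array([v_asc, v_dsc]), rcond=None)
    return sol

v_up, v_east = decompose(v_asc=-28.0, v_dsc=-25.0)   # mm/yr, hypothetical pixel
print(f"vertical: {v_up:.1f} mm/yr, east-west: {v_east:.1f} mm/yr")
```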
Finally, we present and demonstrate a beta version of the COMET-LiCS Sentinel-1 InSAR Subsiding Basin Portal. The portal aims to provide tools for the online analysis of automatically processed LiCSAR Sentinel-1 interferograms and the subsequent LiCSBAS time series. The portal's tools are designed to allow key stakeholders to search quickly through processed imagery and make critical assessments of the extents and rates of basin subsidence. Initially the portal characterises Iranian basins, but it will increasingly take on a global focus. The portal's interferograms will be updated and its time series extended as more Sentinel-1 data are acquired.
Future work will focus on determining which basins are experiencing accelerating or decelerating subsidence rates. Ultimately, our quantification of ground deformation, particularly subsidence related to groundwater withdrawal, could contribute to the development of a wider framework for monitoring complex risk pathways in similar water-stressed regions.
Subsidence is defined as gentle and gradual land surface lowering or collapse, which can be caused by either natural or anthropogenic processes. Land subsidence proceeds slowly as sediments compact under the pressure of overlying deposits. Motions that are more intense in amplitude and faster in time can be induced or worsened by human activity, such as groundwater withdrawal or underground mining.
Conventional geodetic measurement techniques have long been used for monitoring deformation processes. In particular, several methods, including levelling, total station surveys and GPS, are still used for subsidence detection and monitoring. Nevertheless, these techniques are suitable only for local and site-specific analyses, as they measure subsidence on a point-by-point basis and require a dense network of ground survey markers.
The space-borne radar interferometry approach measures ground movement between two radar images acquired at different times over the same area, on a pixel-by-pixel basis. It is remotely sensed, covers wide areas, and is quicker and less labour-intensive than conventional ground-based survey methods. In recent years, radar interferometry has grown rapidly and become a well-established Earth observation technique, and the last decades have witnessed a large exploitation of satellite InSAR (Interferometric Synthetic Aperture Radar) data.
The completeness of the Web of Science (WoS) database was exploited to collect and critically review the current scientific state of the art in the field of subsidence analysis using satellite InSAR over the last thirty years. Since the pioneering work dating back to the late nineties, the use of InSAR has increased dramatically and, thanks to technological advances in both acquisition sensors and processing algorithms, it is now possible to cover subsidence-related deformation analysis at all its stages, such as detection and mapping, monitoring and characterisation, and modelling and simulation.
This work aims to illustrate the role of satellite interferometry for subsidence analysis at a worldwide level over the last 20 years, highlighting current applications and future perspectives. From the WoS database, original articles, book chapters, conference proceedings, and extended abstracts written in English and published by international journals after peer review, in which the authors exploited the InSAR technique to study subsidence, were gathered. The data collection covered contributions referring to every area of the world, searched state by state. The data collection was carried out in May 2021 and a final list of 766 contributions was retained. After a first skimming, each article was read and critically analysed in order to extract several pieces of information and to identify irrelevant contributions returned automatically by the WoS advanced search parameters. After this in-depth analysis, 73 contributions were removed from the list because they related to volcanic systems, earthquakes or fault movements, or dam structures. In the end, the database was filled with the information extracted from 693 contributions.
Besides the general information about the selected contributions automatically downloaded from WoS (e.g. publication type, authors and related affiliations, article title, etc.), new fields were added to catalogue each article and obtain a more detailed characterisation:
- “CU” - country/region of the authors;
- Area of Interest - localisation of the study area investigated in the analysed work;
- SAR Satellite - list of the SAR satellites used to develop the investigation;
- Cause - list of the triggering factors of the subsidence as indicated in each contribution;
- Processing Technique - the processing technique adopted to retrieve ground deformation;
- Applications - the presented works categorised according to the aim of the study;
- Integration - the validation or integration, if any, with other data;
- Field evidence - the recording of damage to structures or the ground.
The literature review highlighted that subsidence analyses cover 62 countries with at least one case study each, by corresponding authors from 46 different nations. All continents are covered, with the exception of Antarctica. The most represented country is China with 258 applications, followed by the USA, Italy and Mexico with 59, 53 and 43 applications, respectively (Figure 1).
All the radar imagery archives were exploited to collect past and recent information on ground subsidence. The most used satellite platform turned out to be the C-band Envisat (286 applications), followed by Sentinel-1 (154), ERS (153), the L-band ALOS-1 (126), and the X-band TerraSAR-X (95).
Concerning the image processing techniques adopted, both DInSAR (Differential InSAR) and multi-temporal InSAR algorithms were exploited, with 162 and 590 applications, respectively. Among the multi-temporal InSAR approaches, SBAS (Small BAseline Subset) is represented by 187 case studies, PSInSAR (Persistent Scatterer Interferometry) is used in 146 cases, and IPTA (Interferometric Point Target Analysis) is used within 36 contributions.
The triggering factors resulting in land subsidence can be divided into two main groups, anthropogenic and natural. The first category counted 716 contributions, distributed mainly among groundwater exploitation (383), mining activities (137) and urban loading (105), while in the second category the most represented factors are sediment compaction (117) and tectonic deformation (19).
Finally, the main aim of each contribution was identified: 290 works were dedicated to the monitoring of subsidence phenomena, 171 were devoted to the precise mapping of the extent of the vertical displacement, and in 78 cases InSAR data were used for the modelling of the deformation.
Therefore, satellite InSAR has largely demonstrated its high value for subsidence studies in different settings and for different purposes, providing insights into slow-moving subsidence deformation mechanisms. The upcoming Europe-wide EGMS (European Ground Motion Service), whose first baseline release is foreseen for the end of 2021, will represent fundamental support for land subsidence analysis, providing a wealth of information on surface vertical deformation.
Ground motion, such as land subsidence, can be due to human causes (groundwater extraction, artificial loading) and natural causes. The latter are related to the geological setting, the properties of the soil and also to climatic stress (drought periods), and they can compound human-induced subsidence. More than one cause may contribute to the ground deformation, and it can be difficult to determine and quantify the contribution of each. In addition, socio-economic factors, such as the increase in water demand, urbanisation and population growth, can worsen subsidence, especially subsidence due to the extraction of groundwater, which is often overexploited. InSAR data can be a valid support in the study of ground movements, providing useful products (from time series up to displacement maps) that can cover wide areas even where in situ monitoring instruments may be missing. This work focuses on the analysis of A-DInSAR time series by applying several methodologies; additional factors, such as topography, lithology, land use and geological setting, will also be taken into account. In particular, the ONtheMOVE methodology (InterpolatiON of InSAR Time series for the dEtection of ground deforMatiOn eVEnts) will be used to classify the A-DInSAR time series trend (uncorrelated, linear, non-linear) and to identify areas with clusters of non-linear targets. Wavelet analysis and Independent Component Analysis will be performed on both the A-DInSAR data and the piezometric data in order to unravel and correlate the main components of both sets of time series. The satellite data used cover the period from 2015 to 2021 in a test site in the Brescia province (Lombardia region, Italy), in which subsidence is related not only to groundwater extraction but also to the compaction of clay, peat oxidation and compaction due to artificial loading. The results of this study will contribute to improving the knowledge of ground deformation in the test site, and they will be helpful in the characterisation of aquifer parameters to fill gaps in data, especially where in situ monitoring systems are scarce.
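As an illustration of the planned decomposition step, the sketch below applies scikit-learn's FastICA to a synthetic stack of displacement time series containing a trend and a seasonal component; the data, epoch spacing and component count are placeholders, not the Brescia dataset:

```python
# Hedged sketch: Independent Component Analysis applied to a stack of
# displacement time series to separate a linear trend from a seasonal signal.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.arange(150)                                   # ~150 hypothetical epochs
trend = -0.05 * t                                    # slow subsidence (mm)
seasonal = 2.0 * np.sin(2 * np.pi * t / 61)          # annual cycle for a 6-day revisit
# 200 targets as random mixtures of the two sources plus noise
mixing = rng.uniform(0.2, 1.0, size=(200, 2))
X = mixing @ np.vstack([trend, seasonal]) + 0.3 * rng.standard_normal((200, len(t)))

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X.T)                     # shape: (epochs, components)
print(sources.shape)
```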
The Willcox Basin, located in southeastern Arizona, USA, covers an area of approximately 4,950 km2 and is essentially a closed, broad alluvial valley basin. The basin measures approximately 15 km to 45 km in width and is 160 km long. Long-term excessive groundwater exploitation for agricultural, domestic and stock applications has resulted in substantial ground subsidence in the Willcox Groundwater Basin. The land subsidence rate of the Willcox Basin has not declined but has rather increased in recent years, posing a threat to infrastructure, aquifer systems, and ecological environments.
In this study, the spatiotemporal characteristics of land subsidence in the Willcox Groundwater Basin were first investigated using an interferometric synthetic aperture radar (InSAR) time series analysis approach with L-band ALOS and C-band Sentinel-1 SAR data acquired from 2006 to 2020. The overall deformation patterns are characterized by two major zones of subsidence, with the mean subsidence rate increasing with time from 2006 to 2020. The trend of the InSAR time series is in accordance with that of the groundwater level, producing a strong positive correlation (≥0.93) between the subsidence and the groundwater level drawdown, which suggests that the subsidence results from human-induced compaction of sediments due to massive pumping in the deep aquifer system and groundwater depletion.
In addition, the relationship between the observed land subsidence variations and the hydraulic head changes in the confined aquifer was analyzed in accordance with the principle of effective stress and hydromechanical consolidation theory. By integrating the InSAR deformation and groundwater level data, the response of the aquifer skeletal system to the change in hydraulic head was quantified, and the hydromechanical properties of the aquifer system were characterized. The estimated storage coefficients, ranging from 6.0×10^-4 to 0.02 during 2006-2011 and from 2.3×10^-5 to 0.087 during 2015-2020, signify an irreversible and unrecoverable deformation of the aquifer system in the Willcox Basin. The reduced average storage coefficient (from 0.008 to 0.005) indicates that long-term overdraft has already degraded the storage capacity of the aquifer system and that groundwater pumping activities are unsustainable in the Willcox Basin. The historical spatiotemporal storage loss from 1990 to 2020 was also estimated using the InSAR measurements, hydraulic heads and estimated skeletal storativity. The estimated cumulative groundwater storage depletion was 3.7×10^8 m3 from 1990 to 2006.
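The storage coefficient estimation rests on the one-dimensional relation between compaction and head change; a minimal sketch, with illustrative numbers rather than the Willcox Basin values, is:

```python
# Minimal sketch under the one-dimensional compaction assumption: the
# (skeletal) storage coefficient is the ratio of the change in aquifer-system
# thickness (InSAR-derived vertical displacement) to the change in hydraulic
# head over the same interval. Values below are illustrative only.
import numpy as np

def storage_coefficient(displacement_m, head_change_m):
    """S = delta_b / delta_h, both taken over the same time span."""
    return np.asarray(displacement_m, float) / np.asarray(head_change_m, float)

# e.g. 12 cm of compaction accompanying a 20 m head decline
print(storage_coefficient(-0.12, -20.0))   # -> 0.006 (dimensionless)
```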
Understanding the characteristics of land surface deformation and quantifying the response of aquifer systems in the Willcox Basin and other groundwater basins elsewhere are important in managing groundwater exploitation to sustain the mechanical health and integrity of aquifer systems.
Groundwater has been extracted in the municipality of Delft since 1916. The extraction used to be owned by a private yeast factory, but when the company recently ceased production, the extraction was transferred to the municipality of Delft. Though the extraction has no functional use at this time, the city of Delft is worried that, given the size of the extraction, currently 1200 m3 (about an Olympic swimming pool) per hour, stopping it will have a large effect on the buildings in the historic 16th-century city centre. The current annual cost of water disposal for the municipality is €2.5 million.
Since 2017 the municipality has been slowly phasing out the extraction. The reduction in groundwater extraction must be carefully controlled to avoid an abrupt rise of the surface level due to swelling of the ground and consequent damage to infrastructure. To monitor the effects of the reduction, extensive measurements, such as groundwater levels, have been collected since 2010.
It is estimated that the shutdown of these wells over a 30-year period could lead to ground swelling of more than 10 cm near the extraction wells. Abrupt and uneven swelling can cause damage to buildings and to infrastructure such as tunnels and parking garages. There are 70,000 buildings within 5 km of the extraction site, including a range of irreplaceable historic buildings.
To supplement the groundwater level measurements, InSAR has been used to monitor changes in the uplift of the soil (since 2014) and to guide the speed with which the groundwater extraction is reduced; local levelling campaigns did not cover the full extent of the impacted area. Prior to the reduction in groundwater abstraction, the area was subsiding at -1 to -2 mm/yr. Between 2016 and 2019, displacement rates remained fairly stable, with a gradual reduction in subsidence rates from -1 to -2 mm/yr to 0.0 mm/yr.
However, from June 2019, the ground started to swell locally at +1 mm/yr. Based on these InSAR swelling observations, it was decided to pause the phase-out during 2021, prolonging the significant costs of extracting water by another year. At the moment the area is continuously monitored with InSAR measurements every 11 days, and in early 2022 it will be assessed whether the phase-out can be continued in 2022.
In semi-arid regions characterized by large agricultural activities, a high volume of water is needed to cover the water requirements of agricultural production. Due to low precipitation and the associated limited availability of surface water, aquifers often represent the main source of irrigation water in these regions. In most cases, groundwater abstraction and its management are not well documented because of technical and financial restrictions. Thus, there is a high demand and need for improved and sustainable monitoring approaches.
Over the past decades, remote sensing has been established as an effective and powerful tool to monitor the planet's surface. The Copernicus Programme of the European Commission (EC), in partnership with the European Space Agency (ESA), offers strong possibilities for satellite-based monitoring. Since the first Sentinel mission was launched in 2014, a solid database of satellite imagery with a high temporal resolution has been made available to everyone under a free and open data policy. In particular, Interferometric Synthetic Aperture Radar (InSAR) techniques have gained increasing attention for groundwater management and may help to derive reliable information on the subsurface.
In this study, carried out in the framework of a German-Moroccan international cooperation, the Chtouka region with its eponymous aquifer in southern Morocco has been chosen. It represents a region of great importance for the export of agricultural products and the national trade balance and therefore depends on anticipatory, sustainable groundwater management. In addition, high groundwater abstraction rates significantly change the flow dynamics of this coastal aquifer and lead to increased saltwater intrusion, deteriorating the groundwater quality in the long term.
Sentinel-1 C-band data have been used to measure the ground displacement and velocities over the past six years. Two smaller areas in the Chtouka region have been identified where interferometric ground deformation maps and available piezometric head measurements are investigated. Based on these observations, a correlation can be established between the ground motion and the change in groundwater level. The results can improve sustainable groundwater management by directly quantifying the groundwater abstraction where in situ data are insufficient and by filling gaps in monitoring data. In addition, simulations of future ground motion can be run to support the regulation of groundwater abstraction for agricultural purposes.
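A simple way to quantify such a correlation, including a possible time lag between head change and ground motion, is sketched below with synthetic monthly series (the real analysis uses the Sentinel-1 displacement and piezometric records of the two Chtouka sub-areas):

```python
# Illustrative sketch: lagged Pearson correlation between a piezometric head
# series and an InSAR displacement series; all data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(72)                                    # six years of monthly epochs
head = -0.3 * t + 2.0 * np.sin(2 * np.pi * t / 12)   # declining head with seasonality (m)
displacement = 0.5 * np.roll(head, 2) + 0.2 * rng.standard_normal(len(t))  # responds with ~2-month delay (mm)

def lagged_correlation(x, y, max_lag=6):
    """Return the lag (in epochs) and Pearson r with the largest |r|."""
    return max(
        ((lag, np.corrcoef(x[:len(x) - lag] if lag else x, y[lag:])[0, 1])
         for lag in range(max_lag + 1)),
        key=lambda p: abs(p[1]),
    )

lag, r = lagged_correlation(head, displacement)
print(f"best lag: {lag} months, Pearson r = {r:.2f}")
```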
Land subsidence is a geological hazard which can be induced by anthropogenic factors, mainly related to the extraction of fluids. The San Luis Potosí metropolitan area has suffered considerable damage induced by the overexploitation of the aquifer system over the past decades. The city sits on a tectonic graben delimited by mountain systems. The basin was filled over the years by pyroclastic material and alluvial and lacustrine sediments, which compose the upper aquifer and the top layer of the deep aquifer. With a semi-arid climate and no permanent watercourse, the population's water supply depends on small surrounding dams and groundwater resources. Owing to these conditions, 84% of the water demand in the valley is nowadays covered by groundwater. Consequently, the static level of the aquifer has declined by up to 95 m since the 1970s. The continuous decline increases the effective stress acting on the unconsolidated Quaternary sediments, and therefore the areas with the greatest accumulated thickness (up to 600 m in the center of the aquifer) consolidate. In this study the relationship between piezometric level evolution and land subsidence is analyzed. To this aim, we applied the Coherent Pixels Technique (CPT), a Persistent Scatterer Interferometry (PSI) technique, using 112 Sentinel-1 acquisitions from October 2014 to November 2019 to estimate the distribution of deformation rates. We then compared the PSI time series with the piezometric level changes using 24 well records for the period 2007-2017. The results indicate a clear relationship between these two factors. The zones with the greatest drawdowns in the piezometric levels match those areas exhibiting the greatest thickness of deformable materials and the maximum subsidence. Therefore, the storage coefficient (S) of the aquifer system was calculated using the vertical compaction (∆D) measured by means of PS-InSAR data for a piezometric level change ∆h. The ratio of the change in displacement to the change in groundwater level under continuous and permanent drawdown represents the inelastic storage coefficient (Skv). The Skv values obtained from this analysis agree with previous in situ studies, highlighting the usefulness of PS-InSAR-derived data for calculating hydrological parameters in detrital aquifer systems affected by land subsidence owing to groundwater withdrawal.
Land subsidence is a geological hazard characterized by the gradual downward movement of the ground surface. It can be induced by natural processes (e.g. tectonics, diagenesis) or human activities (e.g. subsurface fluid extraction). Extensive groundwater withdrawal from aquifer systems is the main factor causing land subsidence in areas where surface water is scarce. Groundwater pumping causes a pressure decline in the sandy units and in the adjacent unconsolidated deposits (aquitards). As a result, the stress exerted by the load of the overlying deposits is transferred to the grain-to-grain contacts, increasing the effective intergranular stress. Depending on the compressibility of the soil, the depleted layers (aquifers and intervening aquitards) compact, thus causing land subsidence. Among other risks, compaction permanently reduces the capacity of the aquifer system to store water. Therefore, assessing land subsidence is a key step to understand and model aquifer deformation and groundwater flow, which can help to design sustainable groundwater management strategies.
Advanced Differential Interferometric Synthetic Aperture Radar (A-DInSAR) is a satellite remote sensing technique widely used to monitor land subsidence. The Sentinel-1 mission, from the Copernicus European Union's Earth Observation Programme, comprises a constellation of two polar-orbiting SAR satellites that provide an enhanced revisit frequency and worldwide coverage under a free, full, and open data policy. To handle and process the huge and constantly growing Sentinel-1 archive, the Geohazards Exploitation Platform (GEP) on-demand web tool initiative was launched in 2015. In this online processing service, SAR images and A-DInSAR algorithms are brought together in a user-friendly interface. The processing chains run automatically on the server with very little user interaction.
The GEP service is particularly useful for a preliminary land subsidence analysis, as all the data and technical resources are external and the processing time is relatively fast. We tested different A-DInSAR algorithms (named Thematic Applications) included in the GEP to explore land subsidence in four water-stressed aquifers around the Mediterranean basin. Located in Spain, Italy, Turkey and Jordan, they are characterized by largely different hydrogeologic features. These pilot sites are studied within the framework of the RESERVOIR project, which aims to provide new products and services for a sustainable groundwater management model. This project is funded by the PRIMA programme supported by the European Union. The preliminary land subsidence results provided line-of-sight (LOS) velocity maps obtained from the GEP and allowed us to identify potential deformation over wide areas before carrying out more refined and conclusive A-DInSAR analyses.
Land subsidence triggered by the overexploitation of groundwater in the Alto Guadalentín Basin (Spain) aquifer system poses a significant geological-anthropogenic hazard. In this work, for the first time, we propose a new point cloud differencing methodology to detect land subsidence, based on the multiscale model-to-model cloud comparison (M3C2) algorithm. This method is applied to two airborne LiDAR datasets acquired in 2009 and 2016, both with a density of 0.5 points/m2. The results show vertical deformation rates of up to 10 cm/year in the basin during the period from 2009 to 2016, in agreement with the displacement reported in previous studies. First, the iterative closest point (ICP) algorithm is used for the point cloud registration, with a very stable and robust performance. The LiDAR datasets are affected by several sources of error related to the construction of new buildings and to changes caused by vegetation growth; these errors are removed by means of gradient filtering and the cloth simulation filtering (CSF) algorithm. Other sources of error are related to internal edge-connection errors between the different flight lines. To address these, a point cloud smoothing method incorporating the average, maximum and minimum cell elevation is applied. The LiDAR results are compared to the velocity measured by a continuous GNSS station and to an InSAR dataset. For the GNSS-LiDAR comparison, the average velocity from a buffer area of the point cloud dataset is used. For the InSAR-LiDAR comparison, a 100 m x 100 m grid is computed in order to assess similarities and discrepancies. The results show a good correlation between the vertical displacements derived from the three surveying techniques. Furthermore, the LiDAR results have been compared with the distribution of soft soil thickness, showing a clear relationship. The detected ground subsidence is a consequence of the evolution of the piezometric level of the Alto Guadalentín aquifer system, which has been exploited since the 1960s, producing a large groundwater level drop. The study underlines the potential of LiDAR to monitor the range and magnitude of vertical deformations in areas prone to aquifer-related land subsidence.
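For readers who want a feel for the differencing step, the sketch below rasterises two synthetic point clouds onto a common grid of mean elevations and differences them; this is a simplified stand-in for the M3C2 comparison applied after ICP registration, with placeholder extents, cell size and elevations:

```python
# Simplified sketch only (the study uses M3C2 after ICP registration):
# grid-average two (x, y, z) point clouds and difference the resulting
# elevation surfaces to approximate vertical change. Data are synthetic.
import numpy as np

def grid_mean_elevation(points, cell=100.0, extent=(0.0, 1000.0, 0.0, 1000.0)):
    """Average point elevations into a regular grid with the given cell size."""
    xmin, xmax, ymin, ymax = extent
    nx, ny = int((xmax - xmin) // cell), int((ymax - ymin) // cell)
    ix = np.clip(((points[:, 0] - xmin) // cell).astype(int), 0, nx - 1)
    iy = np.clip(((points[:, 1] - ymin) // cell).astype(int), 0, ny - 1)
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(total, (iy, ix), points[:, 2])
    np.add.at(count, (iy, ix), 1)
    with np.errstate(invalid="ignore"):
        return total / count            # NaN where a cell holds no points

rng = np.random.default_rng(3)
epoch_2009 = np.column_stack([rng.uniform(0, 1000, 50000),
                              rng.uniform(0, 1000, 50000),
                              rng.normal(350.0, 0.2, 50000)])
epoch_2016 = epoch_2009.copy()
epoch_2016[:, 2] -= 0.7                 # 70 cm of synthetic subsidence
difference = grid_mean_elevation(epoch_2016) - grid_mean_elevation(epoch_2009)
print(np.nanmean(difference))           # ~ -0.7 m
```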
Groundwater plays a critical role for ecosystems and is a vital resource for humankind, providing one-third of the freshwater demand globally. When groundwater is extracted unsustainably (i.e. groundwater extraction exceeds groundwater recharge over extensive areas and for an extended period of time), groundwater levels inevitably decline and can lead to aquifer depletion, which can pose a risk to the sustainability of urban developments and any groundwater-dependent activities. In an ever-changing world, it is increasingly important to effectively manage our aquifer systems to ensure the longevity of groundwater resources.
This work is undertaken as a partnership between the University of Pavia and the ResEau-Tchad project (www.reseau-tchad.org), with a focus on the urban area of N’Djamena, the fast-growing capital city of Chad. Groundwater contained within the phreatic and semi-confined aquifers underlying the city acts as the main source of water for a population of around one million inhabitants. With an annual growth rate of 7%, the reliance on groundwater for drinking and agricultural purposes is becoming more important. As a result, there is an increasing pressure on urban sanitation infrastructures that have failed to meet the current demand. Additionally, in recent years this area has experienced frequent flooding which is linked to the overflow of the Chari and Logone rivers and increased extreme precipitation events, which may be exacerbated by land subsidence induced by groundwater overexploitation.
Through the use of Advanced Differential Interferometric Synthetic Aperture Radar (A-DInSAR) techniques, land displacement in N’Djamena and the surrounding area has been spatially and temporally quantified for the first time. The current work aims to present two different InSAR processing techniques, Persistent Scatterers (PS) and Small BAseline Subset (SBAS), in a comparative way and also a preliminary analysis of spatial-temporal correlations between deformation measurements and groundwater levels to evaluate a possible cause and effect relationship.
InSAR is a technique that provides a measurement of ground deformation which, in the context of groundwater management, is controlled by the physical parameters of the aquifer such as soil compressibility, thickness, and storativity. Thus, while InSAR results enable large-scale, high resolution measurements of land displacement (on the scale of millimetres), InSAR-derived data itself is not directly quantitative without lithological knowledge of the subsurface. Therefore, the methodology developed to interpret the InSAR data and characterise the groundwater resources of N’Djamena is based on a multidisciplinary approach that integrates limited, in-situ hydrogeological measurements, including groundwater levels collected during a monitoring regime conducted from June 2020 to July 2021, along with the development of a three-dimensional subsurface lithological model based on the collection of available borehole logs and fieldwork validation.
To generate measurements of land displacement, both the PS and SBAS InSAR techniques have been applied to detect surface deformations in N’Djamena and its surrounding area. The PS-InSAR approach analyses interferograms generated with a common master image to produce a signal that remains coherent from one acquisition to another by exploiting temporally stable targets. Alternatively, the SBAS approach relies on small baseline interferograms that maximize the temporal and spatial coherence. In this work, both techniques have been applied in the study area using two time-series of descending and ascending Sentinel-1 Synthetic Aperture Radar images obtained from April 2015 to May 2021. The PS-InSAR technique mainly focuses on the urban area to obtain a high density of PSs, enabling more accurate land deformation measurements. The PS-InSAR vertical deformation rate ranges from -13 mm/yr to 21 mm/yr, while the SBAS values are in the range of -71 mm/yr to 32 mm/yr. The difference in velocity ranges can be explained by the different spatial coverage achieved by the two processing techniques, as the SBAS method provides results even over non-urban areas, which is where the higher displacement rates are estimated. The deformation rate maps obtained from the PS-InSAR and SBAS results are compared from a quantitative and qualitative point of view, taking into account the different types of movement derived from the techniques. The land deformation depicted for the urban area by the two processing techniques indicates a similar pattern of displacement (similar areas of subsidence and uplift). Although the pattern of displacement indicated by the two datasets is similar, the average velocity values obtained with PS-InSAR tend to be noisier than the ones derived using the SBAS technique, particularly when the SBAS time-series shows non-linear deformation trends.
The approach used in this work exploits advanced satellite-based Earth Observation techniques in order to gain further insight into the behaviour of the aquifer system in a region where hydrogeological monitoring is still largely absent. It is anticipated that the findings will help to improve the characterisation of the aquifer and groundwater resource management in the city of N’Djamena and could be further exploited for strategic decisions in sanitation risk management.
Water scarcity is a constant concern for millions of people around the world without access to clean water. This reality is also found in the city of Recife, located in the northeast of Brazil. The municipality is built on an estuarine plain crossed by several rivers (Capibaribe, Beberibe, Tejipió). Over the past 50 years, population growth combined with periods of surface water crisis has significantly boosted groundwater use. The capture of this resource, however, occurs in an indiscriminate way in a large part of the city, and groundwater management is inefficient. The biggest limitation is evident in the control of wells, which are estimated to number more than 13 thousand; most are illegal and unknown to the inspection bodies. Over the decades, the weakness in groundwater management has contributed to the overexploitation of the confined aquifers. The excessive removal of water from the subsoil has caused a reduction in the piezometric level to values exceeding 100 m in the southern part of Recife, in the densely built-up neighborhood of Boa Viagem. This implies a strong risk of land subsidence. This geological phenomenon causes surface lowering and is of greatest concern in urban areas. The deformation of the terrain can have significant impacts on infrastructure and the environment, causing economic and social damage and compromising people's quality of life. Several cities around the world live with this situation; in addition to natural causes, the main occurrences result from human action through the intense exploitation of aquifers. The aim of this research is to use interferometric synthetic aperture radar (InSAR) to detect land subsidence in the coastal plain of Recife caused by the exploitation of groundwater resources. The use of this technology is seen as an innovation with respect to current practice, which is based on terrestrial measurement techniques. The procedure is performed with persistent scatterer interferometry (PSI), analysing SAR data at the single-look complex (SLC) processing level from the following satellite images: COSMO-SkyMed (ascending orbit, HH polarization, X-band), Sentinel-1 (descending orbit, VV polarization, C-band) and PAZ (ascending and descending orbits, HH polarization, X-band). Preliminary results reveal a correlation between land subsidence and the reduction of groundwater in the southern zone due to water desaturation in the neighborhood of Boa Viagem, with a velocity close to -3 mm/year. Thus, the wide availability of interferometric data from satellite SAR missions, associated with an advanced processing method, should provide a better understanding of the processes that generate surface instability, such as land subsidence. Using InSAR provides opportunities to test hypotheses and investigate situations that were previously impractical due to the lack of adequate information. Its application opens the way for new perspectives in the study of the compaction of compressible sediments in the Recife coastal plain as a result of the decline in groundwater levels.
Keywords: land subsidence; groundwater; Recife; SAR interferometry
Cyanobacterial harmful algal blooms are an increasing threat to coastal and inland waters. These blooms can be detected using optical radiometers due to the presence of phycocyanin (PC) pigments. However, the spectral resolution of the best-available multispectral sensors limits their ability to diagnostically detect PC in the presence of other photosynthetic pigments. To assess the role of spectral resolution in the determination of PC, a large (N=905) database of co-located in situ radiometric spectra and PC collected from a number of inland waters is employed. We first examine the performance of selected widely used machine learning (ML) models against that of benchmark algorithms for hyperspectral remote sensing reflectance (Rrs) spectra resampled to the spectral configuration of the Hyperspectral Imager for the Coastal Ocean (HICO) with a full width at half maximum of < 6 nm. The ML algorithms tested include Partial Least Squares (PLS), Support Vector Regression (SVR), eXtreme Gradient Boosting (XGBoost), and the Multilayer Perceptron (MLP). Results show that the MLP neural network applied to the HICO spectral configuration (median errors < 65%) outperforms the other scenarios. This model is subsequently applied to Rrs spectra resampled to the band configurations of existing hyperspectral (PRecursore IperSpettrale della Missione Applicativa; PRISMA) and multispectral (OLCI, MSI, OLI) satellite instruments and of the configuration proposed for the next Landsat sensor. The performance assessment was conducted for a range of optical water types, separately and combined. These results confirm that, when developing algorithms applicable to all optical water conditions, the performance of MLP models applied to hyperspectral data surpasses that of those applied to multispectral datasets (with median errors between ~73% and 126%). Also, when cyanobacteria are not dominant (PC:Chla smaller than 1), the MLP applied to hyperspectral data outperforms the other scenarios, while the MLP model applied to OLCI performs best when cyanobacteria are dominant (PC:Chla equal to or greater than 1). Therefore, this study quantifies the MLP performance loss when datasets with lower spectral resolutions are used for PC mapping. Knowing the extent of the performance loss, researchers can either employ hyperspectral data at the cost of computational complexity or utilize datasets with reduced spectral capability in the absence of hyperspectral data.
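To make the MLP scenario concrete, the sketch below trains a scikit-learn Multilayer Perceptron on synthetic band-resampled Rrs spectra to retrieve a PC-like quantity; the data, band count and network size are placeholders and do not reproduce the study's models:

```python
# Hedged sketch: MLP regression of a phycocyanin-like quantity from
# band-resampled Rrs spectra; all data here are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_samples, n_bands = 900, 40                     # ~40 bands mimic a HICO-like resampling
rrs = rng.uniform(0.0005, 0.02, size=(n_samples, n_bands))
log_pc = 2.0 * rrs[:, 25] / rrs[:, 20] + 0.1 * rng.standard_normal(n_samples)  # toy target

x_train, x_test, y_train, y_test = train_test_split(rrs, log_pc, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(x_train, y_train)
print("R^2 on held-out spectra:", model.score(x_test, y_test))
```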
Large and globally representative in situ datasets are critical for the development of globally validated bio-optical algorithms to support comprehensive water quality monitoring and change detection using satellite Earth observation technologies. Such datasets are particularly scarce and geographically fragmented from inland and coastal waters. This is at odds with the importance of these waters for supporting human livelihoods, biodiversity, and cultural and recreational values. These shortcomings create two challenges. The first and major challenge is to collate these datasets and assess their compatibility concerning methodologies used and quality control procedures applied. The second challenge is to identify biases and gaps in the global dataset, in order to better direct future data collection efforts.
Our ongoing effort is to improve the availability of such datasets by providing open access to a large global collection of hyperspectral remote sensing reflectance spectra and concurrently measured Secchi depth, chlorophyll-a (Chla), total suspended solids (TSS), and absorption by colored dissolved organic matter (acdom). This dataset represents an expansion of data originally collated for a collaborative NASA-ESA-led exercise to assess the performance of atmospheric correction processors over inland and coastal waters (ACIX-Aqua). Its suitability for the development of globally applicable algorithms has been demonstrated by its use for developing novel approaches for the retrieval of Chla and TSS concentrations from a range of satellite sensors.
Our dataset contains relevant entries from the commonly used SeaWiFS Bio-optical Archive and Storage System (SeaBASS) and Lake Bio-optical Measurements and Matchup Data for Remote Sensing (LIMNADES) data archives and, in return, contributes thousands of new entries to these and other repositories. It encompasses data from inland and coastal waters distributed across five continents and a comprehensive range of optical water types. Our accompanying biogeographical data analysis contributes to a value-added dataset to aid in the identification of underrepresented geographical locations and optical water types, useful for targeting future data collection efforts.
To ensure the ease of use of this dataset and support the analysis of uncertainties and algorithm development, metadata covering the viewing geometry and environmental conditions were included in addition to hundreds of matched scene IDs for a number of multispectral satellite sensors (e.g. roughly 450 clear-sky match-ups for Landsat 8’s Operational Land Imager (OLI)), making it easier to validate algorithm performance in practical applications.
In curating this dataset, we had to overcome considerable challenges, including technical difficulties, such as variable measurement ranges of instruments, and others due to the fact that the data originated from a community-initiative of multinational researchers working on projects with a diverse range of objectives. Substantial data harmonization efforts to align different instrumentation, field methodologies, and processing routines were needed.
We conclude that our effort was a very worthwhile undertaking, as demonstrated by a series of novel contributions and the publication of eight peer-reviewed research articles (at the time of writing). We expect that open access to this dataset will support the development of increasingly data-intensive algorithms for the retrieval of water quality indicators, including those for next-generation hyperspectral satellite sensors, e.g. sensors from the upcoming Surface Biology and Geology (SBG), Environmental Mapping and Analysis Program (EnMap), PRecursore IperSpettrale della Missione Applicativa (PRISMA) Second Generation (PSG), Copernicus Hyperspectral Imaging Mission for the Environment (CHIME), and FLuorescence EXplorer (FLEX) missions. We believe that this will stimulate the discussion of a framework for the future collection of fiducial reference data towards global representativeness.
The objective of this work is to develop new classifications of optical water types, using remote sensing reflectance (Rrs) measured by satellite as the basis. The Rrs of several lakes with different water types are selected and labelled by an expert, identifying each optical water type (OWT) manually. The study area consists of different reservoirs and lakes on the eastern Iberian Peninsula, and the Rrs are extracted from atmospherically corrected Sentinel-2 MSI imagery.
The OWT classifiers used here are supervised classifiers, since they use prior information given by the user to determine the classes to be detected. In order to classify these data we need atmospherically corrected images, for which the Case 2 Regional Coast Color (C2RCC) algorithm developed by Doerffer et al. (2016) and available in SNAP has been applied. The collection of the reflectance samples for training and testing has also been carried out using the SNAP GUI.
Jupyter Notebooks are in place for the training, testing, application and validation of the models. The classifications generated can help to better understand the seasonal and spatial variations of the studied water masses, providing basic support for the monitoring programmes of lakes and reservoirs. The OWT classifications can be used as final products to analyse changes in water types related to the different water dynamics of the lakes, or they can be considered intermediate products that help in the subsequent selection of the water quality retrieval algorithm (for example, for chlorophyll concentration or total suspended matter) generated and adapted to specific types of water (Eleveld et al., 2017, Stelzer et al. 2020).
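To illustrate the supervised workflow described above, the following minimal sketch trains a classifier on expert-labelled Rrs spectra; the file names, band layout and classifier choice are placeholders for illustration, not the setup used in the actual notebooks.

```python
# Minimal sketch of supervised optical water type (OWT) classification of Rrs
# spectra extracted from atmospherically corrected imagery and labelled by an
# expert. File names, band layout and classifier choice are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical inputs: one row per sampled pixel, one column per MSI band,
# plus an expert-assigned OWT label per row.
rrs = np.loadtxt("rrs_samples.csv", delimiter=",")    # shape (n_samples, n_bands)
labels = np.loadtxt("owt_labels.csv", dtype=str)      # shape (n_samples,)

X_train, X_test, y_train, y_test = train_test_split(
    rrs, labels, test_size=0.3, stratify=labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Evaluate on held-out spectra before applying the model to full scenes.
print(classification_report(y_test, clf.predict(X_test)))
```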
Results of the classifications tested, and the validation of those results, will be analysed, and considerations will be made regarding transfer learning to other lakes in Europe.
Over the last two decades, the primary focus of the development and application of remote sensing algorithms for lake systems was the monitoring and mitigation of eutrophication and the quantification of harmful algae blooms. Oligotrophic and mesotrophic lakes and reservoirs have consequently received far less attention. Yet, these systems constitute 50 – 60% of the global lake and reservoir area, are essential freshwater resources and represent hotspots of biodiversity and endemism.
Uncertainties associated with remote sensing estimates of chlorophyll-a (chla) concentration in oligotrophic and mesotrophic lakes and reservoirs are typically much higher than in productive inland waters. Uncertainty characterisation of a large in situ dataset (53 lakes and reservoirs: 346 observations; chla < 10 mg/l, dataset median 2.5 mg/l) shows that 17 algorithms, either recently developed or already well established, have substantial shortcomings in retrieval accuracy, with logarithmic median absolute percentage differences (MAPD) > 37% and logarithmic mean absolute differences (MAD) > 0.60 mg/l. For most semi-analytical algorithms the chla retrieval uncertainty was mainly determined by phytoplankton absorption and composition. Machine learning chla algorithms showed relatively high sensitivity to light absorption by coloured dissolved organic matter (CDOM) and non-algal particles (NAP). In contrast, the uncertainties of red/near-infrared (NIR) algorithms, which aim for lower uncertainty in the presence of CDOM and NAP, were linked to the total absorption of phytoplankton at 673 nm and variables related to backscatter. Red/NIR algorithms proved to be insensitive to chla concentrations below 5 mg/l.
Bayesian Neural Networks (BNNs) for OLCI and the Sentinel-2 Multispectral Instrument (MSI) were developed as an alternative approach to specifically address the uncertainties associated with chla concentration retrieval in oligotrophic and mesotrophic inland waters (data from > 180 systems, n > 1500). The probabilistic nature of the BNNs allows the uncertainty associated with each chla estimate to be learned. The accuracy of the provided uncertainty interval can be consistently improved when as little as 10% of the training data are set aside as a hold-out set. The BNNs improve the chla retrieval compared with established and frequently used algorithms in terms of performance over the expected training distribution, when applied to independent regions outside those included in the training set, and in the assessment with OLCI and MSI match-ups.
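As a rough illustration of how a probabilistic network can return both a chla estimate and an associated uncertainty, the sketch below uses Monte Carlo dropout, one common approximation to Bayesian neural networks; the architecture, band count and inputs are assumptions for illustration and do not reproduce the BNNs of this study.

```python
# Illustrative sketch only: approximating a Bayesian neural network with
# Monte Carlo dropout to obtain a chla estimate plus an uncertainty spread.
# Architecture, band count and inputs are placeholders, not the study's BNNs.
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    def __init__(self, n_bands, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=100):
    """Keep dropout active at inference time and average repeated forward passes."""
    model.train()  # keeps the Dropout layers stochastic
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)  # predictive mean and spread

# Hypothetical usage with random reflectances standing in for 11 sensor bands:
model = MCDropoutNet(n_bands=11)
chla_mean, chla_std = predict_with_uncertainty(model, torch.rand(5, 11))
```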
Lakes play a crucial role in the global biogeochemical cycles through the transport, storage and transformation of different biogeochemical compounds. Furthermore, their regulatory service appears to be disproportionately important relative to their small areal extent. Global temperatures are expected to increase further over the coming decades, and economic development is driving significant land-use changes in many regions. Therefore, the need for an improved understanding of the interactions between lake biogeochemical properties and catchment characteristics, as well as for innovative approaches and techniques to obtain the required high-quality information at large scales, has never been greater. Unfortunately, only a tiny fraction of lakes on Earth are observed regularly, and data are typically collected at a single point and provide just a snapshot in time. Using remote sensing together with high-frequency buoy measurements is one option to mitigate these spatial and temporal limitations. Until very recently, there have been no suitable satellites to perform lake studies on a global scale. The technical issues that hampered remote sensing of lakes for a long time have been partly solved by the European Space Agency with the launch of Sentinel-2A in 2015 and Sentinel-2B in 2017 (S2). S2 covers the whole world, has very good radiometric resolution and acquires data at 10 m and 20 m resolution, which permits assessment of an unprecedented number of lakes globally. Still, remote sensing products of lakes have rarely been validated, and often with poor results. The main problem is a lack of in situ data, which are needed for validating and improving remote sensing products. Using high-frequency buoy measurements might be the solution, as it increases the probability of obtaining match-up data and thus enables more accurate validation of remote sensing products. Therefore, combining S2 capabilities, high-frequency measurements and conventional sampling data, we firstly aim to estimate the biogeochemical properties (coloured dissolved organic matter, chlorophyll a, total suspended matter, primary production, dissolved organic carbon, total phosphorus and total nitrogen) in optically different European lakes to test which of the biogeochemical properties can be successfully estimated from S2 data. Secondly, combining remote sensing capabilities with the increasing potential of Geographic Information Systems and land cover maps, we aim to study the interactions between lake biogeochemical properties, meteorological factors and catchment characteristics with high accuracy at large scales. The expected results will improve our understanding of the role of lakes in the global biogeochemical cycles and have a strong applied impact, allowing reliable recommendations to be made for decision-makers and lake managers for different ecological, water quality, climate and carbon cycle applications, and significantly improving the cost-efficiency of lake monitoring both regionally and globally.
In lake-rich regions, protecting water quality is critically important because of the ecological and economic importance of recreational activities and tourism. To ensure the health of inland aquatic ecosystems on both a local and regional scale, more comprehensive monitoring techniques to complement conventional field sampling methodologies are needed for effective management. For over 25 years, our previous statewide water quality mapping in Minnesota, USA has primarily relied on Landsat satellites. However, measurements have been limited to water clarity and colored dissolved organic matter (CDOM) due to the inherent Landsat sensor spectral band configurations. The Sentinel-2 Multispectral Instrument (S2/MSI), on the other hand, offers several red-edge bands that improve the accuracy of chlorophyll concentration retrievals. The increased temporal coverage of S2/MSI along with the Landsat-8 Operational Land Imager (L8/OLI) and the recently launched Landsat-9 (L9/OLI-2) enables more frequent monitoring of Earth's inland water bodies and permits routine mapping of water quality parameters.
To utilize these capabilities, we have developed field-validated methods and implemented S2/MSI and L8/OLI image processing techniques in an automated pipeline built in a high-performance computing environment that generates Level-3 (L-3) satellite data products for lake water quality monitoring and management. Machine-to-machine access to ESA Copernicus and U.S. Geological Survey servers allows for the synergistic acquisition of L-1 S2/MSI and L8/OLI imagery to supply the demand for near-real-time data. Newly acquired imagery can be immediately sent through multiple scripted processing modules, which include (1) identifying and omitting potentially contaminated pixels caused by clouds, cloud shadow, atmospheric haze, wildfire smoke and specular reflection, and (2) classification of water pixels through a normalized difference water index (NDWI) to delineate a scene-specific water mask. The combined masks result in qualified pixels, which advance to (3) a modified SWIR-based aerosol atmospheric correction for the retrieval of remote sensing reflectances (Rrs). The atmospheric correction produces a harmonized reflectance product between S2/MSI and L8/OLI pixels from which modeled L-3 water quality data products are derived. Calibrated L-3 water quality models, including water clarity, CDOM, and chlorophyll-a, rely heavily on field-validated datasets to account for the dynamics of the optically complex lake systems of the region. To this end, sampling efforts in the summer months constrain uncertainties between satellite-derived and surface water properties caused by varying atmospheric conditions and calibrate/validate water quality retrieval algorithms to yield verifiable water products. As new field validation data become available at season-end, scripted modules within the processing chain can be modified accordingly and applied to incoming and previously processed imagery if any resulting water quality product models need improvement. Finally, the data can be made available to the public in an online map viewer linked to a spatial database that allows for statistical summaries at different delineations and time windows, temporal analysis and visualization of water quality variables. The Minnesota LakeBrowser (https://lakes.rs.umn.edu/) provides an example of the data being produced through this project. Due to the cloud cover in the Midwest, we determined that monthly open-water (May through October) pixel-level mosaics work best for statewide coverage. Lake-level data are determined for each clear image occurrence and compiled in CSV files that can be used to calculate water quality variables for different timeframes (e.g. monthly, summer (June-Sept)) and linked to a lake polygon layer that can be used for geospatial analysis and included in a web map interface. For Minnesota, the lake-level (2017-2020) data include 603,678 daily lake measurements of chlorophyll, clarity and CDOM (1,811,034 total) and will be updated on a regular basis.
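As an illustration of step (2), a water mask can be derived from a normalized difference water index; the formulation and threshold below follow the generic McFeeters-style NDWI and are not the operational settings of the pipeline.

```python
# Generic sketch of an NDWI-based water mask, assuming co-registered green and
# NIR reflectance arrays have already been extracted from a scene. The
# threshold is illustrative, not the operational value used in the pipeline.
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """McFeeters-style NDWI: (green - NIR) / (green + NIR); water where NDWI > threshold."""
    green = green.astype(np.float32)
    nir = nir.astype(np.float32)
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return ndwi > threshold

# Example with random data standing in for two co-registered bands:
mask = ndwi_water_mask(np.random.rand(512, 512), np.random.rand(512, 512))
```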
This unique data source dramatically improves data-driven resource management decisions and will help inform agencies about evolving water quality conditions statewide. In terms of decision-making, the production of frequent, near-real-time data on water clarity, chlorophyll-a, and CDOM across large regions can enable water quality and fisheries managers to better understand lake ecosystems. The improved understanding will yield societal benefits by helping managers identify the most effective strategies to protect water quality and improve models for increased fisheries production.
The Finnish Environmental Administration has invested in and advanced the utilization of satellite observations to collect environmental data, focusing on water quality. The Copernicus programme, along with NASA's Landsat programme, provides long-term opportunities and perspective for this. Since 2017, the Finnish Environment Institute (SYKE) has been developing the publicly open TARKKA (https://syke.fi/TARKKA/en) web map service, through which users can utilize satellite observations. The TARKKA service focuses on providing water quality material and information for the status assessment of Finnish water bodies. The need for water quality monitoring via EO is heavily motivated by the extensive obligations of the EU directives (WFD*, but also MSFD**), the assessment of the state of the Baltic Sea (HELCOM*** holistic assessment, HOLAS), and the assessment of the impact of water protection measures. In Finland, the obligations set by the EU for WFD reporting concern about 4500 lake water bodies and more than 250 coastal water bodies. As part of SYKE's water quality EO development, a project named CorEO is working on diversifying and enhancing the water quality service based on satellite observations by introducing new analysis methods based on, for example, artificial intelligence. The improvements bring a stronger user orientation and better visualisation to the TARKKA service.
In addition to the open TARKKA service, useful data on Finnish waters are also collected in a database available via the STATUS interface, directed at the authorities responsible for directive reporting. The EO information database covers most of the water areas or bodies covered by the directives (especially the WFD). Although up to 70% of the satellite observations over Finland are partly cloudy, the database accumulates millions of observations from Finnish water areas every year. During the 3rd round of WFD reporting in 2019, the Finnish authorities responsible for the status assessment of lake water bodies utilized EO as one source of information and found it beneficial for meeting the requirements set by the directive. Approximately 40% of Finnish lake water bodies with WFD reporting obligations are included in the STATUS database. Finnish lakes represent a wide range of optically complex waters; many of them are absorption-dominated humic waters that form one extreme of Case II waters. After the reporting, the database has been utilized to provide automated information on water quality and has been linked to various services providing information for citizens and authorities, e.g. Marine Finland (https://www.marinefinland.fi/en-US/The_Baltic_Sea_now).
In the spring of 2020, automatic production of satellite observations was introduced and proved to work fluently during the Covid-19 era; the processing, quality assurance and distribution of satellite observations stayed on schedule. As a side result, the use of the TARKKA web service increased significantly during the spring and summer. Currently, the main challenge in data production is the vast and growing mass of observations, as well as the development of the related archiving and computing capacity to meet the needs of the next ten years. In the coming years, the development work will focus on making the information content of existing services more user oriented. This includes the development of methods that extract, from the vast amount of satellite observations, the relevant water quality information for various parts of the lakes in Finland. One of the first demonstrations of this was a service providing lake-specific information on cyanobacteria blooms for 43 Finnish lake districts in the summer of 2021. For each lake district, the service also provided historical datasets as background information, dating back to the year 2013. From the user's point of view, it is useful to highlight the observations that illustrate the state of areas requiring more attention or intensive monitoring. Recent development enhances the monitoring and surveillance of the state of the lakes based on satellite observations combined with other types of observations, such as station water sampling and automated station observations.
In particular, the development focuses on the visual presentation and communicability of observations. One focus point is the development of automatic detection of sudden and long-term changes and the identification of problem areas that require special attention (including nutrient sources, coastal estuaries and cyanobacteria). Anomaly tracking using artificial intelligence is another focus area of the development. In most cases, the spatial resolutions of the Sentinel-2 MSI and Landsat OLI instruments are sufficient to capture and identify the features relevant to user needs, such as river water impact areas (turbidity and humus interlinked with nutrients), large and medium dredging areas, nuclear power plant condensate temperatures (TIRS instrument), and coastal, lake and offshore algae. These solutions enhance the introduction of data suitable for environmental monitoring in Finland.
*WFD = Water Framework Directive, **MSFD = Marine Strategy Framework Directive, ***HELCOM = Helsinki Commission, i.e. Baltic Marine Environment Protection Commission
Cyanobacteria grow successfully in many waterbodies, causing potentially toxic surface blooms, hampering recreational activities, impeding water usage and causing problems for lake biota. Lake Peipsi is the largest transboundary waterbody in Europe and consists of three parts: Lake Peipsi s.s., Lämmijärv and Lake Pihkva. Naturally occurring cyanobacterial blooms are a characteristic feature of this eutrophic lake, dominated by Gloeotrichia echinulata, Aphanizomenon, Dolichospermum and lately Microcystis with increasing abundance, especially in L. Lämmijärv. Regular national in situ monitoring covers the Estonian side of the lake once per month at a minimum of 7 locations during the vegetation period, but with in situ methods it is difficult to obtain an overview of the bloom dynamics, its onset, the length of the bloom presence and its spatial extent. Remote sensing methods provide complementary information more frequently and allow a better overview of the bloom at the spatial scale. We used Sentinel-3 A and B OLCI FR L1 images with the MCI and regional conversion factors for Chlorophyll a (Chl a) concentration assessment for the period 2016-2021. Chl a values in Peipsi s.s. were generally lower (below 40 µg/L) than in Lake Lämmijärv and Lake Pihkva, where higher values were present (> 75 µg/L and > 100 µg/L, respectively) during 2019-2021.
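For reference, the MCI used here is a baseline-subtracted peak height around 709 nm (Gower et al., 2005); the sketch below shows the computation, with the conversion to Chl a left as a placeholder because the regional conversion factors of this study are not reproduced.

```python
# Sketch of the Maximum Chlorophyll Index (MCI) from three OLCI bands, with a
# purely illustrative linear conversion to Chl a; gain and offset must be
# fitted regionally and are not the values used in the study.
import numpy as np

def mci(l681, l709, l753, wl=(681.25, 708.75, 753.75)):
    """Baseline-subtracted peak height at ~709 nm."""
    w1, w2, w3 = wl
    baseline = l681 + (l753 - l681) * (w2 - w1) / (w3 - w1)
    return l709 - baseline

def chla_from_mci(mci_value, gain=1.0, offset=0.0):
    """Hypothetical regional conversion: Chl a = gain * MCI + offset."""
    return gain * mci_value + offset

# Example with random arrays standing in for the three OLCI radiance bands:
bands = [np.random.rand(100, 100) for _ in range(3)]
chla = chla_from_mci(mci(*bands))  # gain/offset left at placeholder defaults
```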
The threshold for cyanobacterial bloom presence/absence may be difficult to set; for example, according to the World Health Organisation a bloom starts already at 10 µg/L of Chl a, but for Peipsi this is not suitable, since the majority of values measured during the vegetation period are higher. As a lake-specific solution, the presence of cyanobacterial blooms was assessed by taking the lake-part-specific long-term median Chl a from historical in situ records (1984-2015) for the period June to September, plus 5%. Cyanobacterial bloom duration and extent differed between lake parts and between years. Blooms generally started earlier in Peipsi s.s. than in the other lake parts, and bloom duration was longest there, lasting > 100 days with a maximal coverage of 68±19% of the total lake area. Cyanobacterial concentration was higher in Lämmijärv; during the maximum extent of the bloom, Lämmijärv was nearly entirely covered by cyanobacteria, with the exception of 2018, when coverage remained below 76%. In 2018 bloom coverage was also lowest in L. Pihkva (< 30%). In general, the bloom duration in L. Pihkva was similar to or shorter than in Lämmijärv, but with higher cyanobacterial biomass.
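The bloom criterion described above can be expressed compactly as in the sketch below, interpreting "plus 5%" as a 5% increase over the lake-part median; the median value in the example is hypothetical.

```python
# Sketch of the lake-part-specific bloom criterion: a pixel is flagged as
# bloom-affected when Chl a exceeds the long-term (1984-2015) June-September
# median of that lake part increased by 5%; coverage is the flagged fraction
# of valid lake pixels. Array and median values are illustrative.
import numpy as np

def bloom_mask(chla_map, historical_median):
    threshold = historical_median * 1.05
    return chla_map > threshold

def bloom_coverage_percent(chla_map, historical_median):
    valid = np.isfinite(chla_map)
    flagged = bloom_mask(chla_map, historical_median) & valid
    return 100.0 * flagged.sum() / max(valid.sum(), 1)

# Example for one lake part with a hypothetical historical median of 30 µg/L:
coverage = bloom_coverage_percent(np.random.rand(200, 200) * 80, historical_median=30.0)
```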
Atmospheric correction over inland and coastal waters is one of the major remaining challenges in aquatic remote sensing, often hindering the quantitative retrieval of biogeochemical variables and analysis of their spatial and temporal variability within aquatic environments. The Atmospheric Correction Intercomparison Exercise (ACIX-Aqua), a joint NASA-ESA activity, was initiated to enable a thorough evaluation of eight state-of-the-art atmospheric correction (AC) processors available for Landsat-8 and Sentinel-2 data processing. Over 1000 radiometric matchups from both freshwaters (rivers, lakes, reservoirs) and coastal waters were utilized to examine the quality of derived aquatic reflectances (ρ̂_w). This dataset originated from two sources: data gathered from the international scientific community (henceforth called the Community Validation Database, CVD), which captured predominantly inland water observations, and the Ocean Color component of AERONET measurements (AERONET-OC), representing primarily coastal ocean environments. The volume of our data permitted the evaluation of the AC processors individually (using all the matchups) and comparatively (across seven different Optical Water Types, OWTs) using common matchups. We found that the performance of the AC processors differed for CVD and AERONET-OC matchups, likely reflecting inherent variability in aquatic and atmospheric properties between the two datasets. For the former, the median errors in ρ̂_w(560) and ρ̂_w(664) were found to range from 20 to 30% for the best-performing processors. Using the AERONET-OC matchups, our performance assessments showed that median errors within the 15-30% range in these spectral bands may be achieved. The largest uncertainties were associated with the blue bands (25 to 60%) for the best-performing processors, considering both CVD and AERONET-OC assessments. We further assessed uncertainty propagation to downstream products such as the near-surface concentration of chlorophyll-a (Chla) and Total Suspended Solids (TSS). Using satellite matchups from the CVD along with in situ Chla and TSS, we found that 20-30% uncertainties in ρ̂_w(490 ≤ λ ≤ 743 nm) yielded 25-70% uncertainties in derived Chla and TSS products for the top-performing AC processors. We summarize our results using performance matrices guiding the satellite user community through the OWT-specific relative performance of the AC processors. Our analysis stresses the need for better representation of aerosols, especially absorbing ones, and for improvements in corrections for sky- (or sun-) glint and adjacency effects, in order to achieve higher quality downstream products in freshwater and coastal ecosystems.
AIM - INTRODUCTION
Monitoring water quality is valuable since the changes that may occur in water bodies have severe socio-economic and environmental impacts. Such an influence is evident in Timsah Lake, the largest water body of the Ismailia district in Egypt, which is the focus of this research. The main aim of this research is to estimate the changes in the water quality of the area during the period 2014-2020. Timsah Lake has been subjected to significant environmental pressures caused by various anthropogenic activities in Ismailia city. From satellite observations in the optical part of the spectrum, we can retrieve the concentrations of different constituents (pure water, chlorophyll, sediments, coloured dissolved organic matter), and we can also use the satellite data to detect changes in the zone surrounding the water bodies.
Within the framework of increasing world trade, increases in the size of ships, and the need of the Egyptian economy to develop its resources, it was imperative to expand the existing Suez Canal (SC) to cope with increasing future world trade (EEAA, 2014). A new canal, parallel to the existing one, was implemented on the 5th of August 2014. It is suspected that water quality changes may have arisen due to the construction of the New Suez Canal (NSC). Timsah Lake has a strategic location on the Suez Canal, the main route joining Africa with Asia and Europe, and hosts a range of human activities: navigation, as a pathway for trading ships to and from other countries; fishing, which provides a vital source of food and income for the local population; and tourism.
DATA - METHODOLOGY
In order to achieve the goal of this study, free satellite images from the Landsat 8 OLI and Sentinel-2 satellites have been exploited. Specifically, for Landsat 8 the Level-1 and Level-2 scenes were obtained from the United States Geological Survey (https://earthexplorer.usgs.gov). Landsat 8 carries the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS) instruments.
In addition, Sentinel-2 is a European wide-swath, high-resolution, optical multi-spectral imaging mission. The full mission consists of twin satellites flying in the same orbit but phased at 180°, is designed to give a short revisit time of 5 days, and has 13 spectral bands (VIS, NIR, SWIR) (Drusch et al., 2012). The Multispectral Instrument (MSI) Level-1C (L1C) Sentinel-2 scenes are freely available from the Copernicus Open Access Hub (https://scihub.copernicus.eu). Finally, the processing of the data has been carried out with ESA's free and open SNAP and Harris Geospatial Solutions' ENVI, and the final maps have been exported with ESRI's ArcGIS software.
As far as the methodologies are concerned, Principal Component Analysis (PCA) and the Case 2 Regional Coast Colour (C2RCC) algorithm were applied for the purpose of monitoring the physical properties of different water characteristics, encompassing pure water, chlorophyll-a, sediments, and Total Suspended Matter (TSM). In order to detect changes in the area of Ismailia city, including the area proximate to the Suez Canal, Landsat 8 images have been used and PCA has been applied. PCA transforms an original correlated dataset into a substantially smaller set of uncorrelated variables that represent most of the information present in the original dataset (Richards 1994; Jensen 2005). For the generation of the image components, we applied PCA to the visible and near-infrared bands of the Landsat 8 L2 products dated 22/08/2014 and 06/08/2020, in total four bands from each date. The correlated variables (original bands) are transformed into uncorrelated variables (principal component images), which contain the maximum of the original information, with a physical meaning that needs to be explored. It has also been shown that the first three principal components may contain more than 90 percent of the information in the original seven bands.
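A minimal sketch of this PCA-based change detection, assuming the four visible/NIR bands of each date have already been read into co-registered arrays, is given below; array shapes and data handling are illustrative, not the actual scene dimensions.

```python
# Sketch of PCA-based change detection on a stacked two-date band set: eight
# co-registered bands (four per date) are flattened to a pixel-by-band matrix
# and transformed; a component with opposite-sign loadings on the two dates
# acts as a change (difference) image. Random arrays stand in for real bands.
import numpy as np
from sklearn.decomposition import PCA

bands = [np.random.rand(400, 400) for _ in range(8)]   # 4 bands per date
stack = np.stack(bands, axis=-1).reshape(-1, 8)        # pixels x bands

pca = PCA(n_components=8)
scores = pca.fit_transform(stack)

# Explained variance shows how much information each component carries.
print(pca.explained_variance_ratio_)

# Reshape a component of interest (e.g. PC2) back into image form.
pc2_image = scores[:, 1].reshape(400, 400)
```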
Concerning C2RCC, the objective of this algorithm is to determine the optical properties and the concentrations of constituents of Case 2 waters. Case 2 water is defined as a natural water body which contains more than one water constituent determining the variability of the spectrum of the water-leaving radiance, and it is found in coastal seas, estuaries, lagoons and inland waters (Morel & Prieur, 1977; Gitelson et al. 2007). For this study, 7 Sentinel-2 A & B Level-1C products, acquired during August of each year from 2015 to 2020, were used. Also, Landsat 8 Level-1 products for August 2014 and 2020 have been processed using C2RCC. The processing of the Sentinel-2 and Landsat 8 images can be divided into two steps. The first is the pre-processing, which includes resampling (in this case to 40 m/pixel) so that all bands have the same spatial resolution, followed by subsetting the image over the study area. The second is the application of the C2RCC algorithm in order to retrieve the amount of suspended matter and chlorophyll-a in the water and also perform the atmospheric correction. The final product is a thematic map of concentration expressed in g m-3 (grams per cubic metre). Finally, the data were exported in GeoTIFF format and then imported into ArcGIS software, where the final maps were produced.
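The two pre-processing steps and the C2RCC application can, for example, be scripted with the SNAP Python interface as sketched below; operator and parameter names, the input file name, the example footprint and the package name (snappy vs. esa_snappy in newer SNAP versions) are assumptions to be verified against the installed SNAP version, and this is not a reproduction of the exact processing chain used in this study.

```python
# Illustrative sketch of the chain (resample to 40 m, subset to the study
# area, apply C2RCC) using the SNAP Python interface. Verify operator and
# parameter names against the installed SNAP version; the polygon is only a
# rough placeholder around Timsah Lake.
from snappy import ProductIO, GPF, jpy

HashMap = jpy.get_type('java.util.HashMap')
GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()

product = ProductIO.readProduct('S2A_MSIL1C_example.SAFE/MTD_MSIL1C.xml')

# Step 1a: resample all bands to a common 40 m grid.
resample_params = HashMap()
resample_params.put('targetResolution', '40')
resampled = GPF.createProduct('Resample', resample_params, product)

# Step 1b: subset to the study area (placeholder WKT footprint).
subset_params = HashMap()
subset_params.put('geoRegion',
                  'POLYGON((32.25 30.52, 32.40 30.52, 32.40 30.62, 32.25 30.62, 32.25 30.52))')
subset = GPF.createProduct('Subset', subset_params, resampled)

# Step 2: apply C2RCC (atmospheric correction + constituent retrieval) and export.
c2rcc = GPF.createProduct('c2rcc.msi', HashMap(), subset)
ProductIO.writeProduct(c2rcc, 'timsah_c2rcc_output', 'GeoTIFF')
```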
RESULTS - DISCUSSION
The results of the present work indicate that the original hypothesis of this research, which presumed that the main source of pollution was the NSC, is supported. The obtained values of the different water characteristics indicate that the Western Lagoon (located to the west of Timsah Lake and connected to it through one inlet on the western side) and its emerging streams (Abu Atwa drain) can be considered contamination starting points. Generally, the TSM and Chlorophyll-a results from Sentinel-2 data indicate that during August 2015 there were lower values of TSM and Chlorophyll-a, ranging between 4 and 17 g m-3 and between 2 and 11 g m-3, respectively, while the highest values appear during August 2018 for TSM (around 23 to 50 g m-3) and during August 2016 and 2018 for Chlorophyll-a (around 20 to 40 g m-3). Specifically, high levels of TSM are evident and concentrated in the Western Lagoon. These values are at their peak especially during August 2018, 2019 and 2020. Regarding the Landsat-8 results, the TSM in August 2014 is lower than in the following periods up to 2020, where higher values are attributed mainly to the Western Lagoon and the inlet of Timsah Lake. For Chlorophyll-a, the highest value is recorded in 2014, concentrated in the NSC and the Western Lagoon.
Concerning the results of the PCA, the component images suitable for perceiving the possible spatial changes were selected from the analysis of eigenvalues and eigenvectors, in combination with the interpretation of the principal components. The first component (PC1) corresponds to the brightness image (information concerning topography and albedo) and contains 78.35% of the information. The second component (PC2) captures the spectral information related to the transformations that took place during the period 2014 to 2020 and contains 11.12% of the information; it is effectively a difference image between the two dates, resulting from the negative contribution of the original spectral bands of the first date (2014) and the positive contribution of the original spectral bands of the second date (2020). The last PC images contain a small amount of information of limited relevance to other applications, and "noise" (Psomiadis et al. 2005).
To conclude, this study demonstrates a correlation between the results and the overall change in the area, as human activity and technological development are increasing. Contamination of aquatic and wetland environments is a common situation resulting from changes in the surrounding area, such as the tourism, agriculture, hotel and leisure infrastructures which have been built over the last years. As confirmed by El-Serehy et al. (2018) and Abd El-Azim et al. (2018), there are three major sources responsible for water quality changes in Timsah Lake (the area of interest): agricultural drainage, anthropogenic activities, and untreated domestic and industrial waste discharges. Due to the lack of in situ data, the results cannot thoroughly confirm this hypothesis.
In line with the main goal, the recorded values of TSM have shown that in past years the connections between the Western Lagoon, the Abu Atwa Drain, the Ismailia Canal and Timsah Lake appear to be the contamination sources that degrade the quality of the water. This might reveal that Lake Timsah is a highly eutrophic lake, as was also pointed out by Mehanna et al. (2016). In future work, we would like to add in situ data to test our hypothesis.
Global warming affects ecosystems worldwide. Among other effects, climate change can trigger shifts in lake ecosystems. Increasing trends in lake water temperature have been reported both from single case studies and at the global scale. Increasing water temperature intensifies the thermal stratification of deep lakes, reducing the intensity of vertical mixing. Warming will thus likely alter the mixing regime of lakes substantially this century, as suggested by recent lake model simulations. This transition potentially leads to abrupt shifts in lake ecosystems globally.
A reduction in mixing intensity and frequency can have severe implications for the entire lake ecosystem. For example, reduced deep-water renewal hinders the vertical transport of oxygen from the epilimnion to the hypolimnion and can increase the extent and duration of seasonal hypoxia (low oxygen). Conversely, stratification suppresses nutrient resupply from the deep water to the surface layer. Both affect lake primary productivity and the entire food web. Records suitable for characterizing mixing regime anomalies and ecosystem shifts or their underlying mechanisms are scarce because they require long and dense time series, requirements often not met by traditional monitoring records.
We review and synthesize information on the detection of regime shifts in lakes worldwide. We identify three main sources of data that can be used to detect lake ecosystem shifts: sediment coring, high-frequency in-situ measurements, and remote sensing. Remote sensing data started to be used for this purpose at a later stage, but its ability to monitor several lakes at the same time allows a wider range of lakes to be studied. Our synthesis of the literature on more than 700 studies of lake regime shifts shows that, to date, remotely sensed time series of lake surface water temperature (LSWT) have been based mainly on spatial averages, neglecting the spatial dimension of global LSWT products. However, the horizontal gradients could support a better understanding of the internal processes of lakes and the identification of lake mixing or ecosystem anomalies.
Seasonal overturning often occurs at different times across the lake. Thus, the spatial character of remotely sensed data can reveal important processes in freshwater systems and can help assess the long-term variability in the overturning behavior of large lakes in the context of climate change. However, limnologists have so far not extensively explored the spatially distributed character of remotely sensed data. We aim at developing a methodology to detect anomalies or shifts of lake ecosystems by using the spatial patterns of remotely sensed lake water properties (LSWT and ecological variables like turbidity and chlorophyll), and link such patterns to documented anomalies or shifts of lake ecosystems. Here, we exploit the CCI Lakes database from the standpoint of a limnologist, and with an advanced understanding of thermal forcing and ecosystem responses in lakes.
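One simple way to move beyond lake-wide averages and exploit the spatial dimension of LSWT products is to work with per-pixel anomalies relative to the lake mean at each time step, as in the sketch below; the array is a synthetic placeholder standing in for an LSWT cube such as those in the CCI Lakes database.

```python
# Sketch of spatially resolved LSWT anomalies: for each time step, subtract
# the lake-wide mean so that horizontal gradients (e.g. between basins)
# become visible. Array names and dimensions are illustrative placeholders.
import numpy as np

def spatial_lswt_anomaly(lswt_cube):
    """lswt_cube: (time, rows, cols), with NaN outside the lake mask."""
    lake_mean = np.nanmean(lswt_cube, axis=(1, 2), keepdims=True)  # one value per time step
    return lswt_cube - lake_mean

# Example with a synthetic 3-year monthly cube:
cube = np.random.rand(36, 120, 180) * 20.0
anomalies = spatial_lswt_anomaly(cube)
```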
High-mountain lakes are among the ecosystems most vulnerable to climate change, particularly in Mediterranean climates. The Mediterranean region is suffering from an exacerbated rate of climate change compared to global trends and is considered a climate change hotspot. The characteristic summer droughts of the Mediterranean climate are being intensified due to the increase in the mean annual air temperature, especially during the summer months, and the decrease in annual rainfall. As a whole, this may produce a decline in snow accumulation in these regions and an earlier melting of the snow, which could eventually affect the hydrology of the ecosystems, among other processes. Moreover, high-mountain ecosystems are experiencing elevation-dependent warming, whereby the warming rate is amplified with altitude.
Sierra Nevada National Park (Granada, Spain) is the southernmost mountain range in Europe and constitutes a biodiversity hotspot. In Sierra Nevada there are around 50 small glacially-formed lakes at an elevation of 2800-3100 m above sea level. They are small (surface ranges between 0.01-2.1 ha) and shallow (depth ranges between 0.3-8 m) oligo- to meso-oligotrophic lakes. Because of the high sensitivity of remote lakes to environmental changes, they are considered as excellent sentinels of climate change.
These lakes are hardly accessible. Hence, continuous and regular monitoring of these water bodies is difficult, but essential for their correct management. In this study we focus mainly on chlorophyll-a (chl-a), since it is a key indicator of phytoplankton biomass and water quality. Remote sensing techniques represent an alternative to field sampling campaigns, which are not always possible or not as frequent as desirable. One of the satellites with the greatest potential is Sentinel-2 with its Multispectral Instrument (MSI), due to its high spatio-temporal resolution. Its spatial resolution of up to 10 m may allow the analysis of small waterbodies, and its revisit time of five days might allow the characterisation of temporal dynamics. However, a lack of data for lakes as small as ours has been noted, despite such lakes being very common ecosystems.
Hence, the aims of this work are (a) to explore the potential of remote sensing techniques using Sentinel-2 imagery to estimate water quality parameters, mainly chlorophyll, in shallow and small high-mountain lakes at a regional scale, and (b) to develop a chl-a estimation model as a tool for eutrophication monitoring, since eutrophication may increase as a consequence of climate change. This may, in turn, allow the characterization of the lakes in terms of susceptibility to eutrophication, since each lake is affected differently by variables such as livestock pressure, tourism, Saharan dust deposition, lake morphometry and watershed features. Finally, (c) this model is intended to be used by the managers of the Sierra Nevada National Park to take the necessary measures to maintain a good ecological status of the lakes.
Achieving our objectives would represent a major breakthrough since until now Sentinel-2 imagery has only been used for this purpose on lakes much larger and deeper than the small high-mountain lakes of Sierra Nevada. This work might represent a baseline for further remote studies of similar ecosystems.
The Sentinel-2 images were obtained and processed through Google Earth Engine (GEE). A first approach was made to select lakes with pure water pixels. Seven lakes met this requirement: Río Seco Lake, Yeguas Lake, Caldera Lake, Larga Lake, Mosca Lake, Vacares Lake and Caballo Lake.
Field sampling campaigns were conducted during the ice-free periods of 2020 and 2021, obtaining 8 and 40 samples, respectively. An optimal time gap of ±3 days and a maximum of ±5 days were established between the in-situ measurements and the satellite overpass. In each lake an integrated 1.2 m water sample was collected at a point where the adjacency and bottom effects were minimized. The samples were stored in dark conditions until arrival at the laboratory, where the chl-a concentration, colored dissolved organic matter (CDOM) and total suspended solids (TSS) were analyzed. Chl-a and CDOM were determined through filtration of the water samples using pre-combusted Whatman GF/F filters. Chl-a concentration was assessed by pigment extraction from the filter using ethanol and analysed spectrophotometrically. CDOM was determined spectrophotometrically from the filtered water. Finally, TSS was determined from the pre- and post-filtering weights of the Whatman GF/F filters.
Around 1500 papers relating chl-a and Sentinel-2, published up to October 2021, were reviewed. It is worth noting that none of them was conducted specifically in high-mountain lakes. We selected and tested on Sierra Nevada the potential models that had already shown good performance in oligo- and mesotrophic waters like ours. Traditional empirical, semi-analytical and novel machine learning models were tested. The use of machine learning, a type of artificial intelligence, is increasing in several scientific branches and is in constant evolution. Hence, it represents a novel approach to chl-a retrieval, and an increasing number of papers are showing its high performance for this purpose. According to the literature, the chl-a models that have performed best in waters with characteristics similar to ours use the red band (665 nm), red edge band 1 (705 nm) and red edge band 2 (740 nm). Some of these models are 2BDA (Moses, 2009), 3BDA (Gitelson, 2009), MCI (Gower, 2005) and Toming (2016), among others. However, the FLH (Fluorescence Line Height) model (Buma and Lee, 2020) has shown high performance in similar waters and uses the blue band (490 nm), green band (560 nm) and red band (665 nm). Finally, different atmospheric correction algorithms previously published in the literature were tested in combination with the chl-a models. Of the atmospheric correction algorithms cited in the bibliography, those that perform best in clear waters like ours are Polymer and iCOR. The latter is the only one that includes a correction for the adjacency effect, which is almost unavoidable in our study area. By contrast, the C2RCC algorithm has been shown to fail in the presence of adjacency effects.
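As an example of the simpler band-ratio family mentioned above, the sketch below computes the two-band index Rrs(705)/Rrs(665) and calibrates it against in situ chl-a with a linear fit; the calibration pairs and resulting coefficients are placeholders, not results obtained for Sierra Nevada.

```python
# Sketch of a two-band red/red-edge chl-a model (ratio of Rrs at ~705 nm to
# Rrs at ~665 nm) calibrated with a linear fit against in situ chl-a.
# Calibration data and coefficients are illustrative placeholders only.
import numpy as np

def two_band_index(rrs_665, rrs_705):
    return rrs_705 / np.clip(rrs_665, 1e-6, None)

# Hypothetical matched pairs: band-ratio index vs. in situ chl-a (mg/m3).
index = np.array([0.8, 0.9, 1.0, 1.1, 1.3])
chla_insitu = np.array([1.2, 1.8, 2.5, 3.4, 5.0])
slope, intercept = np.polyfit(index, chla_insitu, 1)

def chla_two_band(rrs_665, rrs_705):
    """Apply the fitted linear relationship to new reflectance pairs."""
    return slope * two_band_index(rrs_665, rrs_705) + intercept
```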
Taking into account the limited chl-a data collected in situ during 2020, the model that performed best in our study area was FLH, with a determination coefficient R2 = 0.59. By introducing the data collected during the 2021 field-sampling campaign and increasing the number of experimental replications as well as the number of sampled lakes, we expect to obtain a more accurate model for our study area. Moreover, more pre-existing models developed during 2021 will be tested and different atmospheric corrections will be introduced.
Monitoring of surface water quality is regulated by many national and European regulations and is an important aspect of protecting aquatic ecosystems, achieving sustainable development goals, and supporting human well-being. Classical monitoring strategies target in situ monitoring and are time-consuming, cost-intensive and require well-trained personnel and sophisticated analytical labs. Remote sensing techniques can support and extend these monitoring efforts without dramatically raising costs and efforts. Indeed, many research projects have convincingly documented that remote sensing is able to assess important water quality variables such as turbidity, transparency, chlorophyll, humic substances, water temperature or cyanobacteria. In this context it is astonishing to note that remote sensing does not play a bigger role in governmental monitoring programmes and is hardly used by water managers, state authorities or communes. Why do these institutions not make more use of the multiple opportunities provided by satellite observations and exploit data-providing infrastructures such as the Copernicus Services, EO browsers or institutional facilities in their governmental tasks?
In this talk we want to identify, explore, and discuss a multitude of reasons that may explain the discrepancy between the rich potential of remote sensing techniques for detecting inland water quality and their limited public utilization. We found a mixture of reasons that often act in concert and relate, for example, to a lacking legislative framing, the unknown transferability of methods between different kinds of water bodies, missing training and competences among authority staff, a lack of harmonisation among states and countries, or the interpretation of remotely sensed data given their more complex data structures. We gained insights into these limitations from communications with German water authorities and European institutions as well as intense discussions with water experts.
We conclude by proposing a structured approach that helps authorities to use remote sensing products in their daily business. This approach includes a sound scientific basis of remote sensing products and harmonized procedures to implement remote sensing into governmental practices. We further report our experiences from co-developing this approach together with state agencies, reservoir authorities and other water-related institutions. This work is embedded in the Copernicus Programme via a research project funded by the German Federal Ministry of Transport and Digital Infrastructure. For further information refer to the webpage of the BIGFE-Project (https://www.ufz.de/bigfe/index.php?de=48596).
AquaWatch Australia intends to integrate Earth Observation (EO) and in situ sensors through an Internet of Things (IoT) connectivity to monitor and predict inland and coastal water quality and habitat condition for a wide range of uses in Australia and across the globe. AquaWatch Australia is designed to measure key aquatic environmental and biogeochemical variables required to understand processes affecting the quality of aquatic ecosystems to provide early warning of extreme events, accurate information on recovering or threatened ecosystems, and to help predict and manage water quality threats by empowering timely management decisions.
AquaWatch Australia is co-led by CSIRO (Australia’s national science agency), the SmartSat Cooperative Research Centre (SmartSat CRC), and other national and international partners, leveraging their longstanding expertise in EO, in situ sensing and modelling of water quality. After delivering an initial concept study (Phase-0), AquaWatch Australia has entered Phase A, including the consolidation of its end-user requirements. The plan is to translate these into systems requirements before entering a production phase, provided sustainable government funding is secured for the next development phases.
AquaWatch Australia was inspired by and follows many recommendations from the CEOS (2018) and IOCCG (2018) reports on Earth observation of aquatic ecosystems. The United Nations Sustainable Development Goals (UNSDGs) framework, through its Goal 6 ("Ensure availability and sustainable management of water and sanitation for all"), explicitly highlights the urgent necessity of securing better global access to clean water and its efficient use. Management of freshwater resources is a critical global issue and vital for landscapes, ecosystem functioning, biodiversity, agriculture, and communities.
One of the key proposals of AquaWatch Australia is to develop a sovereign Australian capability to build, launch and operate a constellation of satellites optimised for monitoring aquatic systems. With the exception of the coarse-spatial-resolution GLIMR (a geostationary satellite positioned over the Americas) and PACE (for ocean-coastal to large inland water systems), most finer-spatial-resolution hyperspectral satellite missions, including EnMAP, DESIS, PRISMA, SBG, and CHIME, have primarily been designed to monitor the land. The sensors and satellites built for AquaWatch Australia will overcome some of the limitations of these sensor systems in terms of the necessary spectral bands, spatial resolution and, if possible, revisit times, which restrict the range of water quality and benthic parameters that can be measured from space as well as the number of waterbodies from which these parameters can be measured. The AquaWatch Australia system is designed to be highly adaptable and can use existing satellite datasets such as Sentinel-2, Landsat 8 and 9, as well as all relevant planned missions, to enhance its final output. Combining data from these satellite sensors with the aquatic ecosystem-specific data from AquaWatch could provide opportunities for higher resolution water quality products, using data fusion or blending approaches as required by end-users.
AquaWatch Australia is currently establishing strategic partnerships and has invested in establishing a network of national and international pilot sites. Engaging with water quality researchers and end users across different regions in the world will enable the mission to expand the range of water quality conditions that can be accurately measured and predicted, demonstrate locally-applicable solutions and diversify use cases, and develop local capacity in EO monitoring and prediction to support national and global sustainability agendas. AquaWatch Australia is also intended as Australia's contribution to GEO-AquaWatch.
Phytoplankton and its most common pigment, chlorophyll a (Chl a), are important parameters in characterizing lake ecosystems. We compared six methods to detect Chl a in two optically different lakes in the boreal region: the stratified clear-water Lake Saadjärv and the non-stratified turbid Lake Võrtsjärv. Chl a was measured: in vitro with a spectrophotometer and high-performance liquid chromatography; in situ with automated high-frequency measuring (AHFM) buoys as fluorescence and with a high-frequency optical hyperspectral above-water radiometer (WISPStation); and with various algorithms applied to data from the satellites Sentinel-3 OLCI and Sentinel-2 MSI.
The agreement between the methods ranged from weak (R2 = 0.1) to strong (R2 = 0.96), and consistency was better in the turbid lake than in the clear-water lake, where the vertical and temporal variability of Chl a was larger. The agreement between the methods depends on multiple factors. The radiometric measurements are highly dependent on the environmental and illumination conditions, resulting in higher variability in the recorded signal towards autumn. The effect of the non-photochemical quenching (NPQ) correction increases with increasing PAR and is also highly dependent on the underwater light level, which resulted in up to a 15% change in the chlorophyll fluorescence under more turbid conditions compared to 81% in the clear-water Lake Saadjärv. Additionally, the calibration datasets and the correction methods required to account for the variability in phytoplankton amount and composition, together with the background turbidity, also affected the consistency of the final Chl a estimation.
Synergistic use of data from various sources allows a comprehensive overview of a lake to be obtained on horizontal and vertical scales, but prior to merging the data, the method-based factors should be accounted for. These factors can have a high impact on the results and lead to poor management decisions when switching approaches to analyse Chl a patterns, e.g. when extending time series for estimating the status of a water body based on Chl a according to the EU Water Framework Directive.
Water quality remote sensing is increasingly used in an operational context, and several studies in particular for perialpine lakes showed how hydrodynamic modeling can greatly improve the utility of remotely sensed products. Conversely, remotely sensed products can help to improve the performance of hydrodynamic models as a source of dynamic input data, by means of data assimilation, or for validation. With such an interdisciplinary integration of Earth observation techniques, we can take advantage of the forecasting capabilities of data-driven hydrodynamic lake modeling and the synoptic coverage, as well as a regular sampling of high-resolution satellite imagery, i.e., from Sentinel-2.
A first, operational framework that partially established the integrated usage of Earth observation data for Lake Geneva resulted from the ESA project CORESIM (www.meteolakes.ch). As part of the ESA Regional Initiative for the Alpine Region, the project AlpLakes aims to extend this framework functionally and spatially. The two main objectives of AlpLakes are to integrate Sentinel-2 transparency products in hydrodynamic models in order to improve their performance, and to update the models with a particle tracking module for validation with Total Suspended Matter (TSM) estimates from Sentinel-2 data. Ultimately, the project aims at understanding the short- and long-term evolution of the dynamics of freshwater systems with a particular focus on altitudinal and latitudinal gradients. For this purpose, we selected eleven lakes north and south of the Alps as test sites, covering a wide range of morphological and hydrological features, trophic status, and climatic conditions.
We use Sentinel-2 products to derive information on light penetration and turbidity at high temporal and spatial resolution. Our workflow is based on remote sensing image processing, field data acquisition, model setup and calibration via data assimilation, and real-time operational model publication on an open-access web-based platform. Sentinel-2 Secchi depth products obtained with state-of-the-art algorithms (e.g., QAA) will be validated with monitoring data. Dedicated field campaigns will be conducted to improve performance by means of generalized inherent optical properties for lakes in the Alpine region. Such products are crucial to constrain and improve the hydro-thermodynamic models, as transparency information is used in the heat flux models to parameterize the distribution of incoming solar radiation in the water column and hence to correctly reproduce the lake thermal structure.
Similarly, existing algorithms for TSM retrieval will be tested and optimized for the use case of Sentinel-2 and the Alpine region. The resulting TSM maps are used to validate the simulated flow field and understand the transport dynamics in the lakes. To this aim, we use a Lagrangian particle tracking module coupled with the three-dimensional hydrodynamic model. Spatial patterns identified in Sentinel-2 images will serve as a proxy for the particle tracking seeding area and particle concentrations. This allows tracking the evolution of spatial structures detected in a Sentinel-2 image as they are driven by turbulence and mixing processes in the lake. The accuracy of this method will be assessed by comparing the predicted evolution of the particle paths with the succeeding Sentinel-2 TSM products.
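Conceptually, the seeding-and-advection step can be reduced to the toy sketch below, where particles are released from a thresholded TSM map and advected with simple Euler steps through a synthetic velocity field; the project itself relies on the particle tracking module coupled to the 3D hydrodynamic model, so everything below is illustrative only.

```python
# Toy sketch of Lagrangian particle advection seeded from a TSM pattern:
# particles are released where TSM exceeds a threshold and advected forward
# with Euler steps through a (here uniform, synthetic) surface velocity field.
import numpy as np

def advect(particles, u, v, dt, dx):
    """particles: (n, 2) array of (row, col); u, v: velocity grids [m/s]; dx: grid spacing [m]."""
    rows = np.clip(particles[:, 0].astype(int), 0, u.shape[0] - 1)
    cols = np.clip(particles[:, 1].astype(int), 0, u.shape[1] - 1)
    moved = particles.copy()
    moved[:, 0] += v[rows, cols] * dt / dx   # row displacement
    moved[:, 1] += u[rows, cols] * dt / dx   # column displacement
    return moved

# Seed from a synthetic TSM map and advect for one hour in 60 s steps:
tsm = np.random.rand(200, 200) * 10.0
seeds = np.argwhere(tsm > 9.0).astype(float)
u = np.full_like(tsm, 0.05)   # eastward 5 cm/s
v = np.full_like(tsm, 0.0)
for _ in range(60):
    seeds = advect(seeds, u, v, dt=60.0, dx=10.0)
```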
For dissemination of and user interaction with the combined Sentinel-2 products and hydrodynamic simulations, we will provide hindcasting, real-time, and forecasting functionalities in a web-based platform built on Datalakes (https://www.datalakes-eawag.ch/). This will allow open access to all results, provide a common tool for scientists, decision makers and the broader public, and improve both the management of lakes in the Alpine region and the public perception of environmental processes in their immediate living space.
Water quality is a key worldwide issue relevant to human consumption, food production, industry, nature and recreation. In fact, monitoring and maintaining good water quality are pivotal to fulfilling the UN Sustainable Development Goals and enshrined in European policy through the Water Framework Directive (WFD) and the Marine Strategy Framework Directive (MSFD). Inland, transitional and coastal waters are increasingly threatened by anthropogenic pressures including climate change, land use change, pollution and eutrophication, some of which remote sensing can provide useful and continuous monitoring data and diagnostic tools for.
The European Copernicus programme includes satellite sensors designed to observe water quality and serves data and information to end-users in industry, policy, monitoring agencies and science. Three Copernicus services, namely Copernicus Marine, Copernicus Climate Change and Copernicus Land, provide satellite-based water quality information on phytoplankton, coloured dissolved organic matter, and other bio-optical properties in oceanic, shelf and lake waters. Though the transitional waters are partly covered by CMEMS coastal service, the approaches are distinct in the different services.
Responding to global needs, the H2020 Copernicus Evolution: Research for harmonised and Transitional water Observation (CERTO) project (https://www.certo-project.org/) is undertaking research and development to produce harmonised and consistent water quality data suitable for integration into each of these Copernicus services, and, thus, extend support to the large communities operating in transitional waters such as lagoons, estuaries and large rivers. This integration is facilitated by the development of the CERTO prototype, a Software-as-a-Service (SaaS) that contains modules on improved optical water classification, improved land-sea interface and atmospheric correction algorithms, and a set of selected indicators. The development of suitable indicators that respond to user needs is of utmost importance to demonstrate the added value of the CERTO upstream service to potential users and stakeholders in the downstream service domain. By providing a harmonised capability across the Copernicus services, the CERTO prototype will enable the evaluation of these indicators for the continuum from lakes to deltas and coastal waters and support intermediate and end-users in industry and policy sectors, while ensuring compliance with their own monitoring requirements.
To demonstrate the value of CERTO outputs, six case study areas are selected: i) Danube Delta; ii) Venice Lagoon and North Adriatic Sea; iii) Tagus Estuary; iv) Plymouth Sound; v) Elbe Estuary and German Bight; and vi) Curonian Lagoon. Eighteen local and national stakeholders in the six European countries where the CERTO case study areas are located have been interviewed to identify user needs in terms of the contents and relevance of the CERTO prototype.
Initial analysis of the collected user requirements points to a need for: i) improved products with respect to spatial and temporal resolution, ii) water quality indicators that aggregate data, and iii) help in decision making and reporting, such as for the EU WFD and MSFD. To address these needs, several indicators are being developed within CERTO that use satellite-based estimates of water turbidity, suspended particulate matter and chlorophyll-a concentration, and include region-specific mean values, anomalies, percentiles (e.g., the chlorophyll-a 90th percentile) and trends. Two indicators are based on turbidity and suspended matter and aim at aiding the planning and management of industry and local authorities: one will allow the analysis of the maximum turbidity zone (or high loads zone), and the second will aim at characterising dredging events and their impacts in the study areas. Another indicator, based on the phenological analysis of phytoplankton blooms (i.e., bloom timing) that occur in these transitional regions, is also under development. The aim of this indicator is to further understand ecosystem functioning and to provide support for the implementation of additional phytoplankton metrics for the EU WFD. Based on Sentinel-2 and Sentinel-3 data, these indicators are transferable and comparable across time and space and are provided in near-real-time to enable a faster response. In addition, a more complex indicator is under development, the Social-Ecological System Vulnerability Index (SESVI), which integrates local knowledge and data, third-party modelled and satellite data as well as CERTO outputs, to characterise the main pressures in the case study areas and highlight hotspots of vulnerability in lagoons and estuaries due to human pressure and climate change.
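A percentile indicator of the kind listed above reduces, per pixel, to a statistic over the time dimension of the product stack, as in this short sketch with a placeholder array standing in for the actual CERTO products.

```python
# Sketch of a per-pixel percentile indicator, such as the chlorophyll-a 90th
# percentile over a time series of satellite-derived maps. The stacked array
# is a random placeholder, not a CERTO product.
import numpy as np

def per_pixel_percentile(chla_stack, q=90):
    """chla_stack: (time, rows, cols) with NaN for invalid observations."""
    return np.nanpercentile(chla_stack, q, axis=0)

chla_p90 = per_pixel_percentile(np.random.rand(50, 300, 300) * 20.0)
```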
This paper presents the suite of CERTO indicators that aim to better support water resources management and decision-making, and shows the progress achieved thus far.
Chlorophyll-a concentration, as a proxy for phytoplankton biomass, is a key variable for monitoring the highly dynamic transitional waters in coastal areas, which are often subject to anthropogenic pressures that severely modify their ecological status. Sentinel-3 data are especially suited for this purpose in the large coastal lagoons characteristic of Mediterranean floodplains. Their spatial, spectral and radiometric resolutions, together with a short revisit time, allow monitoring of the spatiotemporal changes of phytoplankton populations in these ecosystems in response to diffuse pollution, as well as the abrupt changes occurring after extreme meteorological events, such as floods, which alter their hydrodynamics and water composition.
We present the validation of the Sentinel-3 Chlorophyll-a concentration product ([Chl-a]), produced by the C2RCC processors available in the SNAP software, in two Mediterranean coastal lagoons in Eastern Spain: Albufera de Valencia, a shallow hypereutrophic brackish lagoon with an ongoing restoration plan to limit its nutrient content; and Mar Menor, a hypersaline mesotrophic lagoon undergoing accelerated eutrophication, which severely affects its fisheries and its important recreational uses.
For this validation exercise, sets of 1413 and 185 in situ [Chl-a] samples were available for the Mar Menor and Albufera lagoons, respectively. In the Mar Menor the in situ data were measured between August 2016 and October 2019, while for the Albufera the time span was January 2016 to February 2018.
A total of 1142 Sentinel-3/OLCI images, from April 2016 to February 2020, were processed with C2RCC and filtered using the processor’s quality flags. The match-up points were statistically filtered and the correlation with the C2RCC [Chl-a] product was analyzed for the whole dataset, per lake and in the time series.
In the Mar Menor lagoon, with in situ [Chl-a] ranging from ~0.1 to ~25 mg·m-3, the C2RCC accuracy (R2 = 0.69; RMSE = 4 mg·m-3) was acceptable for the spatial and temporal monitoring of this variable, closely following the time evolution of [Chl-a] in the studied period and identifying bloom episodes and abrupt changes after flooding events. In contrast, in the Albufera lagoon, C2RCC systematically underestimated [Chl-a] by about an order of magnitude relative to the in situ data, with large retrieval errors (R2 = 0.40; RMSE = 44 mg·m-3), precluding the spatio-temporal monitoring of [Chl-a] and suggesting that C2RCC may not be appropriate for such eutrophic or hypereutrophic ecosystems.
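For reference, the match-up statistics quoted above (R2 and RMSE between in situ and satellite-derived [Chl-a]) can be computed with a few lines of code; the array names below are illustrative:

```python
# Minimal sketch of match-up validation statistics between in situ and
# satellite-derived [Chl-a]; inputs are paired 1-D arrays in mg m-3.
import numpy as np

def matchup_stats(chl_insitu: np.ndarray, chl_satellite: np.ndarray) -> dict:
    residuals = chl_satellite - chl_insitu
    rmse = np.sqrt(np.mean(residuals ** 2))            # root mean square error
    r = np.corrcoef(chl_insitu, chl_satellite)[0, 1]   # Pearson correlation
    return {"R2": r ** 2, "RMSE": rmse, "bias": residuals.mean(), "n": residuals.size}
```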
Phaeocystis globosa is a nuisance haptophyte species that forms annual blooms in the southern North Sea and other coastal waters. At high biomass concentrations, these are considered harmful algal blooms due to their deleterious impact on local ecosystems and economies, and they are considered an indicator of eutrophication. In the last two decades, methods have been developed for the optical detection and quantification of these blooms, with potential applications for autonomous in situ or remote observations. However, recent experimental evidence suggests that the interpretation of the optical signal and its exclusive association with P. globosa may not be accurate. Specifically, in the North Sea, blooms of P. globosa are synchronous with those of the diatom Pseudo-nitzschia delicatissima, which is found growing over and inside the P. globosa colonies. P. delicatissima is another toxic, harmful bloom-forming species with pigmentation and an optical signature similar to those of P. globosa.
In this study, we combine new and published measurements of pigment composition and inherent optical properties from pure cultures of several algal and cyanobacterial groups, together with environmental spectroscopy data, to identify the pigments generating the optical signals captured by two established algorithms: (1) The classification tree based on the positions of the maxima and minima of the second derivative of the water-leaving reflectance data; and (2) the Chlorophyll c3 (Chl c3) concentration estimation with a reflectance exponential baseline height. We further evaluate the association of those pigments and optical signals with P. globosa.
Our results show that the interpretation of the pigment(s) generating the optical signals captured by both algorithms was incorrect and that the published methods are not specific to P. globosa, even in the context of the phytoplankton assemblage of the southern North Sea. The positions of the maxima and minima in the second derivative of the water-leaving reflectance are defined by the relative concentrations of total Chl c and photoprotective carotenoids (PPC), and not Chl c3 and total carotenoids, as previously suggested. Similarly, the exponential baseline height captures the signal of total Chl c concentration, and cannot isolate the signal from Chl c3 due to the large overlap in the Soret band center position within the Chl c family. Additionally, the positions of the minima and maxima of the second derivative can be affected by the presence of Chl b and by environmental conditions influencing PPC concentration.
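To make the second-derivative diagnostic discussed above concrete, a minimal sketch of locating the extrema of the second derivative of a reflectance spectrum is given below; the smoothing window, polynomial order and wavelength grid are assumptions for illustration, not the published algorithm's settings:

```python
# Illustrative sketch: positions of maxima/minima of the second derivative of a
# (hyperspectral) water-leaving reflectance spectrum, via a Savitzky-Golay filter.
import numpy as np
from scipy.signal import savgol_filter, argrelextrema

def second_derivative_features(wavelength_nm: np.ndarray, rrs: np.ndarray):
    # Savitzky-Golay smoothing returns the 2nd derivative directly (deriv=2)
    d2 = savgol_filter(rrs, window_length=11, polyorder=3, deriv=2,
                       delta=float(np.median(np.diff(wavelength_nm))))
    maxima = wavelength_nm[argrelextrema(d2, np.greater)[0]]
    minima = wavelength_nm[argrelextrema(d2, np.less)[0]]
    return maxima, minima   # wavelengths of local maxima and minima of d2Rrs/dlambda2
```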
More fundamentally, we found that the optical and pigment signatures of Phaeocystis species are part of a broad pigmentation trend across unrelated taxonomic groups, related to the presence of chlorophyll c3. Based on a large database of pigmentation patterns from pure cultures, we observed that the presence and amount of Chl c3 is positively correlated with the concentration of total Chl c and negatively correlated with PPC concentration. This has important consequences for the interpretation of pigment and optical data, particularly in environments where multiple species with similar pigmentation patterns co-occur, as observed in the southern North Sea during P. globosa blooms.
The available information on the relative contribution of cell biomass and pigments to the total pool from Chl c3-containing diatoms and from P. globosa suggests that it is not possible to unequivocally assert that the signal is generated by P. globosa. This is a consequence of year-to-year variation in the relative cellular biomass of these species during the bloom, the progressive colonization of P. globosa by P. delicatissima as the bloom develops, and the low pigment-to-cellular-biomass ratio of P. globosa compared to the diatoms.
We therefore propose and validate an algorithm to estimate the fraction of Chl c3 in the total Chl c pool, as it carries information on the presence of this pigment and on the relative dominance of species presenting the pigmentation pattern of high total Chl c and low PPC. In the southern North Sea, this pigmentation pattern is only observed in P. globosa, P. delicatissima and Rhizosolenia species, the first two being HAB species and dominating the biomass and pigment signal. The Chl c3 fraction in the southern North Sea can therefore be interpreted as an indication of the relative dominance of HAB species. The new algorithm suffers minimal influence from co-occurring pigments (e.g., Chl b, other forms of Chl c, carotenoids) and can be applied to absorption or reflectance data, with potential for application to the next generation of space-borne aquatic hyperspectral missions. We further elaborate general recommendations for the future development of algorithms for phytoplankton assemblage composition, considering the biology, ecology, optical signal and its interpretation.
Superficial aquatic environments, including oceans, lakes and rivers, contain a great diversity of particulate and dissolved materials. The water-leaving radiance is directly driven by the optical properties of those in-water materials interacting with light, also known as optically active water constituents (OAWC). In turn, their inherent optical properties (IOPs), such as the absorption coefficient or the scattering matrix, depend on the nature of the particles in suspension (i.e., microalgae, sediments). More precisely, the IOPs of suspended sediments depend on their mineralogy, including the spectral complex refractive index, and on their size distribution. Nevertheless, the relationship between remotely measurable water reflectance and the IOPs still needs to be better elucidated in turbid and very turbid waters. One goal of this study was to reassess the IOP-reflectance forward model over a wide range of water turbidity, accounting for the polarized nature of light. Moreover, particular attention was paid to evaluating the role of the viewing geometry (sun and viewing angles, and the relative azimuth angle between Sun and sensor) and to providing the uncertainty attached to this widely used forward model.
A second part of this work was dedicated to hyperspectral and multispectral analysis of the performance of retrieval algorithms based on the developed forward model. A specific inversion scheme was applied to a series of in situ datasets of moderately to highly turbid waters. Results showed the need to consider the actual multimodal size distribution and a spectrally dependent refractive index to accurately reproduce hyperspectral observations. However, the presence of very coarse particles (> 20 µm) produces ambiguities in the retrievals due to their minimal contribution to the water-leaving radiance. Conversely, these findings demonstrate the sensitivity of the measured reflectance to the size distribution, thus providing a framework for size distribution retrieval from space. Based on these results, we argue that physically based analysis of the signal remains a fundamental step towards greater genericity and applicability of suspended sediment retrieval algorithms, helping to reconcile the rapidly increasing number of regional algorithms.
The main objective of the H2020 funded project Water Quality Emergency Monitoring Service (wqems.eu/) is to provide operational water quality information to environmental authorities and the water utilities industry in relation to the quality of the ‘water we drink’. To reach this goal, the project focuses its activities on the monitoring of lakes using a variety of information sources.
The project includes five pilot areas in Greece, Italy, Spain, Germany, and Finland. This work focuses on Lake Pien-Saimaa, a medium-sized lake in southeastern Finland. It is an important source of fresh water for the city of Lappeenranta, with the water intake located in the southern part of the lake. Lake Pien-Saimaa is fragmented and includes several islands, and it exhibits variable and site-specific water quality features. The lake has substantial intrinsic value to the local population, and its many small islands and beaches host numerous holiday houses and recreational activities. The main anthropogenic pollution sources (e.g., phosphorus load) are the surrounding agricultural and peat production areas and the industrial point source of the Kaukas pulp and paper mill. The main concern in the lake is monitoring algal blooms for early warning purposes.
The EO-based data flow utilizes Copernicus Sentinel-2 images processed by the Finnish Environment Institute (SYKE). SYKE provides information on chlorophyll-a concentration and turbidity as maps and time series for small areas in various parts of the lake. In situ observations are gathered from bottle samples that are analyzed in the laboratory and from instruments installed at an automated water monitoring station located near the water intake. These data are used for the validation and calibration of satellite data. The spatial and temporal behavior of water quality parameters is visualized for end users through the TARKKA map service operated by SYKE.
The poster will present results from the EO processing, in situ data collection and the visualization of the results.
This project has received funding from the European Union’s Horizon 2020 Research and Innovation Action programme under Grant Agreement No 101004157.
Satellite images play a crucial role in monitoring Earth’s oceans, especially when it comes to oil spills. Traditionally, detection methods use Synthetic Aperture Radar (SAR) images, which allow the detection of oil spills independent of clouds or daylight. However, SAR-based methods are limited by wind conditions as well as by look-alikes. Multispectral satellite images are well suited to fill this gap, since they allow the detection of pollution when very weak or strong winds preclude the use of SAR images. Here, a case of oil spill contamination is investigated in an inland lake in northern Greece using Sentinel-2 and PlanetScope multispectral images. This case is characterized by a small sample of known oil spills, making the study even more challenging. First, we implement different atmospheric corrections to obtain the remote sensing reflectance for the multispectral bands. Our sensitivity analysis shows that the detection capability for oil spills is not constrained to the visible (VIS) part of the spectrum, but also extends to the near infrared (NIR) and the shortwave infrared (SWIR). Among these, the NIR (833 nm) and narrow NIR (865 nm) bands appear to have the largest sensitivity to freshwater oil spills. Additionally, the oil spills investigated tend to enhance the remote sensing reflectance in the NIR and SWIR parts of the spectrum, but reduce it in the VIS bands, with the exception of the red band (665 nm), which behaves more ambiguously. Given the small number of known oil spill cases (just two) in this study, a pixel-based machine learning approach is implemented instead of an object-based one. Furthermore, the size of the oil spills determines the choice of bands, given that low-resolution bands reduce the pixel sample, while high-resolution bands are limited in number (only four are available). Finally, the chosen bands are fed into a deep neural network with two hidden layers, and the optimal hyperparameters are investigated. Despite the limited oil spill sample, the results are encouraging, showing good detection capability.
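A pixel-based classifier with two hidden layers of the kind described above could look roughly like the following sketch; the layer sizes, train/test split and use of scikit-learn (rather than the authors' framework) are assumptions for illustration only:

```python
# Hedged sketch of a pixel-based two-hidden-layer classifier for oil/water pixels.
# X holds per-pixel reflectances for the selected bands; y holds binary labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_pixel_classifier(X: np.ndarray, y: np.ndarray):
    """X: (n_pixels, n_bands) remote sensing reflectances; y: 1 = oil, 0 = water."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    model = make_pipeline(
        StandardScaler(),                                   # scale each band
        MLPClassifier(hidden_layer_sizes=(32, 16),          # two hidden layers (sizes assumed)
                      max_iter=2000, random_state=0))
    model.fit(X_train, y_train)
    return model, model.score(X_test, y_test)               # held-out accuracy
```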
Hydrological models are a widely used tool to explore which small- and large-scale interventions are suitable for effectively managing water resources, and to gain understanding of this coupled human-natural system. While many processes in the hydrological system can be generalized at large scale, research in the realm of social hydrology has shown that many important decisions are made at the local level by a highly heterogeneous population, such as reservoir managers and farmers. Effective simulation of these decisions and their effects on the system thus also involves simulating the coupled human-natural system and its feedback loops simultaneously at the local level and at basin scale. Fortunately, an increasing number of high-resolution datasets have become available, to a large extent driven by satellite observations, facilitating simulations at high resolution. Examples of datasets and methods include delineation of functional crop fields with machine learning using data from Sentinel, WorldView-3, and high-resolution SAR, the associated field-scale availability of cropping and irrigation patterns, as well as high-resolution soil moisture data from downscaling of passive microwave observations and SAR.
Therefore, to capitalize on these advances, CWatM has been developed further in several ways. First, we have enabled CWatM to run at 30’’ resolution (< 1 km at the equator), with examples in the Bhima basin (India), Burgenland (Austria), as well as in China and Israel. Associated developments include specific crops and fallowed land, calibrated reservoir operations, water distribution areas from reservoirs (command areas) or rivers (lift areas), canal leakage, as well as explicit source- and sector-specific water demands. An updated calibration scheme calibrates subbasins in a cascading fashion from upstream to downstream and generates parameter maps for each subbasin. The calibration scheme uses an evolutionary computation framework in Python (the DEAP package) and a modified version of the Kling-Gupta Efficiency as the objective function for comparing simulated with observed streamflow at subbasin scale.
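For illustration, a modified Kling-Gupta Efficiency of the kind used as the objective function could be written as below (this follows the Kling et al., 2012 formulation with a coefficient-of-variation term; whether CWatM uses exactly this variant is an assumption):

```python
# Hedged sketch of a modified Kling-Gupta Efficiency (KGE') between simulated
# and observed streamflow; 1 is a perfect score.
import numpy as np

def kge_modified(sim: np.ndarray, obs: np.ndarray) -> float:
    r = np.corrcoef(sim, obs)[0, 1]                               # correlation
    beta = sim.mean() / obs.mean()                                # bias ratio
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())   # variability (CV) ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

Within an evolutionary calibration (e.g., DEAP), this value would simply be returned as the fitness of each candidate parameter set.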
Furthermore, we have made advances in increasing the computational speed of CWatM. For example, CWatM now uses MODFLOW 6 through its Basic Model Interface (BMI), accessed from Python via the FloPy and xmipy packages. Using this approach, we tested high-resolution groundwater simulation at 250 m in the Bhima basin (India) and 100 m in Burgenland (Austria). Here, MODFLOW physically represents one aquifer layer and simulates groundwater interactions with soil and surface water bodies, as well as pumping demand, at a daily timestep. In both areas, the model reproduced the observed water table better than at lower resolution. In addition, many grid-based calculations, such as the soil-water balance, can now be run in parallel on the GPU, enabling the soil-water balance to be solved tens of times faster, depending on the hardware configuration.
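As a rough, hedged sketch of what such a BMI coupling loop can look like (this is not the CWatM implementation; the library path, working directory and the exact xmipy constructor arguments are assumptions), MODFLOW 6 can be stepped from Python as follows:

```python
# Conceptual sketch: driving MODFLOW 6 through its BMI from Python with xmipy.
from xmipy import XmiWrapper

# Path to the compiled MODFLOW 6 shared library and the model directory are illustrative.
mf6 = XmiWrapper("libmf6.so", working_directory="mf6_model")
mf6.initialize()
end_time = mf6.get_end_time()
while mf6.get_current_time() < end_time:
    # ...the hydrological model would set recharge/pumping arrays here...
    mf6.update()          # advance groundwater one (daily) timestep
    # ...and read back heads to compute exchange with soil and surface water...
mf6.finalize()
```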
Finally, IIASA developed, in collaboration with IVM-VU, an agent-based model (ABM) that simulates millions of individual farmers and their bi-directional interactions with CWatM at field scale, parameterized with the aforementioned high-resolution satellite products. However, because each agent requires its own individually operated soil-water balance, a very high-resolution hydrological grid is required, limiting the ability of CWatM to be run in large basins. Therefore, to manage this effectively, we introduced land management units in CWatM. In this concept, CWatM is still run at 30’’ resolution with 6 different land use types, but crop land use types are further subdivided based on land ownership and become dynamically sized hydrological response units (HRUs) within the grid cell. These land management units can be operated independently by farmers through the ABM. In this manner, all land management practices (e.g., crop planting date and irrigation) and soil processes (e.g., percolation, capillary rise, and evaporation) are simulated independently per farmer, thus allowing simulation of multiple independently operated farms within a single grid cell. Runoff and percolation to groundwater are aggregated from all HRUs within a grid cell to simulate groundwater and river discharge at the grid scale. This enables CWatM to simulate the bi-directional interaction of individual farmers with the hydrological system and their adaptive behaviour at the true farm scale, while still simulating the hydrological processes at basin scale. We show an example of ~11.1 million farming households in the Krishna basin in India, simulated on a personal laptop. Calibration with streamflow shows good model performance, and as a next step, we plan to further calibrate with high-resolution soil moisture products at ~100 m resolution.
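The aggregation step from HRUs back to the grid cell described above amounts to an area-weighted sum; a minimal sketch (illustrative variable names, not CWatM code) is:

```python
# Minimal sketch: aggregate per-HRU fluxes back to their 30'' grid cell so that
# routing and groundwater can run at grid scale. HRU area fractions sum to 1.
import numpy as np

def aggregate_hrus_to_cell(runoff_hru: np.ndarray,
                           percolation_hru: np.ndarray,
                           area_fraction_hru: np.ndarray):
    runoff_cell = float(np.sum(runoff_hru * area_fraction_hru))
    percolation_cell = float(np.sum(percolation_hru * area_fraction_hru))
    return runoff_cell, percolation_cell
```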
Cyanobacteria are a persistent problem in inland waters. They hamper the use of water for recreation and drinking purposes. Being odorous and foul-looking, they are also unwanted guests in urban waters. Water Insight has supported a number of water managers in the Netherlands and abroad to monitor the onset of cyanobacteria blooms and take early and appropriate action.
Based on simultaneous measurements of cyano-chlorophyll (with a laboratory fluoroprobe) and optical phycocyanin (WISPstation), we were able to establish a good relationship between the two parameters. This relationship could be extended to biovolume based on a large Dutch database of measurements. Water managers often prefer biovolume as an indicator of the abundance of cyanobacteria. The general validity of these conversions should be investigated further.
We present three use cases illustrating the usability of our concept for monitoring blooms with the most suitable combination of satellite observations and our proprietary optical sensors.
Case 1: Bathing water monitoring
A WISPstation was used in two small lakes (“Agnietenplas” and “Bosplas”) in the Netherlands to demonstrate the added value of continuous in situ monitoring of the growth and decline of cyanobacteria. The optical data record clearly shows the added value of high-frequency measurements compared to a two-weekly sampling frequency. Short-term peaks are recognised, and the bathing water can be opened or closed on a daily basis instead of a two-weekly basis.
Case 2: Determination of the representativeness of WFD monitoring stations
Lake Lauwersmeer suffers from high-concentration blooms. The purpose of the Water Framework Directive is to take measures to improve the water quality to a ‘good’ ecological status; however, sparse sampling, and therefore limited insight, makes it difficult to take effective measures. In a pilot of H2020 e-Shape, satellite data were used to map EO-based phytoplankton biomass for WFD reporting in Lake Lauwersmeer and to study the representativeness of the existing monitoring stations.
Case 3: Early warning for nuisance blooms in a recreational harbour.
In this case, in situ optical measurements of the WISPstation serve two purposes: the high-frequency measurements are used as an early warning of upcoming blooms, while the spectral data serve to calibrate the atmospheric correction in the turbid and cyanobacteria-infested Lake Volkerak. Using this technique significantly improved the quality of Sentinel-2 BOA reflectances. Based on the early warning for blooms, the water manager temporarily closes a small harbour, preventing the nuisance blooms from entering.
EOMORES has received funding from the European Union’s Horizon 2020 research and innovation programme grant agreement 730066
e-Shape has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement 820852
The European Union Water Framework Directive (and similar directives) requires countries to monitor and report on the ecological status of inland and coastal water bodies, through biological, chemical and physical indicators. Countries reporting on a large number of water bodies struggle to collect sufficient observations to represent seasonal and interannual variability, particularly in dynamic systems influenced by terrestrial runoff. Monitoring of certain indicators can be complemented using Earth observation to fill such observation gaps and inform better management. Satellite remote sensing is particularly complementary and can be achieved at relatively low cost, but water quality products from satellites are still limited to relatively wide, open water bodies.
In shallow coastal waters and intertidal zones, runoff from nearby farmland, outflow from water treatment works and untreated sewage can lead to nutrient conditions in which macroalgae flourish and out-compete other beneficial plant life such as Zostera seagrasses. Such shifts can disturb local ecology and lead to loss of biodiversity, reduce important carbon sequestration and negatively affect the blue economy.
With the use of high resolution EO data such as Sentinel-2, together with image processing and machine learning techniques, we are able to observe the areal coverage of vegetation within a tidal lake in the southwest UK and estimate the seasonal variation in cover from a 5-year time series of Sentinel-2. Photography taken from unmanned aerial vehicles additionally provides a very high resolution (~4 cm) view for evaluation or creation of training data, and an estimate of macroalgae coverage itself. Quadrat surveys (which are the accepted reference method) in the region provide further information, but at limited spatial and temporal coverage. The differences in these three levels of observation (satellite, UAV, quadrat) are discussed with suggestions on how they might be reconciled in future so that the wide area and regular temporal coverage that satellites offer can be used in reporting.
Using a clustering approach with the Sentinel-2 data, we were able to assign pixels within the lake to classes relating to mud, water and vegetation. Aggregating into seasonal periods suggests that the vegetation coverage within the lake ranges from approximately 5 % of the intertidal area in winter up to 60 % in summer. UAV data, which only cover a portion of the lake, suggest a much lower summer coverage of 8 %, whilst the quadrat reference method reports 68 %. To be able to use EO techniques in future WFD activities, these methods will need calibration and agreement with relevant bodies so that the spatial and temporal benefits of EO data can be fully utilised.
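A clustering step of the kind used above could be sketched as follows; the choice of k-means, the number of clusters and the band stack are assumptions and not necessarily those of this study:

```python
# Hedged sketch: unsupervised clustering of Sentinel-2 pixel spectra, with the
# resulting clusters subsequently labelled as mud, water or vegetation.
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(reflectance_stack: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """reflectance_stack: (n_bands, height, width) surface reflectances."""
    n_bands, h, w = reflectance_stack.shape
    X = reflectance_stack.reshape(n_bands, -1).T                  # (n_pixels, n_bands)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return labels.reshape(h, w)                                   # cluster map

# Seasonal vegetation coverage is then the fraction of intertidal pixels that
# fall into the cluster labelled 'vegetation', aggregated per season.
```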
Lakes perform a multitude of functions, from regulating water flow and quality to providing food and income from fishing and tourism. Lakes moderate the local climate and provide water for drinking and irrigation. All of these functions are being affected by human actions. Pollution by untreated waste water and fertilizer use cause eutrophication and disturb the ecosystem’s balance. Warming of the climate causes enhanced evaporation, changing precipitation patterns, and affects the stable layering of lakes, which, in turn, affects the ecosystem by changing the availability of oxygen and nutrients. And these effects influence each other in various and complicated ways. Thus the warming climate may exacerbate the influence of increased nutrient influx.
People living near lakes are directly affected by these changes, some of which can be observed using satellite instruments. Monitoring the quality of lake water can help understand processes leading to changes in lakes, which will aid the development of mitigation or adaptation strategies. Moreover, finding relationships between satellite data and disease or other risks allows their prediction, providing the basis for an early response.
Here, we present first results on the monitoring of the greenness of lakes for three different applications, each addressing an aspect of human health. First: Blue algae, or cyanobacteria, thrive in nutrient-rich, warm waters. In high numbers, they outcompete other algae and plants and have toxic effects on animals and humans. Second: The water hyacinth, which has evolved into a major disturbance to water traffic, fishery, and lake ecosystems within a few decades. Dense water hyacinth mats provide breeding grounds for vectors of malaria and leishmaniasis. Third: Phytoplankton, whose abundance was shown to be related to cholera incidences in various regions in Asia and Africa, as the bacterium responsible for the disease associates with phytoplankton.
Monitoring turbidity in water bodies provides useful information on hydrological processes occurring at the watershed scale as well as on the state of aquatic ecosystems, including bacteriological contamination. Quantification of suspended sediment is also important for reservoir management, since it allows monitoring of silting, which can affect dam functioning, while providing important information for water treatment. Remote sensing provides a useful tool for monitoring inland waters at the regional scale, but only recent satellites provide the spatial and temporal resolution necessary to follow the dynamics of small water bodies.
This study focuses on the Sahelian region, where ponds, lakes and reservoirs play a major role for local populations. Given their small size, their important temporal variability, and the scarcity of in situ monitoring networks, information on their dynamics and water quality is not available at the regional scale. In addition, Sahelian water bodies are very reactive to climate and human forcing and display complex and sometimes unexpected behaviours, such as increasing trends in water area across the Sahel, which raises questions about their future evolution in a context of environmental change and demographic growth.
We explore the capability of the Sentinel-2 optical sensor MSI to retrieve information on water body variability at large scale, using Google Earth Engine to process several Sentinel-2 tiles. Overall, 1672 Sahelian lakes are analysed and compared to 5666 other lakes in semi-arid regions worldwide.
Water reflectance in the visible and NIR bands varies significantly across lakes and can reach extremely high values (above 0.4 in the NIR band) in some of them, for example several lakes located in Niger, which are among the brightest in the world.
In situ measurements over some of these lakes highlight the high concentration of suspended particulate matter (SPM), which increases water reflectance. In addition, the SPM is mainly composed of fine kaolinites, which display a low absorption coefficient and hence high reflectance. Finally, the important fraction of very fine mineral particles (a major volumetric mode is found at 200-300 nanometers) may induce increased scattering and higher backscattering, both of which contribute to increased reflectance. High-aerosol conditions and sunglint effects are efficiently masked by the image processing and post-processing applied (based on thresholds on the MNDWI index and on the reflectance in the blue band) and do not significantly affect the reported results.
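The masking logic mentioned above can be illustrated with a short sketch; the threshold values below are placeholders, not those used in the processing chain:

```python
# Illustrative sketch of water masking from an MNDWI threshold (green vs SWIR)
# combined with a blue-band reflectance cap against bright aerosols/sunglint.
import numpy as np

def water_mask(green: np.ndarray, swir: np.ndarray, blue: np.ndarray,
               mndwi_threshold: float = 0.0, blue_max: float = 0.2) -> np.ndarray:
    mndwi = (green - swir) / (green + swir + 1e-9)   # Modified NDWI
    # keep water-like pixels (high MNDWI) that are not abnormally bright in the blue
    return (mndwi > mndwi_threshold) & (blue < blue_max)
```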
At the regional scale, the brightest lakes are identified as relatively small lakes situated in areas with low vegetation cover, where erosion and sediment transport are more likely. However, the converse is not always true, and low reflectance values can be encountered in small lakes in sparsely vegetated areas, for example lakes fed by the water table or by flooding of the Niger River.
Observations by high-resolution sensors such as Sentinel-2 are thus an efficient tool to derive information on the spatial variability of water color in relation to eco-hydrological characteristics at the regional scale.
Lake water quality is a key factor for human wellbeing and environmental health, and is affected by climate change and by anthropogenic activities such as urban and domestic wastewater discharges into inflowing streams and agriculture. In situ measurements are traditionally conducted and widely accepted as the instrument for water quality monitoring. However, in many regions classical monitoring capacities are limited, and for large water bodies they lack the required spatial and temporal coverage. In this context, remote sensing has great potential and can be used for assessing the spatio-temporal dynamics of water quality in a cost-effective and informative manner.
The Copernicus Sentinel-3 OLCI instrument was launched in February 2016 and covers 21 spectral bands (400-1020 nm). It has been providing accessible products since 2017, enabling the monitoring of water bodies at 300 m resolution on an almost daily basis.
Lake Sevan (40°23‘N, 45°21‘E) is located in the Gegharkunik province of Armenia at an altitude of 1900 m a.s.l. It is Armenia’s largest water body and the largest freshwater resource for the whole Caucasus region. At present, lake water quality is progressively deteriorating due to eutrophication, water level fluctuations, and climate warming, and the lake suffers from massive cyanobacterial blooms. Hence, it is very important to study key water quality variables (e.g. Chl-a, turbidity, harmful algal blooms) describing the ecological status of the lake, and their annual and seasonal dynamics. We emphasize that remote sensing can significantly contribute to this monitoring demand.
In situ measurements have been implemented since 2018 in the frame of German-Armenian joint projects (SEVAMOD and SEVAMOD2), providing detailed information on the water quality of the lake at a local scale. These data are well suited for a basic characterization of the problem but do not provide enough coverage for tracking the qualitative changes of the water in space and time.
Hence, our research aimed at assessing the seasonal and spatial water quality dynamics in Lake Sevan over the five years 2017-2021 using Copernicus Sentinel-3 products.
The satellite data processing engine eoLytics was used for processing the Sentinel-3 data for water quality assessment. EoLytics is based upon the MIP Inversion and Processing System, which was initially developed at DLR from 1996 onwards and has been further developed by EOMAP since 2006. The fully physics-based, sensor-generic algorithms in MIP do not require in situ data for calibration.
The cloud-free part of the daily time series for Lake Sevan comprises 474 scenes. Using eoLytics algorithms, the data were processed for the water quality parameters Chl-a, total suspended matter (TSM) and a harmful algal bloom indicator (HAB indicator). It is noteworthy that the Chl-a and TSM processing algorithms provide fully quantitative outputs, while the HAB algorithm provides a semi-quantitative indicator.
Field campaigns of in situ measurements have been conducted on a monthly basis since 2018. In situ measurements are available at two locations for 2018-2020 and at three locations for 2021.
This study envisaged the following steps: (i) the analysis of the annual and seasonal remotely sensed data in order to reveal the spatio-temporal characteristics of water quality; (ii) the comparison of in situ measured and remotely retrieved data via regression analysis in order to understand the relationship between the data from the different sources.
The spatio-temporal analysis of the seasonal characteristics of Chl-a follows the typical plankton succession dynamics in large lakes and usually shows maximum values in summer (June and July). However, the seasonal dynamics of Chl-a differ between years and are most likely driven by meteorological dynamics. This link between meteorological variables and plankton dynamics is a key aspect for climate impact assessments; it is hardly visible in traditional monitoring data but well observable in remotely sensed data due to their higher temporal resolution. The spatial patterns in the lake point to a large influence of external nutrient inputs, as high Chl-a values are repeatedly observed in the vicinity of polluted inflows.
The HAB indicator follows the overall seasonal trend of chlorophyll-a, with the highest occurrence in 2018, somewhat less in 2019, and much lower occurrence in 2020 and 2021; 2017 shows the lowest HAB occurrence.
An initial comparison of in situ measured and remotely sensed Chlorophyll-a via linear regression revealed a significant relationship with a relatively low error (RMSE = 0.403).
At this stage it can be concluded that the maximum of the Chl-a content during this 5-year period shifted slightly from August to June and was regularly associated with the formation of HABs. The validation of the remote sensing data needs ongoing effort in order to facilitate a deeper analysis of seasonal trends and spatio-temporal patterns. The observations based on Sentinel-3 sensors provide extremely valuable information on water quality dynamics in Lake Sevan and complement the results from traditional monitoring. The long-term monitoring strategy should therefore exploit the strengths of both approaches, and remote sensing is considered a key element of the foreseen monitoring programme for this highly sensitive and important water body.
Acknowledgments
This study was supported by following projects:
1. SevaMod - Project ID 01DK17022 - Funding Institution: Federal Ministry for Education and Research of Germany "Development of a model for Lake Sevan for the improvement of the understanding of its ecology and as instrument for the sustainable management and use of its natural recourses"
2. SevaMod2 - Project ID 01DK20038 - Funding Institutions: Federal Ministry for Education and Research of Germany (91.5% of planned costs) and Ministry of Environment of the Republic of Armenia (8.5% of planned costs) “Building up science-based management instruments for Lake Sevan, Armenia”
3. Project ID - 20TTCG-1F002 - Science Committee of the Ministry of Education and Science, Culture and Sport of RA "The rising problem of blooming cyanobacteria in Lake Sevan: identifying mechanisms, drivers, and new tools for lake monitoring and management"
4. Project ID - 21T-1E252 - Science Committee of the Ministry of Education and Science, Culture and Sport of RA "Assessing spatio-temporal changes of the water quality of mountainous lakes using remote sensing data processing technologies".
The Water Framework Directive (2000/60/EC) (WFD) states that all European Union (EU) member states must implement the monitoring and estimation of the ecological status of their territorial inland water bodies. It requires classifying the status into five classes, from “very bad” to “high”, and aims to achieve at least “good” ecological status of inland waters by 2027 by all required means. However, monitoring of the several ecological parameters required by the WFD is based on in situ sampling and subsequent laboratory analysis, which are both time-consuming and costly. Hence, this cannot be achieved in a timely and frequent manner at country scale. Therefore, satellite observation of water quality appears to be a promising and efficient tool to help meet the WFD requirements (Papathanasopoulou et al., 2019). Nonetheless, there is still a need to improve the accuracy of the satellite-derived products used to classify water body status, for each water quality parameter that can be remotely sensed: chlorophyll-a concentration ([chlo-a]), Secchi-disk depth, turbidity and suspended matter concentration (Giardino et al., 2019).
In order to evaluate the relevance of current satellite products for ecological status monitoring, this study was based on numerous French lake sites where field data were previously collected. A dataset was composed of in situ measurements from the French WFD regulatory monitoring network and the long-term Observatory on Lakes (OLA), as well as from other public institutes (research or territorial management). This dataset covers ~325 sites from 2014 to 2017. Over the period covered by Landsat 8 and Sentinel-2, it includes ~1000 to 1900 values each of chlorophyll-a concentration ([chlo-a]), Secchi-disk depth, turbidity and suspended matter concentration. Corresponding satellite products were generated from Sentinel-2 and Landsat-8 imagery through our processing chain: first, level 2A water reflectances were produced with the atmospheric correction (AC) algorithm “Glint Removal for Sentinel-2 like data” (GRS) (Harmel et al., 2018); second, the water reflectance images were masked using water and cloud masks computed with sentinel-hub’s “s2cloudless” for S2/MSI and the original cloud masks for Landsat 8. To evaluate the quality and representativeness of the satellite products, match-up comparisons were performed.
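For reference, the s2cloudless masking step mentioned above can be invoked in a few lines; the detector parameters below are the library defaults and are assumptions rather than the settings used in this processing chain:

```python
# Hedged sketch: Sentinel-2 cloud masking with sentinel-hub's s2cloudless,
# assuming a stack of top-of-atmosphere reflectances for all 13 MSI bands.
import numpy as np
from s2cloudless import S2PixelCloudDetector

def cloud_mask(toa_reflectance: np.ndarray) -> np.ndarray:
    """toa_reflectance: (n_scenes, height, width, 13) array scaled to [0, 1]."""
    detector = S2PixelCloudDetector(threshold=0.4, average_over=4,
                                    dilation_size=2, all_bands=True)
    return detector.get_cloud_masks(toa_reflectance)   # boolean (n_scenes, h, w)
```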
Our results demonstrate that, in certain environments and circumstances, the sunglint signal can represent a major part of the water-leaving reflectance. It can lead to a 10-fold bias in water quality estimates, which confirms the importance of including a sunglint correction step, even though no consensus exists on the choice of a particular atmospheric correction algorithm (Pahlevan et al., 2021). Focusing on [chlo-a] retrieval, we implemented several widely used algorithms from the literature and adapted them to Landsat 8 and Sentinel-2 data. We calculated [chlo-a] following several modalities: (i) with the original papers’ calibrations, (ii) after recalibration on the region of interest, and (iii) with calibrations defined for each Optical Water Type (OWT) by Neil et al. (2019). We also implemented a spectral angle mapper (SAM) method to identify OWTs as defined by Spyrakos et al. (2018), as recently implemented for MERIS by Liu et al. (2021).
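The spectral angle mapper used for OWT identification reduces to a single angle computation per candidate class; a minimal sketch (assuming the OWT mean spectra have already been resampled to the sensor bands) is:

```python
# Minimal sketch: assign a pixel spectrum to the closest optical water type (OWT)
# by minimising the spectral angle between the spectrum and each OWT mean spectrum.
import numpy as np

def spectral_angle(spectrum: np.ndarray, reference: np.ndarray) -> float:
    cosine = np.dot(spectrum, reference) / (np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cosine, -1.0, 1.0)))   # radians; smaller = more similar

def classify_owt(spectrum: np.ndarray, owt_means: np.ndarray):
    """owt_means: (n_owt, n_bands) mean reflectance spectra of the OWT classes."""
    angles = np.array([spectral_angle(spectrum, ref) for ref in owt_means])
    return int(np.argmin(angles)), float(angles.min())     # best OWT and matching score
```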
For high-altitude lakes in the Alps, classified as clear oligotrophic waters, the ocean-colour algorithm OC3 performs best, detecting low [chlo-a] in the range of 1 to 10 µg/L (MAE < 2.6 µg/L, RMSE < 4.8 µg/L, MAPE < 65 %, SSPB (signed bias) < 15 %), which is comparable to recent neural-network algorithms. In meso- to eutrophic lakes, several algorithms performed satisfactorily, such as red-fluorescence-, NDCI- or 2-3-band-based algorithms, but with variable accuracy depending on the site. In a few lakes of the Brittany and Aquitaine regions, optically classified as eutrophic to hypertrophic turbid lakes, performance is good enough to distinguish bloom periods and the shifts in ecological status from “moderate” and “poor” to “bad”. The ranges of water quality parameters associated with the OWT classes defined in Spyrakos et al. (2018) are also in good agreement with the in situ data observed at our sites. Moreover, a matching score derived from SAM and the OWT classes was implemented to measure similarity with the OWT shapes. Matching scores will soon guide the choice of the best-suited algorithm. This matching score was also shown to be complementary to cloud and water masks, providing further ability to mask out pixels that are likely affected by bottom reflectance, adjacency effects from the shores, or badly masked clouds.
This work, whilst ongoing, shows that spectral identification performs well with high-resolution satellite data and is useful for optimizing algorithm selection. Reasoning by analogy on optical types, we expect to successfully use the OWT classification to retrieve [chlo-a] and other parameters for lakes where in situ data are not available. The perspective of this study is to proceed with a census of the ecological status of French lakes. This can be seen as a crucial step towards meeting the WFD commitments, since field data are scarce or even absent for many sites included in the WFD.
References
Giardino, C., Brando, V.E., Gege, P., Pinnel, N., Hochberg, E., Knaeps, E., Reusen, I., Doerffer, R., Bresciani, M., Braga, F., Foerster, S., Champollion, N., Dekker, A., 2019. Imaging Spectrometry of Inland and Coastal Waters: State of the Art, Achievements and Perspectives. Surv Geophys 40, 401–429.
Harmel, T., Chami, M., Tormos, T., Reynaud, N., Danis, P.-A., 2018. Sunglint correction of the Multi-Spectral Instrument (MSI)-SENTINEL-2 imagery over inland and sea waters from SWIR bands. Remote Sensing of Environment 204, 308–321.
Liu, X., Steele, C., Simis, S., Warren, M., Tyler, A., Spyrakos, E., Selmes, N., Hunter, P., 2021. Retrieval of Chlorophyll-a concentration and associated product uncertainty in optically diverse lakes and reservoirs. Remote Sensing of Environment 267, 112710.
Neil, C., Spyrakos, E., Hunter, P.D., Tyler, A.N., 2019. A global approach for chlorophyll-a retrieval across optically complex inland waters based on optical water types. Remote Sensing of Environment 229, 159–178.
Pahlevan, N., Mangin, A., Balasubramanian, S.V., Smith, B., Alikas, K., Arai, K., Barbosa, C., Bélanger, S., Binding, C., Bresciani, M., Giardino, C., Gurlin, D., Fan, Y., Harmel, T., Hunter, P., Ishikaza, J., Kratzer, S., Lehmann, M.K., Ligi, M., Ma, R., Martin-Lauzer, F.-R., Olmanson, L., Oppelt, N., Pan, Y., Peters, S., Reynaud, N., Sander de Carvalho, L.A., Simis, S., Spyrakos, E., Steinmetz, F., Stelzer, K., Sterckx, S., Tormos, T., Tyler, A., Vanhellemont, Q., Warren, M., 2021. ACIX-Aqua: A global assessment of atmospheric correction methods for Landsat-8 and Sentinel-2 over lakes, rivers, and coastal waters. Remote Sensing of Environment 258, 112366.
Papathanasopoulou, E., Simis, S., Alikas, K., Ansper, A., Anttila, S., Attila, J., Barillé, A.-L., Barillé, L., Brando, V., Bresciani, M., Bučas, M., Gernez, P., Giardino, C., Harin, N., Hommersom, A., Kangro, K., Kauppila, P., Koponen, S., Laanen, M., Neil, C., Papadakis, D., Peters, S., Poikane, S., Poser, K., Pires, M.D., Riddick, C., Spyrakos, E., Tyler, A., Vaičiūtė, D., Warren, M., Zoffoli, M.L., 2019. Satellite-assisted monitoring of water quality to support the implementation of the Water Framework Directive, White paper. Eomores.
Spyrakos, E., O’Donnell, R., Hunter, P.D., Miller, C., Scott, M., Simis, S.G.H., Neil, C., Barbosa, C.C.F., Binding, C.E., Bradt, S., Bresciani, M., Dall’Olmo, G., Giardino, C., Gitelson, A.A., Kutser, T., Li, L., Matsushita, B., Martinez-Vicente, V., Matthews, M.W., Ogashawara, I., Ruiz-Verdú, A., Schalles, J.F., Tebbs, E., Zhang, Y., Tyler, A.N., 2018. Optical types of inland and coastal waters: Optical types of inland and coastal waters. Limnol. Oceanogr. 63, 846–870.
Worldwide, freshwater systems are impacted by climate warming and anthropogenic forcing, which influence water level and runoff regimes via changes in precipitation and land use patterns. Especially for river-connected lake systems, these rapid changes might have far-reaching consequences, as inland nutrient loading might accumulate along the river system and finally lead to destabilization of distant ecosystems such as estuaries. Thereby lakes, through their influence on the flow regime, might play a critical role in how much and how far local eutrophication events are transported along the river network. Currently, studies on river-connected lake systems are scarce and largely based on data with low temporal and spatial resolution. Furthermore, existing meta-ecosystem theory rarely takes lake-to-lake connectivity into account. In this study, we modeled how local nutrient input influences phytoplankton and how both propagate along strongly or weakly connected lakes. These theoretical investigations were accompanied by an extensive field study on lakes located along the Upper Havel river system in northern Germany, including shallow and deep lakes and covering various flow regimes. We investigated the effects of local nutrient loading on regional-scale plankton development along river-connected lake chains. To achieve high temporal and spatial resolution, we measured water constituents by combining automated in situ probes with ground-based, space- and airborne reflectance measurements. The field data show that upstream nutrient input drove phytoplankton development along the entire lake chain due to tight hydrological linkage. Our results suggest that similar point sources can result in profoundly different maximum intensity, spatial range and regional-scale magnitude of eutrophication impacts in lake chains, depending on flow regime and lake characteristics. We highlight the potential of combining in situ measurements with remote sensing to improve the monitoring of lake meta-ecosystems.
Inherent Optical Properties (IOPs), such as absorption and scattering, link the biogeochemical composition of water and the Apparent Optical Properties (AOPs) obtained from satellites, including remote sensing reflectance (Rrs). The so-called optical closure analysis between radiometrically-measured AOPs and simulated AOPs from measured IOPs and light-field boundary conditions is crucial for assessing and, ideally, minimizing the uncertainties associated with AOP-to-IOP inversion algorithms. However, this step is complicated due to several factors, e.g., the unknown bias and random errors in the individual measurements, limitations in the sampling of the Volume Scattering Function (VSF) and fluorescence emission, and uncontrolled environmental effects, causing uncertainties in the AOPs and water constituents retrieval.
In this study, we used in-water bio-optical data acquired by an autonomous profiler (WetLabs Thetis) several times a day, as well as Sentinel-3 OLCI radiance and reflectance products to quantify, characterize, and mitigate the uncertainty of Rrs estimates. Various bio-optical sensors, as well as a Conductivity-Temperature-Depth (CTD) probe, are mounted on this profiler. Hyperspectral downwelling irradiance and upwelling radiance (Satlantic HOCR; 189 channels between 300-1200 nm), hyperspectral absorption and attenuation (AC-S; 81 channels between 400-730 nm), backscattering at 440, 532, 630 nm at 117° (ECO Triplet BB3W), as well as backscattering at 700 nm at 117° and Chlorophyll-a fluorescence (ECO Triplet BBFL2w) measured at an offshore research platform in Lake Geneva (Switzerland/France), called LéXPLORE (https://lexplore.info/), were used to address the scientific objectives. The in situ dataset includes 294 high vertical resolution daily profiles for the period between 10/2018 and 5/2020. The quasi-concurrent Sentinel-3 data (within ±2 hr of the in situ measurements) were used to assess the performance of the proposed uncertainty characterization and mitigation. The POLYMER atmospheric correction was used to obtain Rrs. We tested two bio-optical models available in POLYMER: (i) the globally optimized model by Garver, Siegel and Maritorena (GSM01), and (ii) the model proposed by Park and Ruddick (PR05). 41 and 31 matchups are available for the GSM01 and PR05 models, respectively.
The Hydrolight (HL) radiative transfer model was employed to obtain Rrs from the measured IOP profiles. We used a combination of different metrics based on the residuals of IOP-derived and radiometrically-measured Rrs to quantify and characterize the optical closure. Using the raw IOP profiles, our closure study indicated 33% of profiles with both error and bias of < 15% (i.e., good closure), and 18% with an error or bias of > 30% (i.e., poor closure). We then investigated the effect of scattering corrections for AC-S measurements, which only slightly improved the results (38% good and 21% poor closure). Next, we evaluated a simple single-step backscattering ratio (Bp) optimization method based on Rrs residuals, which significantly improved the Rrs optical closure (99% good, and 0% poor closure). The resulting optimized Bp shows a plausible seasonal variation ranging from ~0.005 during winter to ~0.024 during the end of spring and the beginning of summer. Our study confirmed that Bp, or more generally the VSF, is the most sensitive parameter in estimating AOPs from IOPs.
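The single-step Bp optimization described above can be sketched conceptually as a one-dimensional fit of the backscattering ratio against the spectral residual; the forward model below is a placeholder standing in for the Hydrolight runs used in the study, and the bounds are illustrative:

```python
# Conceptual sketch: choose the particulate backscattering ratio (Bp) that
# minimises the RMS residual between forward-modelled and measured Rrs.
import numpy as np
from scipy.optimize import minimize_scalar

def optimize_bp(rrs_measured: np.ndarray, forward_model, bp_bounds=(0.001, 0.05)):
    """forward_model(bp) must return a modelled Rrs spectrum on the same wavelength grid."""
    def cost(bp: float) -> float:
        residual = forward_model(bp) - rrs_measured
        return float(np.sqrt(np.mean(residual ** 2)))    # spectral RMSE
    result = minimize_scalar(cost, bounds=bp_bounds, method="bounded")
    return result.x, result.fun                          # optimal Bp and its residual
```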
We further investigated the effects of uncertainty characterization (i.e., profile clustering) and uncertainty mitigation (e.g., IOPs correction) on the in situ-derived and Sentinel-3 Rrs matchup analysis. The latter showed a similar pattern to pure in situ analyses, i.e., slight enhancement using AC-S scatter-corrected profiles, and recognizable improvement implementing backscattering optimization. To avoid any overfitting by the backscattering optimization, the AC-S scatter-corrected profiles were used for investigating the effect of profiles clustering on matchup analysis. The results revealed that the uncertainty clustering based on the in situ profiler optical closure exercise can be used for Sentinel-3 matchup analysis, i.e., profiles with good closure indicated better performances based on different metrics as compared with poor closure. The satellite-derived Rrs using both PR05 and GSM01 models showed similar patterns in analyzing the effect of uncertainty characterization and mitigation with only slightly better results employing PR05.
Ultimately, we used profiles with good optical closure in other wavelengths to estimate phytoplankton fluorescence quantum yield in the emission region (670-700 nm). By relating these estimates to irradiance and pigment concentration, we managed to derive realistic diurnal estimates of non-photochemical quenching (NPQ) across the euphotic layer. In doing so, we can explain the limitations of fluorescence-based Chlorophyll-a retrieval algorithms for oligo- to mesotrophic lakes, and characterize the impact of photoinhibition on daily integrated primary production estimates.
Our results, in general, highlight the potential of using autonomous optical profiling as an alternative for automated ground-truthing of AOPs, with the added value of simultaneous IOP measurements. Further research is needed to investigate if improved VSF measurements and hence better estimates of Bp, or the consideration of full polarization in radiative transfer simulations enable improved conclusions from optical closure assessments.
Remote sensing can provide valuable information for monitoring the ecological status of inland waters. However, due to the optical complexity of lakes and rivers, quantifying water quality parameters is challenging. One approach is to use remotely-sensed reflectance to classify inland waters into discrete classes – or Optical Water Types - that correspond to different ecological states. These optical classes can then be used either to inform the selection of the most appropriate water quality retrieval algorithms or as valuable ecological indicators in their own right.
This review aimed to understand how remote sensing has been used to classify the ecological status of inland waters and which classification approaches are most effective, as well as identifying research gaps and future research opportunities. Using a systematic mapping methodology, a search of three large literature databases was conducted. The search identified an initial 174 articles, published between January 1976 and July 2021, which was reduced to 64 after screening for relevance.
Very few papers were published before 2008, but since then publications have increased substantially. The number of waterbodies included in the studies ranged from one to more than 1000, with the vast majority of studies including five or fewer waterbodies. There was a geographical bias towards Europe, the US and China, with poor representation across Africa and the rest of Asia. The spectral data used for training the classifications came overwhelmingly from satellites or in situ measurements, with relatively few studies using data from aircraft or UAVs. The most common satellite sensors used were the Landsat series, MERIS, MODIS, Sentinel-2 MSI and Sentinel-3 OLCI.
The classification frameworks used were primarily based on Optical Water Types or the Trophic State Index, but many studies adopted their own bespoke classification schemes. The number of classes varied from 2 to 21, with 3 classes being the most common. A variety of classification algorithms were utilised, including unsupervised clustering, supervised (parametric and machine learning) methods, and thresholding of spectral indices. Most studies related the optical classes to in situ water quality parameters, particularly Chlorophyll-a, Total Suspended Solids and Coloured Dissolved Organic Matter. A variety of pre-processing steps were applied prior to classification, including normalisation of spectral data and dimensionality reduction techniques such as Principal Component Analysis.
In this presentation, we summarise the strengths and limitations of different sensors, pre-processing methods and classification algorithms for the optical classification of inland waters. Our results highlight important gaps, such as the geographical bias in studies and training data. We emphasize the need for greater transparency and sensitivity analysis to understand how decisions about the choice of sensor, classification algorithm and pre-processing steps influence the resulting optical classes. Recommendations for future research are presented, including the need for standardized approaches to support transferability of methods and scaling up from local to global scales.
Freshwaters play a significant role in the global carbon cycle by degassing large carbon fluxes. It is established that most of this carbon emitted to the atmosphere comes from organic matter degradation during transport and storage in rivers and lakes. This is particularly true for freshwaters in tropical context such as Petit-Saut reservoir (365 km²) in French Guiana, with huge inputs of terrestrial organic matter (litter and drowned forest), high temperatures and humidity (both being aggravating factors of the degradation).
Knowledge about spatial distribution and temporal evolution of dissolved (and particulate) organic carbon (resp. DOC and POC) in this reservoir and its tributaries is fundamental for a better understanding of degassing mechanisms and estimation of GHG emissions. Hence, we tested the potentialities of high spatial resolution multispectral satellite imagery (Sentinel-2 and Landsat 8) for monitoring DOC concentrations in these absorbing tropical waters, using the absorption coefficient of the coloured dissolved organic matter (aCDOM) as a proxy.
Optical properties (aCDOM and above water remote sensing reflectance (Rrs)) as well as water quality measurements (DOC, POC, total suspended matter, chlorophyll-a, etc) were carried out at 25 stations evenly distributed over the entire lake. CDOM absorption was the highest at the mouth of the main tributary (Sinnamary river) and the lowest in the pelagic area, near the dam.
Simulated satellite spectra were computed by convolving the in situ hyperspectral data with the spectral response function of the given satellite sensor (Sentinel-2/MSI or Landsat 8/OLI), and were compared to atmospherically corrected satellite data. We used several atmospheric correction algorithms (ACOLITE, C2RCC, C2X, C2X-COMPLEX, GRS, iCOR, LaSRC, Sen2Cor); the resulting spectra were highly heterogeneous (depending on the method used) and poorly correlated with the in situ spectra. We explain these limited performances by environmental factors, such as the presence of absorbing aerosols or strong adjacency effects (IOCCG, 2018), which are still hardly resolved by atmospheric correction methods. According to the ACIX-Aqua exercise (Pahlevan et al., 2021), it is indeed not uncommon that atmospheric correction processors fail to retrieve realistic water reflectance in very absorbing waters surrounded by dense vegetation, which is typically the case of the Petit-Saut reservoir, located within the Amazon rainforest.
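The band simulation step described above amounts to a spectral-response-weighted average of the hyperspectral reflectance; a minimal sketch (assuming both inputs have already been interpolated onto a common wavelength grid) is:

```python
# Minimal sketch: simulate one satellite band from in situ hyperspectral Rrs by
# weighting with the sensor's spectral response function (SRF) for that band.
import numpy as np

def band_average(wavelength_nm: np.ndarray, rrs_hyper: np.ndarray, srf: np.ndarray) -> float:
    """SRF-weighted mean of the hyperspectral Rrs over one band."""
    return float(np.trapz(rrs_hyper * srf, wavelength_nm) / np.trapz(srf, wavelength_nm))
```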
We tested several semi-empirical and semi-analytical algorithms from the literature to estimate aCDOM at 440 nm (aCDOM(440)) from multispectral data. We also designed an empirical algorithm based on Sentinel-2 bands B3 to B5, as the performance of the atmospheric correction processors is reasonable in this part of the spectrum. Even though the retrieval of aCDOM in the absorbing black waters of Petit-Saut remains challenging, most of these recalibrated algorithms appear robust to variable concentrations of total suspended matter and provide satisfactory results over the entire range of aCDOM observed during our campaign.
In order to retrieve DOC concentration from remote sensing data, a linear relationship between aCDOM(440) and DOC was established; it suggests that aCDOM(440) can be used as an efficient tracer to estimate DOC in most Petit-Saut waters. However, points located in the main tributaries or their transition zones do not follow the same relationship, which is known to be water-body or river specific (Valerio et al., 2018).
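As an illustration of how such a relationship can be fitted and applied, the following sketch uses a simple least-squares line on hypothetical aCDOM(440) and DOC values; the numbers are not the Petit-Saut measurements.

```python
# Minimal sketch (hypothetical data): fitting and applying a linear
# aCDOM(440)-DOC relationship of the kind described above.
import numpy as np

acdom_440 = np.array([2.1, 3.4, 4.8, 6.0, 7.5, 9.2])   # m-1 (hypothetical)
doc = np.array([3.0, 4.1, 5.3, 6.2, 7.8, 9.0])          # mg L-1 (hypothetical)

slope, intercept = np.polyfit(acdom_440, doc, 1)         # least-squares line
r = np.corrcoef(acdom_440, doc)[0, 1]

def doc_from_acdom(a440):
    """Estimate DOC (mg L-1) from aCDOM(440) using the fitted linear model."""
    return slope * a440 + intercept

print(f"DOC = {slope:.2f} * aCDOM(440) + {intercept:.2f}  (r = {r:.2f})")
print(doc_from_acdom(5.0))
```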
To summarise, we were able to estimate DOC in these tropical black waters from simulated satellite spectra, but several challenging issues remain to be overcome before this can be done from space – atmospheric correction being the first-order source of uncertainty in aCDOM estimation in such highly absorbing environments. New measurement campaigns will be conducted i) to enrich our dataset and refine the optical properties of tributaries and transition zones, ii) to study possible seasonal or inter-annual variations of the aCDOM(440)–DOC relationship (Del Castillo, 2005) and iii) to better constrain atmospheric corrections with in situ AOT measurements.
Once the above-mentioned limitations are overcome, our next objective will be to produce time series of DOC concentrations from the Sentinel-2 and Landsat 8 archives. This will help characterise the spatial and temporal distribution of organic carbon in the area, which is useful for a better understanding of organic matter degradation processes and dynamics. Ultimately, this will benefit both public authorities and Électricité de France (the dam manager) in their management of the dam and its reservoir.
References
Del Castillo, C.E., 2005. Remote Sensing of Organic Matter in Coastal Waters, in: Miller, R.L., Del Castillo, C.E., Mckee, B.A. (Eds.), Remote Sensing of Coastal Aquatic Environments: Technologies, Techniques and Applications, Remote Sensing and Digital Image Processing. Springer Netherlands, Dordrecht, pp. 157–180. https://doi.org/10.1007/978-1-4020-3100-7_7
IOCCG (2018). Earth Observations in Support of Global Water Quality Monitoring. Greb, S., Dekker, A. and Binding, C. (eds.), IOCCG Report Series, No. 17, International Ocean Colour Coordinating Group, Dartmouth, Canada.
Pahlevan, N., Mangin, A., Balasubramanian, S.V., Smith, B., Alikas, K., Arai, K., Barbosa, C., Bélanger, S., Binding, C., Bresciani, M., Giardino, C., Gurlin, D., Fan, Y., Harmel, T., Hunter, P., Ishikaza, J., Kratzer, S., Lehmann, M.K., Ligi, M., Ma, R., Martin-Lauzer, F.-R., Olmanson, L., Oppelt, N., Pan, Y., Peters, S., Reynaud, N., Sander de Carvalho, L.A., Simis, S., Spyrakos, E., Steinmetz, F., Stelzer, K., Sterckx, S., Tormos, T., Tyler, A., Vanhellemont, Q., Warren, M., 2021. ACIX-Aqua: A global assessment of atmospheric correction methods for Landsat-8 and Sentinel-2 over lakes, rivers, and coastal waters. Remote Sensing of Environment 258, 112366. https://doi.org/10.1016/j.rse.2021.112366
Valerio, A. de M., Kampel, M., Vantrepotte, V., Ward, N.D., Sawakuchi, H.O., Less, D.F.D.S., Neu, V., Cunha, A., Richey, J., 2018. Using CDOM optical properties for estimating DOC concentrations and pCO2 in the Lower Amazon River. Optics Express 26, A657. https://doi.org/10/gd8zmb
The use of drones to monitor water quality in inland, coastal and transitional waters is relatively new. The technology can be seen as complementary to satellite and in-situ observations. While cloud cover, long revisit times or insufficient spatial resolution can introduce gaps in satellite-based water monitoring programs, airborne drones can fly under clouds at preferred times, capturing data at cm-resolution. In combination with in-situ sampling, drones provide the broader spatial context and can collect information in hard-to-reach areas.
Although drones and lightweight cameras are readily available, deriving water quality parameters is not so straightforward. It requires knowledge of the water's optical properties and the atmospheric contribution, as well as special approaches for georeferencing the drone images. Compared to land applications, the dynamic behaviour of water bodies excludes the presence of fixed reference points useful for stitching and mosaicking, and the images are sensitive to sun glint contamination. We present a cloud-based environment, MAPEO-water, to deal with the complexity of water surfaces and retrieve quantitative information on water turbidity, chlorophyll content and the presence of marine litter/marine plastics.
MAPEO-water already supports a number of camera types and allows the drone operator to upload the images to the cloud. MAPEO-water also offers a protocol for performing the drone flights, allowing efficient processing of the images from raw digital numbers into physically meaningful values. Processing of the drone images includes direct georeferencing, radiometric calibration and removal of the atmospheric contribution. The final water quality parameters can be downloaded through the same cloud platform. Water turbidity and chlorophyll retrieval are based on spectral approaches utilising information in the visible and near-infrared wavelength ranges. Marine litter detection combines spectral approaches and Artificial Intelligence. Drone data are thus complementary to both satellite and in-situ data, and showcases including satellite, drone and in-situ observations will demonstrate the complementarity of all three techniques.
WQeMS is a consortium of 11 partners spread all over Europe: Centre for Ecological Research and Forestry Applications (CREAF) (Spain), EOMAP GmbH & Co KG (EOMAP) (Germany), Cetaqua, Centro Tecnológico del Agua, Fundación Privada (CETAQUA) (Spain), Autorità di Bacino Distrettuale delle Alpi Orientali (AAWA) (Italy), Serco Italia SpA (SERCO) (Italy), Thessaloniki Water Supply and Sewerage Company SA (EYATH SA) (Greece), Engineering - Ingegneria Informatica S.p.A (ENG) (Italy), Finnish Environment Institute (SYKE) (Finland), Phoebe Research and Innovation Ltd (PHOEBE) (Cyprus), and Empresa Municipal de Agua y Saneamiento de Murcia, S.A (EMUASA) (Spain). These organisations cooperate to offer cutting-edge EO technology through the ‘Copernicus Assisted Lake Water Quality Emergency Monitoring Service’ (WQeMS) Research and Innovation Action H2020 project. WQeMS aims to provide an open surface Water Quality Emergency Monitoring Service (https://wqems.eu/) to the water utilities industry, leveraging Copernicus products and services. The target is to optimise the use of resources by gaining access to frequently acquired, wide-covering and locally accurate water-status information. Citizens will gain deeper insight into, and confidence in, selected key quality elements of the ‘water we drink’, while enjoying a friendlier environmental footprint.
There are four services offered by WQeMS. Two services are related to slowly developing phenomena, such as the geogenic or anthropogenic release of potentially polluting elements through the bedrock, or the leaching of pollutants into the underground aquifer through human activities. The other two services are related to fast-developing phenomena, such as floods spilling debris and mud, chemical/oil spills, or algal blooms and the potential release of toxins by cyanobacteria at short time intervals, bringing sanitation utilities to the edge of their performance capacity. Furthermore, an alerting module is being developed that will deliver alerts about incidents derived from the WQeMS and Twitter data harvesting module, while a set of training activities will allow users and end-users to gain insight into and familiarity with the Copernicus Services and the ability to understand and use the functionality of the WQeMS.
The WQeMS system will enable the optimisation of the use of resources by providing access to frequently acquired, wide-covering and locally accurate water-status information. WQeMS will generate knowledge that shall support existing decision support systems (DSSs) in a syntactically and semantically interoperable manner. A wide set of parameters will be provided that are useful for the quality assessment of raw drinking water, as captured by existing and emerging requirements of the water utilities industry. The service will be based on a modular architecture composed of three main layers: frontend, middleware and backend. In addition, it will promote further alignment of existing decision support and implementation chains with the updated Drinking Water and Water Framework Directives.
WQeMS relies on the Copernicus Data and Information Access Services (e.g. the ONDA DIAS) for data provision, while also aiming at connections with further exploitation platforms. The overall system will be hosted on the ONDA DIAS cloud infrastructure and will be designed as a microservice, container-based architecture. The cloud nature of the platform, together with the microservice architecture of the solution, provides multiple benefits for the critical objectives of the project, such as (i) data availability, (ii) fault tolerance, (iii) data interoperability and (iv) scalability. The decision to host WQeMS on ONDA allows location-based applications to take advantage of proximity to the data.
The objective is to generate an outcome at the end of the project that best suits the interests of users and citizens, while also enabling compatibility, synergy and complementarity with existing infrastructure and services. The main ambition is to receive approval by the Member States to be embedded in the existing Copernicus Services portfolio. Activities and results are expected to contribute to Europe's endeavours towards GEO and its priorities in the framework of the UN 2030 Agenda for Sustainable Development, the Paris Climate Agreement and the Sendai Framework for Disaster Risk Reduction. WQeMS components, structure and progress are presented and discussed.
This project has received funding from the European Union’s Horizon 2020 Research and Innovation Action program under Grant Agreement No 101004157
Dissolved organic matter (DOM) is important for the functioning of aquatic ecosystems and can be used as a representation of a lake's metabolome. DOM can enter an aquatic system via the runoff of rainfall (or melting tundra) over the ecosystem's watershed or from in-water algal or microbial production. The optically detectable fraction of DOM – the coloured dissolved organic matter (CDOM) – is often used as a proxy for dissolved organic matter. CDOM absorbs radiation in the ultraviolet and visible regions of the spectrum and can be identified from satellite imagery. It originates from the degradation of plant materials and other organisms or from terrestrially imported substances. The source of CDOM is important information for understanding environmentally driven dynamics in aquatic systems. Fluorescence spectroscopic techniques, such as the excitation–emission matrix (EEM) and parallel factor analysis (PARAFAC), have been used to distinguish between allochthonous (humic-like) and autochthonous (protein-like) sources.

Here, we assess the relationship between these fluorescent components and optical properties such as remote sensing reflectance and inherent optical properties (IOPs) in 19 lakes located within the Mecklenburg–Brandenburg Lake District in the North German Lowland. These lakes differ in size, shape, depth, trophic state and biogeochemical characteristics. Most lakes are connected in series by rivers and natural or man-made channels. Water samples from these lakes were analysed for absorbance and fluorescence using spectrophotometers. These samples were also used for the computation of the absorption coefficients of phytoplankton, CDOM and non-algal particles, which formed the IOP dataset. Remote sensing reflectance was calculated from radiometric measurements at the water surface using two handheld spectroradiometers (ASD, JETI).

We started by calculating 2 PARAFAC components, which yielded a high correlation between the two (Spearman's rs of 0.88), indicating that it is difficult to differentiate these two components. Calculating 4 PARAFAC components, we observed a high correlation between components 1 and 2 (Spearman's rs of 0.98) and between components 1 and 3 (Spearman's rs of 0.87). Component 4 was the least correlated, with Spearman's rs of 0.23, 0.25 and 0.09 with components 1, 2 and 3, respectively. This indicates that it is possible to differentiate components 1, 2 and 3 from component 4. The 2-dimensional correlation plot of the remote sensing reflectance with each component showed that for components 1, 2 and 3 the reflectance ratio at 620 nm/590 nm was the most appropriate, while for component 4 the most appropriate reflectance ratio was at 825 nm/665 nm. In relation to the IOPs, the correspondence analysis showed that components 1, 2 and 3 are related to the absorption coefficients of CDOM and non-algal particles, while component 4 is related to the absorption coefficients of CDOM and phytoplankton. These results indicate that components 1, 2 and 3 are related to allochthonous CDOM, while component 4 seems to be related to autochthonous CDOM. Additionally, this shows the potential of remote sensing for the identification of CDOM sources, which can help to understand aquatic ecosystem dynamics under environmental change.
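For illustration, a minimal sketch of the kind of band-ratio versus PARAFAC-component comparison described above is given below; the reflectance values and component scores are synthetic, and only the ratios (620/590 nm and 825/665 nm) follow the text.

```python
# Minimal sketch (synthetic data): relating PARAFAC fluorescence components
# to reflectance band ratios via Spearman rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 19                                   # e.g. one sample per lake (hypothetical)
rrs = {wl: rng.uniform(0.001, 0.01, n) for wl in (590, 620, 665, 825)}
component_1 = 50 * rrs[620] / rrs[590] + rng.normal(0, 0.2, n)   # humic-like score
component_4 = 30 * rrs[825] / rrs[665] + rng.normal(0, 0.2, n)   # protein-like score

ratio_620_590 = rrs[620] / rrs[590]
ratio_825_665 = rrs[825] / rrs[665]

rho1, _ = spearmanr(component_1, ratio_620_590)
rho4, _ = spearmanr(component_4, ratio_825_665)
print(f"C1 vs 620/590 nm: rs = {rho1:.2f}")
print(f"C4 vs 825/665 nm: rs = {rho4:.2f}")
```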
MAPAQUALI - Customizable modular platform for continuous remote sensing monitoring of aquatic systems
Water is a vital resource, not only for maintaining life on Earth but also for supporting economic development and social well-being, as the sustainable growth of all nations depends upon water availability. Approximately 12% of the planet's surface fresh water available for use circulates through Brazilian territory. Due to this water availability, Brazil has an extensive number of large artificial and natural aquatic ecosystems. This places Brazil in a privileged position, but it also poses a great challenge for the sustainable use and monitoring of these natural resources. For instance, the nutrient inflows to lakes and hydroelectric reservoirs from irrigated agriculture and sewage from nearby cities contribute significantly to the eutrophication process and the systematic occurrence of cyanobacterial blooms. These blooms can be harmful and produce toxins that lead to a series of public health problems. Even when not harmful, they impair fisheries and the recreational use of those water bodies. These environmental impacts on aquatic ecosystems need to be determined and monitored, mainly in reservoirs, as energy sources, besides being renewable, must also be clean. This study summarizes the integrated effort of specialists in hydrological optics, aquatic remote sensing and computer science to build a customizable modular platform named MAPAQUALI. The platform allows continuous monitoring of aquatic ecosystems based on satellite remote sensing and the integration of bio-optical models derived from in-situ measurements. For the aquatic ecosystems for which it is customized, the platform will generate and make available spatiotemporal information on water quality parameters: chlorophyll-a, cyanobacteria, total suspended solids, Secchi disk depth, the diffuse attenuation coefficient (Kd), and bloom event alerts (especially for cyanobacteria).
The MAPAQUALI platform comprises the following modules: Data Pre-processing; Bio-optical Algorithms; Query and View WEB.
The Data Pre-processing Module (DPM) generates and catalogs Analysis Ready Data (ARD) [10.1109/IGARSS.2019.8899846] collections, which are the input data for the Bio-optical Algorithms Module (BAM) for water quality product generation. The DPM has data acquisition, processing and cataloging functionalities, and its structure is flexible enough to accommodate new processing tasks or even new functionalities. The following processing tasks are available in the current implementation of MAPAQUALI: query and image acquisition from data providers (Google Cloud Platform or Brazil Data Cube Platform [10.3390/rs12244033]); atmospheric correction with the 6SV model; water body identification and extraction; cloud and shadow masking; and sunglint and adjacency corrections. The BAM comprises algorithms parameterized, calibrated and validated using Brazilian inland water in situ bio-optical datasets (LabISA – INPE) and OLI, MSI and OLCI simulated spectral bands. Algorithms were parameterized for the OLCI sensor only for aquatic systems of suitable size, such as large lakes in the Amazon floodplain. In addition, to ensure the best possible accuracy, we developed semi-analytical [10.1016/j.isprsjprs.2020.10.009; 10.3390/rs12172828], hybrid [10.3390/rs12010040], machine learning [10.1016/j.isprsjprs.2021.10.009] and empirical [10.3390/rs13152874] algorithms, using in situ data representative of the full range of variability of the apparent and inherent optical properties. These algorithms achieve accurate results: for example, the hybrid algorithms for Chl-a have an error of 20% (MAPE = 20%), the machine learning algorithms for estimating water transparency presented errors of approximately 25%, and the Kd algorithm for an oligotrophic reservoir resulted in errors of 20%. The Query and View WEB module is a web portal providing resources for searching the aquatic systems integrated into the platform; it returns the products available for each of them.
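As an illustration of the error metric quoted above, a minimal MAPE implementation is sketched below on hypothetical chlorophyll-a values (not the MAPAQUALI validation data).

```python
# Minimal sketch: mean absolute percentage error (MAPE), the accuracy metric
# reported above (e.g. MAPE = 20% for Chl-a). Inputs are illustrative arrays.
import numpy as np

def mape(measured, estimated):
    """Mean absolute percentage error, in percent."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return 100.0 * np.mean(np.abs((estimated - measured) / measured))

chla_in_situ = [5.2, 12.0, 30.5, 48.1]     # mg m-3 (hypothetical)
chla_model = [4.6, 13.1, 27.9, 55.0]       # mg m-3 (hypothetical)
print(f"MAPE = {mape(chla_in_situ, chla_model):.1f}%")
```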
Tools for viewing and analyzing time series are available in this module. Users registered on the platform can freely download products and images from the ARD archive. Additionally, users can consume any available data through our geoservice-enabled web database, for example via the Web Map Service (WMS) or Web Feature Service (WFS).
For the integrated application of the DPM and BAM processing tasks, we use the process orchestration infrastructure available on the Brazil Data Cube Platform [10.3390/rs12244033]. In this way, the MAPAQUALI platform can perform all operations periodically, which allows continuous monitoring of the aquatic systems under consideration. At the end of each execution, the newly generated data products are cataloged and made available for consultation in monitoring activities.
In our ongoing efforts, we are customizing water quality bio-optical algorithms for four aquatic ecosystems: two multi-user reservoirs, a set of lower Amazon floodplain lakes, and one nearshore coastal water. As the platform is modular and customizable, other aquatic ecosystems can easily be added to it.
Validating water quality model applications is challenging due to data gaps in in-situ observations, especially in developing regions. To address this challenge, remote sensing (RS) provides an alternative for monitoring the water quality of inland waters due to its low cost, spatial continuity and temporal consistency. However, few studies have exploited the option of validating water quality model outputs with RS water quality data. With sediment loadings regarded as a threat to the turbidity and trophic status of Lake Tana in Ethiopia, this study aims at using existing RS lake turbidity data to validate the seasonal and long-term trends of sediment loadings in and out of Lake Tana. A hydrologically calibrated SWAT+ model is used to simulate river discharge and sediment loadings flowing in and out of the Lake Tana basin. Together with a remote sensing dataset of lake turbidity from the Copernicus Global Land Service (CGLS), seasonal and long-term correlations between lake turbidity and sediment loadings at the river mouths of Lake Tana are estimated.
Results indicate a strong positive correlation between sediment load from inflow and outflow rivers and RS lake turbidity (r2 > 0.7). Another strong positive relation was observed between the streamflow from inflow rivers and the lake turbidity (r2 > 0.5). These indicate that river streamflow accounted for significant responses in river sediment loads and lake turbidity, which likely resulted from a combination of overland transport of sediment into streams due to erosion of the landscape, scouring of streambanks, and resuspension of sediment from channel beds. We conclude that RS water quality products can potentially be used for validating seasonal and long-term trends in simulated SWAT+ water quality outputs, especially in data-scarce regions.
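A minimal sketch of the kind of correlation analysis described above is given below; the monthly sediment-load and turbidity series are synthetic placeholders, not the SWAT+ or CGLS data.

```python
# Minimal sketch (synthetic data): correlating a monthly simulated sediment-load
# series at a river mouth with a remote-sensing lake turbidity series.
import numpy as np
import pandas as pd

months = pd.date_range("2018-01-01", periods=36, freq="MS")
rng = np.random.default_rng(1)
sediment_load = 100 + 80 * np.sin(2 * np.pi * months.month / 12) + rng.normal(0, 10, 36)
lake_turbidity = 5 + 3.5 * np.sin(2 * np.pi * months.month / 12) + rng.normal(0, 1, 36)

df = pd.DataFrame({"sediment_load": sediment_load,
                   "turbidity": lake_turbidity}, index=months)

r = df["sediment_load"].corr(df["turbidity"])            # Pearson r on the full series
print(f"r = {r:.2f}, r2 = {r**2:.2f}")

# Seasonal view: correlation of the monthly climatologies
clim = df.groupby(df.index.month).mean()
print(clim.corr())
```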
Satellite retrieval and validation of bio-optical water quality products in Ramganga river, India
Veloisa Mascarenhas1*, Peter Hunter1, Matthew Blake1, Dipro Sarkar2, Rajiv Sinha2, Claire Miller3, Marion Scott3, Craig Wilkie3, Surajit Ray3, Andrew Tyler1
* veloisa.mascarenhas@stir.ac.uk
1University of Stirling, UK
2Indian Institute of Technology Kanpur, India
3University of Glasgow, UK
In addition to water resources, inland waters provide diverse habitats and ecosystem services. They are threatened, however, by unregulated anthropogenic activities, and so effective management and monitoring of these vital systems has gained increasing attention over recent years. Because inland waters are optically complex, their remote sensing continues to face challenges in the retrieval of physical and biogeochemical properties. We present here the retrieval and assessment of satellite-derived L2 bio-optical water quality products from Sentinel-2 and Planet satellites for a highly turbid river system. Bio-optical water quality products, including remote sensing reflectance, total suspended matter and chlorophyll-a (Chl-a) concentrations, are validated using in situ observations along the river Ramganga in India. The Ramganga has a large (22,644 km2), diverse catchment with intensive agriculture, extensive industrial development and a rapidly growing population. The over-abstraction of both surface and groundwater, and pollution due to industrial and domestic waste, mean the Ramganga presents an ideal case study to demonstrate the value of satellite data for monitoring water quality in a highly impacted river system. For the case study, five different atmospheric correction methods are tested in processing the Level 1 Sentinel-2 imagery, together with a set of biogeochemical algorithms to estimate bio-optical products. Additional bio-optical products such as turbidity are estimated from satellite-derived remote sensing reflectance to be matched with in situ turbidity observations. The Sentinel dataset is supplemented with high-resolution (3-5 m) imagery from the commercial Planet satellites, processed using the ACOLITE atmospheric correction method. The river transect is characterised by high variability in optically active constituents and remote sensing reflectance. Around the Moradabad area, in situ measured turbidity values peak during the month of July, while Chl-a concentrations are observed to be highest in early May.
The quantum yield of fluorescence (ϕ_F) represents the small fraction of absorbed photons in phytoplankton that is converted to sun-induced fluorescence (SIF). This fraction is typically up to 2% in optically complex waters. All other absorbed photons are either used for photochemistry in the reaction centers or dissipated as heat. When fluorescence is reduced from a maximum level due to an increase in open reaction centers, Photochemical Quenching (PQ) occurs. Other forms of fluorescence reduction lead to increased thermal dissipation and are referred to as Non-Photochemical Quenching (NPQ). In cases where NPQ is minimal, ϕ_F and SIF increase with higher irradiance. However, when NPQ is present due to photo-inhibition or protective measures employed by the phytoplankton, SIF may still increase with irradiance while ϕ_F decreases. Consequently, NPQ conditions also lead to a lower quantum yield of photosynthesis.
Knowing ϕ_F is key to understanding SIF emission in phytoplankton, as it enables us to interpret the dynamics of SIF in relation to PQ or NPQ. Disentangling PQ from NPQ allows us to use SIF estimates in various applications in aquatic optics and remote sensing, such as the accurate estimation of chlorophyll-a concentration (chl-a) or the modelling of primary productivity. These are essential to assess the water quality status of surface waters and to understand the dynamics of aquatic ecosystems. Retrieving and interpreting SIF is becoming increasingly feasible with the growing availability of in-situ, airborne and spaceborne hyperspectral sensors. However, obtaining ϕ_F is challenging because of the prior data necessary for the calculations, especially in inland waters.
Using the autonomous Thetis profiler from the LéXPLORE platform in Lake Geneva, we demonstrate a novel way of estimating ϕ_F based on an ensemble of in-situ profiles of Inherent Optical Properties (IOPs) and Apparent Optical Properties (AOPs) taken between October 2018 and August 2021. In particular, we exploited the profiler's hyperspectral radiometers to obtain upwelling radiances and downwelling irradiances in the top 50 m of the water column. These AOPs were the main basis of our SIF retrieval, representing natural variations in fluorescence emission under different bio-geophysical conditions. We further used hyperspectral absorption and attenuation measurements, together with backscattering measurements at discrete wavelengths, to obtain the water's IOPs. These IOPs were used in radiative transfer simulations assuming ϕ_F = 0 to obtain a second set of AOPs without fluorescence contributions. Measured and simulated reflectances outside the fluorescence emission region that satisfied the optical closure analysis were kept for the succeeding steps. By combining the difference between these measured and simulated AOPs with known chlorophyll-a concentrations and IOPs, we obtained estimates of ϕ_F.
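The following highly simplified sketch only illustrates the general idea of deriving ϕ_F from the mismatch between measured and fluorescence-free simulated upwelling radiance; the spectra, the isotropy assumption and the absorbed-flux value are hypothetical placeholders and do not reproduce the actual retrieval.

```python
# Highly simplified, energy-based sketch (not the authors' retrieval): estimate
# phi_F from the residual between a measured upwelling radiance spectrum and a
# radiative-transfer simulation run with phi_F = 0. All numbers are placeholders;
# a proper retrieval would work in quantum units and use the full IOP/AOP set.
import numpy as np

wl = np.arange(650, 751, 1.0)                                     # nm, around the SIF peak
measured_Lu = 0.20 + 0.01 * np.exp(-0.5 * ((wl - 683) / 10) ** 2)  # W m-2 sr-1 nm-1
simulated_Lu_no_fluo = np.full_like(wl, 0.20)                      # elastic-only simulation

sif = np.clip(measured_Lu - simulated_Lu_no_fluo, 0, None)         # fluorescence residual
emitted = 4 * np.pi * np.trapz(sif, wl)                            # isotropic emission assumption

absorbed_by_phytoplankton = 200.0   # W m-2, hypothetical (would come from IOPs and Ed)
phi_F = emitted / absorbed_by_phytoplankton
print(f"phi_F ~ {phi_F:.2%}")
```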
We analysed the obtained ϕ_F values to determine the conditions under which NPQ occurs, and evaluated the vertical and temporal changes in ϕ_F. We observed diurnal changes in NPQ occurrence, particularly during clear-sky conditions when downwelling irradiance changes significantly throughout the day. For instance, we observed that ϕ_F can be up to 65% lower when NPQ is activated compared to PQ-stimulated conditions. While downwelling irradiance is a significant contributor to changes in ϕ_F, its role is sometimes not easily interpreted because the threshold of radiant flux at which NPQ is activated in inland waters is not consistent. Other factors such as phytoplankton photo-adaptation and the composition of different phytoplankton communities also play significant roles in understanding phytoplankton response to incident light and, therefore, quenching mechanisms. Our results contribute insights into the nature of SIF and can facilitate efforts to assimilate SIF and ϕ_F estimates into remote sensing algorithms, which would aid us in monitoring not only phytoplankton biomass but also the eco-physiological state of phytoplankton cells.
Algal blooms are among the factors with the greatest impact on the quality, functioning and ecosystem services of waterbodies, and they frequently occur in coastal regions (O'Neil et al., 2012). The observed increase in cyanobacterial blooms in European seas is attributed to severe eutrophication and a subsequent change in nutrient balance caused by anthropogenic nutrient enrichment, in particular from urban areas, agriculture and industry (Kahru et al., 2007; Vigouroux et al., 2021). The EU Marine Strategy Framework Directive (MSFD) is the main initiative to protect the seas of Europe and requires human-induced eutrophication to be minimized (MSFD, 2008). The majority of indicators developed under MSFD Descriptor 5 (Eutrophication) are based on in situ monitoring data, and only recently have Earth Observation (EO) data started to be proposed as a valuable source of information for monitoring, ecological status assessment and indicator development (Tyler et al., 2016). Recently, HELCOM proposed a pre-core indicator, the Cyanobacteria Bloom Index (CyaBI), that evaluates cyanobacterial surface accumulations and cyanobacteria biomass, describes the symptoms of eutrophication caused by nutrient enrichment, and is based exclusively on EO satellite data (Antilla et al., 2018). The indicator was developed using the Baltic Sea as a testing site and is focused on the open sea areas (HELCOM, 2018). However, anthropogenic pressures, unbalanced and intensive land use, and climate change increasingly affect coastal and transitional waters, which represent the water continuum from inland waters towards the sea. These regions are more exposed to ongoing eutrophication, and severe cyanobacteria blooms are evident there (Vigouroux et al., 2021). Therefore, the aim of this study is to test the applicability of the pre-core indicator CyaBI for the coastal and transitional waters of two enclosed seas located at different latitudes: the Baltic and the Black Sea. We also hypothesize that intensive cyanobacteria blooms significantly alter the short-term environmental conditions of the seas in terms of Sea Surface Temperature (SST) changes.
The Baltic and the Black Sea are the world’s largest brackish water ecosystems, which exhibit many striking similarities as geologically young post-glacial water bodies, semi-isolated from the ocean by physical barriers. Both Seas are exposed to similar anthropogenic pressures, such as increasing urbanization, water pollution by heavy industries, intense agriculture, overexploitation of fish stocks, abundant sea traffic and port activities, oil spills, etc. In both seas, increasing attention is being paid to the search for scientifically based solutions to improve the state of the marine environment.
In our study, we have used time series from the Medium Resolution Imaging Spectrometer (MERIS) on board Envisat at 300 m and from the Ocean and Land Colour Instrument (OLCI) on board Sentinel-3 at 300 m spatial resolution for the estimation of chlorophyll-a (Chl-a) concentration. Chl-a concentration was retrieved after application of the FUB processor, which was developed by the German Institute for Coastal Research (GKSS), Brockmann Consult and Freie Universität Berlin, and is designed for European coastal waters. In the case of MERIS images, the FUB processor uses Level 1b top-of-atmosphere radiances to retrieve the concentrations of the optical water constituents. A good agreement (R2=0.69, RMSE=14.44, N=56) was found between Chl-a derived from MERIS images after application of the FUB processor and in situ measured Chl-a concentration during validation in the coastal waters of the Lithuanian Baltic Sea (more details in Vaičiūtė et al., 2012). Although the FUB processor was originally designed for MERIS images, we have also tested its performance on OLCI images: Chl-a concentration derived from OLCI data after FUB processor application and measured in situ were in agreement with R2=0.72, RMSE=4.2, N=31. The CyaBI index was calculated following the methodology described in Antilla et al. (2018). In this study, we used Terra/Aqua MODIS standard Level 2 SST products with a spatial resolution of around 1 km, obtained from the NASA OceanColor website, to analyse the spatial patterns and changes in SST in the presence of cyanobacteria surface accumulations.
In this presentation, we will demonstrate the first results of ecological status assessment using the pre-core CyaBI indicator in the Lithuanian Baltic and Ukrainian Black Seas. We will discuss the potential of using CyaBI for ecological status assessment in coastal and transitional waters, and for seas located at different latitudes. We will also provide significant insights into the integration of SST data for ecological status assessment in the context of Descriptor 5 of the MSFD and the Water Framework Directive.
The research was funded by the Lithuanian-Ukrainian bilateral cooperation in the field of science and technology under project "Measuring the marine ecosystem health: concepts, indicators, assessments – MARSTAT (contract no. S-LU-20-1)".
Monitoring is an integral precondition for determining lakes' ecological status and developing solutions to restore lakes that have deteriorated from reference conditions. Spatial and temporal limitations of conventional in situ monitoring impede adequate evaluation of lakes' ecological status, especially when dealing with large-scale measurements. Sentinel-2 (S2), a constellation of two twin satellites, S2-A and S2-B, with the MultiSpectral Instrument (MSI) on board, can complement in situ data. S2 MSI imagery makes the investigation of even small water bodies possible due to its high spatial resolution of 10, 20 and 60 metres, depending on the spectral band. Besides, the S2 spectral resolution allows estimation of a wide range of water quality parameters such as chlorophyll-a (chl-a), water colour, coloured dissolved organic matter (CDOM), etc. However, using remote sensing data for water quality assessment over small inland waters might be obstructed by the adjacency effect (AE). AE is especially strong in small, narrow or complex-shaped water bodies surrounded by dense vegetation and decreases further offshore. Therefore, the largest possible homogeneous water area surrounding the sampling point increases the possibility of obtaining an accurate signal from the water's surface, the water-leaving reflectance ρω(λ). Moreover, the combination of chl-a, CDOM and TSM concentrations also affects the probability and accuracy of ρω(λ) retrieval and must be considered.
The test sites of this study are optically complex lakes of Northern Europe with a high and varying amount of optically active substances. A dataset of 476 in situ measurements of water properties from 44 lakes was used. Measured concentrations of chl-a ranged between 2 and 100 mg/m3, total suspended matter (TSM) between 0.6 and 48 mg/m3, and aCDOM(442) between 0.5 and 48 m-1. Water-leaving reflectance ρω(λ) was measured by deploying above-water RAMSES TriOS radiometers.
The aim of this study was to evaluate the capabilities and limitations of the S2 MSI data after atmospheric correction by POLYMER 4.12 and C2RCC v1.5 processors. The results were analysed together with lakes’ area and shape complexity (shape index, SI) and the signal strength as determined by the concentration of chl-a, TSM and aCDOM.
The objectives of the study were:
1. Validate and analyze POLYMER and C2RCC-derived ρω(λ) against in situ measurements using match-up analysis for exact location (1 x 1), 3 x 3 and 5 x 5-pixel size region of interest (ROI).
2. Evaluate the spatial distribution and homogeneity of POLYMER and C2RCC quality flags and water quality products. Based on that, derive area and SI thresholds for the lakes that can be monitored with S2 (20 m spatial resolution).
3. Evaluate the spatial and temporal distribution of the failures in POLYMER and C2RCC atmospheric correction and in the resulting water quality maps. Analyze its impact on the derived ecological status class in optically different lakes.
The validation of the POLYMER ρω(λ) product against in situ measurements resulted in slightly better accuracy than for the C2RCC product. For the bands at 560 nm, 665 nm and 705 nm, crucial for deriving chl-a over optically complex waters, POLYMER showed a weak correlation (R2 = 0.41, 0.12, 0.36) for the 1 x 1 area; however, R2 for the 3 x 3 region was higher, equalling 0.63, 0.48 and 0.58, respectively. Noticeably, when enlarging the ROI to a 5 x 5 pixel grid, R2 decreased to 0.36, 0.33 and 0.31 for 560 nm, 665 nm and 705 nm, respectively, which indicates non-homogeneity in the pixel distribution. Moreover, a 5 x 5 ROI covers an area of 10000 m2, which might be too large to compare with a field measurement from only one point. The coefficient of determination for C2RCC data increased with the enlargement of the ROI to a 3 x 3 and 5 x 5 pixel area, similarly to POLYMER, but not as noticeably. Specifically, for the exact location (1 x 1 ROI), R2 equalled 0.45, 0.35 and 0.40 at the 560 nm, 665 nm and 705 nm wavebands, whereas for the 3 x 3 and 5 x 5 pixel areas it equalled 0.48, 0.41, 0.40 and 0.50, 0.38, 0.40, respectively.
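A minimal sketch of such a match-up comparison for different ROI sizes is given below, using a synthetic raster and station values; a real analysis would additionally apply the processors' quality flags before averaging.

```python
# Minimal sketch (synthetic data): extract 1x1, 3x3 and 5x5 pixel regions of
# interest around station pixels from a water-leaving reflectance raster and
# compute R2 against in situ values.
import numpy as np

def roi_mean(raster, row, col, size):
    """Mean of a size x size window centred on (row, col), ignoring NaNs."""
    half = size // 2
    window = raster[row - half: row + half + 1, col - half: col + half + 1]
    return np.nanmean(window)

def r_squared(x, y):
    x, y = np.asarray(x), np.asarray(y)
    return np.corrcoef(x, y)[0, 1] ** 2

rng = np.random.default_rng(2)
raster_560 = rng.uniform(0.005, 0.02, (100, 100))   # rho_w(560) raster, hypothetical
stations = [(20, 30), (50, 52), (75, 10)]            # station pixel coordinates
in_situ_560 = [0.012, 0.016, 0.008]                  # in situ rho_w(560), hypothetical

for size in (1, 3, 5):
    sat = [roi_mean(raster_560, r, c, size) for r, c in stations]
    print(f"{size} x {size} ROI: R2 = {r_squared(in_situ_560, sat):.2f}")
```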
POLYMER quality flags of the S2 imagery sensed in spring, summer and autumn over a group of 1727 lakes, predominantly located in Southern Estonia, were analysed. In spring, most of the water bodies under 1 ha did not have valid quality flags. Besides, more complex-shaped water bodies (SI > 2) with no valid quality flags were even larger (up to 6 ha). It was shown that the number of valid quality-flagged pixels usable to produce water quality maps decreases towards autumn.
Spatial and seasonal evaluation of chl-a was conducted in the optically and geometrically different lakes. Failures in the POLYMER atmospheric correction resulting in abnormal chl-a values were mostly due to the combined effect of the optical properties of the water bodies and the adjacency effect, which is strongest over clear waters surrounded by forest. This resulted in very few valid pixels and high spatial heterogeneity over clear-water lakes, whereas over eutrophic waters there were more quality-controlled satellite retrievals with improved spatial patterns of chl-a. It was shown that S2 MSI is a promising tool for studying water bodies, but the adjacency to the shore and the level of optically active substances must be considered.
Remote sensing-based products are widely used for scientific research and the synoptic monitoring of water resources. The use of satellite-based products provides a less costly and time-consuming alternative to traditional in-situ measurements. The conservation of water resources poses a challenge on multiple levels, involving local institutions, authorities and communities. Therefore, the monitoring of water resources, in addition to providing scientific output, should also devote its efforts to the publication and sharing of results. Communication, coordination and data publishing are thus essential for preserving water ecosystems.
This work presents the design and implementation of two components of the IT infrastructure for supporting the monitoring of lake water resources in the Insubric area for SIMILE ("Integrated monitoring system for knowledge, protection and valorisation of the subalpine lakes and their ecosystems"; Brovelli et al., 2019) Italy-Switzerland Interreg project. SIMILE monitoring system benefits from various geospatial data sources such as remote sensing, in-situ high-frequency sensors, and citizen science. The infrastructure uses and benefits from Free and Open-Source Software (FOSS), open data and open standards, facilitating the possibility of reuse for other applications.
The designed applications aim at enhancing the decision-making process by providing access to the remote sensing-based lake water quality parameter (WQP) maps produced under the project for Lakes Maggiore, Como and Lugano. The satellite monitoring system for SIMILE estimates different water quality parameters using optical sensors. The analysed WQP maps include the concentration of chlorophyll-a (CHL-a), total suspended matter (TSM) and lake surface water temperature (LSWT). Each product is delivered with a specific spatial and temporal resolution depending on the sensor used for the monitored parameter. The WQP map production frequency is affected by factors such as the revisit time of the sensor over the study area and the cloud coverage. CHL-a and TSM are monitored with the ESA Sentinel-3A/B OLCI (Ocean and Land Colour Instrument), whose spectral bands cover the visible and infrared portions of the spectrum. LSWT is monitored using the NASA Landsat 8 TIRS (Thermal Infrared Sensor). Sentinel-3A/B offers a daily revisit time over the study area with a resolution of 300 m, which, on average, allows for the production of CHL-a and TSM maps weekly. The Landsat 8 satellite provides a higher spatial resolution of 30 m but with a revisit time of 16 days, which, on average, allows for the production of LSWT maps monthly.
The archiving and sharing of the WQP maps are of interest to the SIMILE project. In particular, the project promotes the publication of the data as time series to monitor the evolution of the different WQP maps. WQP maps can support the assessment of various processes taking place inside aquatic ecosystems, for example the eutrophication level in a water body from CHL-a. Sediment concentration, which can be deduced from TSM maps, can influence the penetration of light, ecological productivity and habitat quality, and can harm aquatic life. LSWT maps allow exploring lake dynamics processes such as sedimentation, the concentration of nutrients and the presence of aquatic life, but also the temporal variability of temperature due to climate change (Lieberherr et al., 2018).
Two web applications have been designed with the aim of simplifying the data-sharing process and allowing interactive visualization of the WQP maps. The first one is built on GeoNode and is used to upload, edit, manage and publish the WQP maps; GeoNode is an open-source geospatial content management system that eases data-sharing procedures. The second one is a WebGIS application that aims at providing a user-friendly environment to explore the different WQP maps. The WebGIS benefits from OGC standards, such as the Web Map Service (WMS), to retrieve and display the maps published on the GeoNode application. The publication of the datasets through OGC standards is possible thanks to the GeoServer instance working on the back end of the GeoNode project.
The goal of the SIMILE WebGIS is to favour the visualization and querying of lake WQPs as time series. For this reason, the raster data format support available in the data-sharing platform was exploited. Indeed, GeoNode permits the upload of raster data in GeoTIFF format, taking advantage of the data storage system implemented by GeoServer. Note that GeoServer provides additional multidimensional raster data support (such as image mosaics and NetCDF), which enables the storage of collections of datasets with a time attribute. Nonetheless, GeoNode does not support these multidimensional raster data formats, and using them would imply direct interaction with the remote server hosting GeoServer. Such interaction represents a barrier to the data-sharing workflow (due to the additional file transfer protocols needed to send the data to the server). The GeoTIFF format does not provide a time attribute. In order to overcome this limitation and allow the management of time series, a naming convention has been introduced and the timestamp is provided in the layer name. Next, by matching layer typologies and extracting unique date values, groups of layers were built, implemented as collections of "Layer" objects within OpenLayers layer groups. Thus, time series visualization of the WQPs in the WebGIS was possible while maintaining GeoNode as a suitable tool for the publication of raster data.
Therefore, the WQP maps are provided with a naming convention which describes the sensor used for the acquisition, the product typology, the coordinate reference system of the map, and the timestamp of the image acquisition, in order to facilitate the integration of the maps into the database and the metadata compilation. An example of the naming convention is “S3A_CHL_IT_20190415T093540”. Here, the file name contains information corresponding to the sensor involved in the acquisition of the imagery (“S3A”, ESA Sentinel-3A OLCI), the product typology (“CHL”, chlorophyll-a), the coordinate reference system (“IT”, WGS84 – UTM 32N), and the timestamp of the imagery (“20190415T093540”, April 15, 2019, at 09:35:40), all separated by underscores. The application has been designed to let web client users display the layers in time, taking advantage of the map timestamp. Moreover, the naming convention supports the styling of the layers and the preparation and display of metadata.
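For illustration, the naming convention can be parsed as sketched below (a Python illustration; the project's WebGIS itself is implemented in JavaScript). The regular expression simply reflects the example above and is not the project's actual code.

```python
# Minimal sketch: parsing the sensor_product_CRS_timestamp naming convention
# described above, so that layers can be grouped into time series.
import re
from datetime import datetime

PATTERN = re.compile(r"^(?P<sensor>[A-Z0-9]+)_(?P<product>[A-Z]+)_"
                     r"(?P<crs>[A-Z]+)_(?P<timestamp>\d{8}T\d{6})$")

def parse_layer_name(name):
    """Split a layer name into sensor, product, CRS code and acquisition time."""
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"Layer name does not follow the convention: {name}")
    info = m.groupdict()
    info["timestamp"] = datetime.strptime(info["timestamp"], "%Y%m%dT%H%M%S")
    return info

print(parse_layer_name("S3A_CHL_IT_20190415T093540"))
# {'sensor': 'S3A', 'product': 'CHL', 'crs': 'IT',
#  'timestamp': datetime.datetime(2019, 4, 15, 9, 35, 40)}
```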
The WebGIS builds upon a Node.js runtime environment that allows creating server-side applications using JavaScript. The WebGIS design benefits from the OpenLayers and jQuery JavaScript libraries and the Vue.js framework. Accordingly, the web application integrates capabilities and tools built from components that can be attached to or detached from the application as needed. The WebGIS components, hereafter panels, include a Layer Panel, a Metadata Panel, a Time Manager Panel and a BaseMap Panel. The different panels are populated by parsing the information obtained from the WMS GetCapabilities operation of GeoServer. The Layer Panel integrates the list of layers available in GeoNode. Each item in the list of layers allows users to control the visibility of the layers (i.e., display and opacity), download the datasets and explore the metadata (for a selected layer). The Metadata Panel includes an abstract according to the layer typology, the start/end dates of the first/last map, and the symbology describing the corresponding layer. In addition, the Metadata Panel makes use of the GetLegendGraphic operation to retrieve the layer legend. The Time Manager Panel contains controllers that enable the querying and visualization of raster time series. Lastly, the BaseMap Panel provides various options for changing the base map of the WebGIS.
The web-based application implemented in this work provides a mechanism for sharing and monitoring water quality parameter maps. The infrastructure implements two different applications focusing on two different audiences. First, the collaborative data-sharing platform (GeoNode), which targets the map producers, allows uploading and managing the lake water quality maps (following the naming convention for the products). Second, the WebGIS aims at becoming an open application for the exploration of the products uploaded to the GeoNode platform. The WebGIS provides an interactive application to display the lake water quality products as time series in a user-friendly environment. The components inside the WebGIS allow users to control the visibility of the layers, query maps in time, explore the layers' metadata and customize the base map background. Data accessibility for water quality parameters enables the monitoring and assessment of water bodies' health. Moreover, the monitoring of water resources is essential for guaranteeing the livelihood of the nearby communities that depend on their consumption and quality.
Brovelli, M. A., Cannata, M., & Rogora, M. (2019). SIMILE, a geospatial enabler of the monitoring of Sustainable Development Goal 6 (ensure availability and sustainability of water for all). ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-4/W20, 3–10. https://doi.org/10.5194/isprs-archives-XLII-4-W20-3-2019
Lieberherr, G.; Wunderle, S. Lake Surface Water Temperature Derived from 35 Years of AVHRR Sensor Data for European Lakes. Remote Sens. 2018, 10, 990. https://doi.org/10.3390/rs10070990
Thaw lakes and drained thaw lake basins are a prominent feature of the Arctic and cover large areas of the high-latitude landscape. Thaw lakes as well as drained thaw lake basins have major impacts on a region's hydrology, landscape morphology, and flora and fauna. Drained lake basins have been studied across regions in the Arctic, and differences in their abundance and distribution exist between regions of the circumpolar Arctic. Thawing permafrost and lake drainage can also affect human activities. Our research area is the Yamal Peninsula in Western Siberia, Russia. The Yamal Peninsula is about 700 km long and about 150 km wide and extends from 66° to 72° North. In Yamal, the petroleum industry and its related infrastructure networks can be affected by changes in lake and stream hydrology. Nenets reindeer herding is the traditional land use form in Yamal. Reindeer herding is based on natural pastures and resources, and lakes and streams serve as an important fishing resource for the herders' own use and for sale. Thawing and drained lakes are part of the climate change-driven landscape changes in the area.
Landsat has been used in multiple studies for the analysis of lake area, extent and drainage or shrinkage events in the circumpolar Arctic. To analyse lake drainage, lake shrinkage and changes in lake extent, consistent satellite data with adequate temporal and spatial resolution are needed. Frequent cloud cover in Arctic regions during the summer months limits the number of suitable acquisitions from multispectral sensors and hinders the implementation of large-scale time series analysis efforts. Landsat data enable a time span from 1972, although Landsat MSS images were rather coarse and good-quality images are sparse; good-quality data have only been available since the mid-1980s, when the Thematic Mapper was launched. Old archival aerial photographs allow looking further back in time, in some cases even to the 1940s, but their limited spatial coverage and availability do not enable large-scale investigations. Cold War era spy satellite missions like Corona and KH are the only options to expand the time span to the late 1950s and early 1960s.
Our remote sensing datasets cover the period 1961-2019. Corona data represent the oldest data source, and a mosaic was compiled from 38 original Corona images; its resolution is about 7 metres. Landsat mosaics are derived from 1980s and 2010s data. In addition, we use several very high resolution satellite datasets (QuickBird-2, WorldView-2/3) and drone data to demonstrate lake changes in detail. Field data for the verification of drained lakes have been collected from several parts of Yamal and include observations of changes and vegetation sampling. We have also interviewed several reindeer herders to understand the implications of lake changes for reindeer husbandry.
Changes were observed for the periods 1961-1988 and 1988-2018. The results show that the disappearance of lakes occurs throughout the whole period, but the process accelerated in the latter period. In terms of reindeer husbandry, the issue is multidimensional, as lakes that were quite important for fishing have disappeared in some places. A drained lake, on the other hand, will soon turn into good-quality pasture land where nutritious grasses and forbs grow; however, if the drained lake is located in a winter grazing area, it represents only a lost fishing resource.
Figure: Left, a lake partially drained about 10 years ago; the old lake bottom is covered with a dense carpet of grass, sedge and forbs. Right, a lake partially drained about 2-3 years ago; revegetation is much slower, partly due to the sandy soil.
Retrieving sea-surface salinity near the sea-ice edge using spaceborne L-band radiometers (SMOS, Aquarius, SMAP) is a challenging task. There are several reasons for this. First, in cold water, the sensitivity of the L-band emitted surface brightness temperature to salinity is small, which results in large retrieval errors. Additionally, it is difficult both to detect the sea-ice edge and to accurately measure small sea-ice concentrations near the sea-ice edge. We have evaluated several publicly available sea-ice concentration products (OSI-SAF, NSIDC CDC, NCEP) and found that none of them meets the accuracy required to use them as ancillary input for satellite salinity retrievals. This constitutes a major obstacle for satellite salinity measurements near the sea-ice edge. As a consequence, in the current NASA/RSS V4.0 SMAP salinity release, salinity cannot be retrieved over large areas of the polar oceans.
We have developed a mitigation strategy that directly uses AMSR2 TB measurements from the 6 – 36 GHz channels to assess the sea-ice contamination within the SMAP antenna field of view, instead of external ancillary sea-ice concentration products. The 6 and 10 GHz AMSR2 TB show very good correlation with the SMAP TB when averaged over several days. Moreover, the spatial resolutions of the AMSR2 6 GHz and SMAP TB measurements are very comparable.
Based on this, we have developed a machine-learning algorithm that uses the AMSR2 and SMAP TB as input (1) to detect low sea-ice concentrations within the SMAP footprint and (2) to remove the sea-ice fraction from the SMAP measurements.
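A minimal sketch of a classifier of this general kind is shown below, trained on synthetic brightness temperatures; the feature set, model choice and labels are assumptions for illustration only, not the NASA/RSS algorithm.

```python
# Minimal sketch (synthetic data): a supervised classifier using multi-frequency
# brightness temperatures (TB) as features to flag sea-ice contamination.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 2000
# Hypothetical features: AMSR2 6, 10, 18, 36 GHz TB and SMAP L-band TB (K)
tb = rng.uniform(80, 260, size=(n, 5))
# Synthetic label: "ice-contaminated" when the higher-frequency TBs are warm
labels = (tb[:, 2:4].mean(axis=1) > 200).astype(int)

model = GradientBoostingClassifier().fit(tb[:1500], labels[:1500])
print("held-out accuracy:", model.score(tb[1500:], labels[1500:]))
# In practice a second step would estimate the ice fraction and remove its
# contribution from the SMAP TB before the salinity retrieval.
```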
This new algorithm allows for more accurate sea-ice detection and mitigation in the SMAP salinity retrievals than the ancillary ice products do. In particular, the detection of icebergs in the polar ocean and salinity retrievals in the vicinity of sea-ice can be significantly improved. We plan to apply this new method in the upcoming NASA/RSS V5.0 SMAP salinity release.
Understanding surface processes during the sea ice melt season in the Arctic Ocean is crucial in the context of ongoing Arctic change. The Chukchi and Beaufort Seas are the Arctic regions where salty and warm Pacific Water (PW) flows in through the Bering Strait and interacts with sea ice, contributing to its melt during summer. For the first time, thanks to in-situ measurements from two saildrones deployed in summer 2019, and SMOS (Soil Moisture and Ocean Salinity) and SMAP (Soil Moisture Active Passive) satellite Sea Surface Salinity (SSS), we observe large low-SSS anomalies induced by sea ice melt, referred to as meltwater lenses (MWL).
The largest MWL observed by the saildrones during this period covers a large part of the Chukchi shelf. It is associated with an SSS anomaly reaching 5 pss and persists for a long time (up to one month). In this MWL, the low-SSS pattern influences the air-sea momentum transfer in the upper ocean, resulting in a reduced shear of currents between 10 and 20 metres depth.
L-band radiometric SSS allows an identification of the different water masses found in the region during summer 2019 and of their evolution as the sea ice edge retreats over the Chukchi and Beaufort Seas. Two MWLs detected in these two regions exhibit different mechanisms of formation: in the Beaufort Sea, the MWL tends to follow the sea ice edge as it retreats meridionally, whilst in the Chukchi Sea, a large persisting MWL generated by the advection of a thin sea ice filament is observed.
Taking advantage of the demonstrated ability of SSS satellite observations to monitor MWLs and of the 12-year-long SMOS time series, we further examine the interannual variability of SSS during sea ice retreat over the Chukchi and Beaufort Seas for the last 12 years.
The aim of this work is to evaluate the influence of phytoplankton on the carbon cycle, oxygen concentration, and the food web of ocean dwellers in the Greenland Sea.
There are several tasks to achieve our goal: 1) to study the interaction between chlorophyll and the physical properties of the sea water; 2) to determine the seasonal cycle of the spatial pattern and vertical profile of phytoplankton; 3) to estimate primary production.
To begin with, Arctic waters are quite unstable in terms of sea ice thickness, open water area and research accessibility and, moreover, are likely to face ice-free summers in the near future. These changes drive changes in light absorbance, nutrient distribution and phytoplankton seasonality. Yet, it is still unknown whether this decreases or increases phytoplankton primary production in the Greenland Sea. There is a lack of field data; hence, satellite data provide an alternative.
Phytoplankton are responsible for releasing half of the world's oxygen and for over 90% of marine primary production. Our work combines satellite and field data to investigate the seasonal cycle, variability and productivity of phytoplankton in the Greenland Sea (Fram Strait), and applies modelling techniques to estimate primary production in the area.
Satellite HERMES GlobColour data were processed in MATLAB/Python. Field data were used to recover Gaussian coefficients, which were then applied to the satellite data; this makes it possible to reconstruct depth profiles and establish the euphotic depth for every data cell.
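For illustration, a shifted-Gaussian vertical chlorophyll profile of the kind commonly used for such reconstructions is sketched below; the parameterisation and coefficient values are hypothetical, not those recovered in this study.

```python
# Minimal sketch (hypothetical coefficients): a background-plus-Gaussian vertical
# chlorophyll profile whose coefficients could be fitted to field data and then
# applied to satellite surface chlorophyll.
import numpy as np

def chl_profile(z, background, peak_height, peak_depth, peak_width):
    """Chlorophyll-a (mg m-3) at depth z (m): background + Gaussian peak."""
    return background + peak_height * np.exp(-((z - peak_depth) / peak_width) ** 2)

z = np.arange(0, 101, 1.0)                       # depth grid, m
profile = chl_profile(z, background=0.3, peak_height=1.5,
                      peak_depth=25.0, peak_width=10.0)

# Example use: depth of the subsurface chlorophyll maximum and 0-100 m column stock
print("DCM depth:", z[np.argmax(profile)], "m")
print("0-100 m integrated Chl:", np.trapz(profile, z), "mg m-2")
```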
From the field data of the Fram Strait 2021 RV Kronprinz Haakon summer cruise, chlorophyll-a, light absorbance by particulates and primary production were measured. Using remote sensing data of chlorophyll concentration, sea temperature and photosynthetically available radiation, we obtained modelled estimates of primary production. Field- and satellite-based primary production estimates were then compared.
We expect our primary production estimates to be used for the validation of other biogeochemical models.
The inter-annual changes of the Arctic Ocean (e.g. dense water formation, meridional heat redistribution) are well-known proxies of global climate change. The ocean circulation in the high-latitude seas and the Arctic Ocean has changed significantly during recent decades, with a significant impact on the socio-economic activities of the local population. Monitoring the Arctic environment is however non-trivial: the Arctic observing network notably lacks the capability to provide a full picture of the ocean variability, owing to technological and economic limitations on sampling the seawater beneath the sea ice or in the marginal ice zones. This leads to an obvious need to optimize the exploitation of data from space-borne sensors. For more than two decades, altimetric radars measuring the sea level at millimetric precision have revolutionized our knowledge of global mean sea level rise and oceanic circulation. Technological solutions are continuously needed and pursued to enhance the spatial resolution of the altimetric signal and enable the resolution of mesoscale dynamics, either in the design of the altimeter itself (e.g. wide-swath altimeters and SAR altimeters) or in the combined use of altimeter data from multiple bands. Newly reprocessed along-track measurements of the Sentinel-3A, CryoSat-2 and SARAL/AltiKa altimetry missions (AVISO/TAPAS), optimized for the Arctic Ocean (retracking), have recently been produced in the framework of the CNES AltiDoppler project. The tracks of the different satellite missions are then merged to provide altimetry maps with enhanced spatial coverage and resolution. This study is devoted to the exploitation of such satellite altimetry data in high-latitude regions. We investigate the benefits of the reprocessed altimetry dataset with augmented signal resolution in the context of ocean mesoscale dynamics. In particular, we perform a fit-for-purpose assessment of this dataset, investigating the contribution of eddy-induced anomalies to ocean dynamics and thermodynamics. This is done by co-locating eddies with Argo float profiles in the areas representing the gateways for the Atlantic waters entering the Arctic, and comparing them to fields derived from conventional altimetry maps in order to assess the added value of the enhanced altimetry reprocessing in the northern high-latitude seas.
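A minimal sketch of the eddy/Argo co-location step is given below; the distance and time thresholds, and the toy input arrays, are illustrative assumptions rather than the criteria used in the study.

```python
# Hedged sketch: co-locate Argo profiles with altimetry-derived eddy centres.
# Thresholds and input arrays are illustrative assumptions.
import numpy as np

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in km between points given in degrees."""
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

# Example data: eddy centres and Argo profiles as (time [days], lon, lat)
eddies = np.array([[100.0, 5.0, 72.0], [101.0, 8.0, 74.5]])
argo = np.array([[100.2, 5.3, 72.1], [140.0, -20.0, 60.0]])

max_dist_km, max_dt_days = 50.0, 2.0
for t_e, lon_e, lat_e in eddies:
    dt = np.abs(argo[:, 0] - t_e)
    dist = haversine_km(lon_e, lat_e, argo[:, 1], argo[:, 2])
    matches = np.where((dt <= max_dt_days) & (dist <= max_dist_km))[0]
    print(f"eddy at ({lon_e:.1f}E, {lat_e:.1f}N): {len(matches)} co-located Argo profile(s)")
```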
Global warming has a pronounced effect on the frequency and intensity of storm surges in the Arctic Ocean. On the one hand, changes in atmospheric conditions cause more storms to form in the Arctic, or elsewhere, that may enter the Arctic (e.g. Day & Hodges, 2018; Sepp & Jaagus, 2011). On the other hand, the Arctic Ocean is becoming increasingly exposed to atmospheric forcing due to Arctic sea ice decline (ACIA, 2005; Vermaire et al., 2013). Modelling studies show that the reduced sea ice extent provides greater fetch and wave action and as such allows higher storm surges to reach the shore (Overeem et al., 2011; Lintern et al., 2013). This may cause increased erosion (e.g. Barnhart et al., 2014) and pose increased risks to fragile Arctic ecosystems in low-lying areas (e.g. Kokelj et al., 2012). In addition, Arctic surges influence global water levels, so their impact may also be noticeable at lower latitudes.
However, little is known about the large-scale variability in Arctic surge water levels, as data availability is compromised by environmental conditions. Long water level records from tide gauges are limited to a few locations at the coast, and the high latitudes are poorly covered by satellite altimeters. Moreover, measurement of the Arctic water level by satellite altimeters is hampered by the presence of sea ice. Here, the use of Synthetic Aperture Radar (SAR) altimeter data provides a solution. These altimeters have a higher along-track resolution than conventional altimeters, which makes it possible to measure water levels from fractures in the sea ice (leads) (Zygmuntowska et al., 2013). However, the location of leads changes over time, and both the temporal and spatial resolution of the resulting water level data are highly variable. In addition, a proper removal of the tidal signal is required in order to study surge water levels. This may be particularly problematic in the Arctic, as the accuracy of global tide models is reduced in polar regions (e.g. Cancet et al., 2018; Lyard et al., 2021; Stammer et al., 2014). This can partly be attributed to the aforementioned constraints on data availability, as well as to the seasonal modulation of Arctic tides, which is not considered in most global tide models.
In the presented study we aim to overcome the identified issues and explore the opportunities provided by SAR altimetry for studying storm surge water levels in the Arctic. For this, data are used from two high-inclination missions that are equipped with a SAR altimeter: CryoSat-2 and Sentinel-3. A classification scheme is implemented to distinguish between measurements from sea ice and from leads/ocean, and data stacking is applied to deal with the restricted temporal and spatial resolution. The tidal signal is removed as much as possible by applying tidal corrections from a global tide model, as well as additional corrections derived from a residual tidal analysis that includes the seasonal modulation of the major tidal constituents (see the sketch below). To evaluate the approach, where possible, results are compared to water levels derived from nearby tide gauges. Implications of reduced accuracy in the tidal corrections are identified by analysing the results in the light of the level of tidal activity and seasonal modulation. Finally, temporal variations in surge water levels are linked to the seasonal sea ice cycle and interannual variations in sea ice extent.
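As an illustration of the residual tidal analysis, the sketch below performs a least-squares harmonic fit in which the M2 constituent is allowed an annual (seasonal) modulation; the constituent choice and modulation form are standard assumptions, not necessarily those of the presented study.

```python
# Hedged sketch: least-squares harmonic analysis of residual water levels,
# allowing an annual (seasonal) modulation of the M2 constituent.
import numpy as np

t = np.arange(0.0, 365.0, 1.0 / 24.0)            # time in days, hourly sampling
residual = np.random.randn(t.size) * 0.05        # placeholder residual water levels [m]

omega_m2 = 2.0 * np.pi / (12.4206012 / 24.0)     # M2 angular frequency [rad/day]
omega_sa = 2.0 * np.pi / 365.25                  # annual modulation [rad/day]

# Design matrix: plain M2 plus M2 modulated by annual sine/cosine terms
A = np.column_stack([
    np.cos(omega_m2 * t), np.sin(omega_m2 * t),
    np.cos(omega_m2 * t) * np.cos(omega_sa * t), np.sin(omega_m2 * t) * np.cos(omega_sa * t),
    np.cos(omega_m2 * t) * np.sin(omega_sa * t), np.sin(omega_m2 * t) * np.sin(omega_sa * t),
    np.ones_like(t),
])
coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
tidal_correction = A @ coeffs                    # additional correction removed from the record
```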
References
ACIA (2005). Impacts of a warming Arctic: Arctic Climate Impact Assessment, scientific report.
Barnhart, K. R., Overeem, I., & Anderson, R. S. (2014). The effect of changing sea ice on the physical vulnerability of Arctic coasts. The Cryosphere, 8(5), 1777-1799.
Cancet, M., Andersen, O. B., Lyard, F., Cotton, D., & Benveniste, J. (2018). Arctide2017, a high-resolution regional tidal model in the Arctic Ocean. Advances in space research, 62(6), 1324-1343.
Day, J. J., & Hodges, K. I. (2018). Growing land‐sea temperature contrast and the intensification of Arctic cyclones. Geophysical Research Letters, 45(8), 3673-3681.
Kokelj, S. V., T. C. Lantz, S. Solomon, M. F. J. Pisaric, D. Keith, P. Morse, J. R. Thienpont, J. P. Smol, and D. Esagok (2012), Utilizing multiple sources of knowledge to investigate northern environmental change: Regional ecological impacts of a storm surge in the outer Mackenzie Delta, N.W.T., Arctic, 65, 257–272.
Lintern, D. G., Macdonald, R. W., Solomon, S. M., & Jakes, H. (2013). Beaufort Sea storm and resuspension modeling. Journal of Marine Systems, 127, 14-25.
Lyard, F. H., Allain, D. J., Cancet, M., Carrère, L., & Picot, N. (2021). FES2014 global ocean tide atlas: design and performance. Ocean Science, 17(3), 615-649.
Overeem, I., R. S. Anderson, C. W. Wobus, G. D. Clow, F. E. Urban, and N. Matell (2011), Sea ice loss enhances wave action at the Arctic coast, Geophys. Res. Lett., 38, doi:10.1029/2011GL048681.
Sepp, M., and J. Jaagus (2011), Changes in the activity and tracks of Arctic cyclones, Clim. Change, 105, 577–595.
Stammer, D., Ray, R. D., Andersen, O. B., Arbic, B. K., Bosch, W., Carrère, L., ... & Yi, Y. (2014). Accuracy assessment of global barotropic ocean tide models. Reviews of Geophysics, 52(3), 243-282.
Vermaire, J. C., M. F. J. Pisaric, J. R. Thienpont, C. J. Courtney Mustaphi, S. V. Kokelj, and J. P. Smol (2013), Arctic climate warming and sea ice declines lead to increased storm surge activity, Geophys. Res. Lett., 40, 1386–1390, doi:10.1002/grl.50191.
Zygmuntowska, M., Khvorostovsky, K., Helm, V., & Sandven, S. (2013). Waveform classification of airborne synthetic aperture radar altimeter over Arctic sea ice. The Cryosphere, 7(4), 1315-1324.
It is expected that coupled air-sea data assimilation algorithms may enhance the exploitation of satellite observations whose measured brightness temperatures depend upon both the atmospheric and oceanic states, thus improving the resulting numerical forecasts. To demonstrate in practice the advantages of the fully coupled assimilation scheme, the assimilation of brightness temperatures from a forthcoming microwave sensor (the Copernicus Imaging Microwave Radiometer, CIMR) is evaluated within idealized assimilation and forecast experiments. The forecast model used here is the single-column version of a state-of-the-art Earth system model (EC-Earth), while a variational scheme, complemented with ensemble-derived background-error covariances, is adopted for the data assimilation problem.
The Copernicus Imaging Microwave Radiometer (CIMR), scheduled for the 2027+ timeframe, is a high priority mission of the Copernicus Expansion Missions Programme. Polarised (H and V) channels centered at 1.414, 6.925, 10.65, 18.7 and 36.5 GHz are included in the mission design under study. CIMR is thus designed to provide global, all-weather, mesoscale-to-submesoscale resolving observations of sea-surface temperature, sea-surface salinity and sea-ice concentration. The coupled observation operator is derived as polynomial regression from the application of the Radiative Transfer for TOVS (RTTOV) model, and we perform Observing System Simulation Experiments (OSSE) to assess the benefits of different assimilation methods and observations in the forecasts.
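A minimal sketch of such a polynomial-regression observation operator is given below; the predictors, polynomial degree and synthetic training data are illustrative assumptions, with the synthetic samples standing in for RTTOV simulations.

```python
# Hedged sketch: a polynomial-regression observation operator that maps the
# coupled state (e.g. SST, SSS, 10-m wind speed) to a CIMR-like brightness
# temperature, trained on radiative-transfer simulations. The training data
# below are synthetic placeholders, not RTTOV output.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(271.0, 303.0, 2000),   # SST [K]
    rng.uniform(30.0, 38.0, 2000),     # SSS [psu]
    rng.uniform(0.0, 20.0, 2000),      # 10-m wind speed [m/s]
])
tb = 0.5 * X[:, 0] - 0.4 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0.0, 0.1, 2000)  # synthetic TB [K]

h_operator = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
h_operator.fit(X, tb)

# In the assimilation, H(x) is evaluated per observation (its Jacobian numerically):
x0 = np.array([[280.0, 34.0, 8.0]])
print("H(x0) =", h_operator.predict(x0)[0])
```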
Results show that the strongly coupled assimilation formulation outperforms the weakly coupled one, both in experiments assimilating atmospheric data and verified against oceanic observations, and in experiments assimilating oceanic observations and verified against atmospheric observations. The sensitivity of the analysis system to the choice of the coupled background-error covariances is found to be significant and is discussed in detail. Finally, the assimilation of microwave brightness temperature observations is compared to the assimilation of the corresponding geophysical retrievals (sea surface temperature, sea surface salinity and marine winds) in the coupled analysis system. We find that assimilating microwave brightness temperatures significantly increases the short-range forecast accuracy of the oceanic variables and near-surface wind vectors, while it is neutral for the atmospheric mass variables. This suggests that adopting radiance observation operators in oceanic and coupled applications will be beneficial for operational forecasts.
The ocean tides are one of the major contributors to energy dissipation in the Arctic Ocean. In particular, barotropic tides are very sensitive to friction processes, and thus to the presence of sea ice in the polar regions. However, the interaction between the tides and the ice cover (both sea ice and grounded ice) is poorly known and still not well modelled, although the friction between the ice and the water due to tidal motions is an important source of energy dissipation and has a direct impact on ice melting. The variations of tidal elevation due to friction under the seasonal sea-ice cover can reach several centimeters in semi-enclosed basins and on the Siberian continental shelf. These interactions are often simply ignored in tidal models, or considered through relatively simple combinations with the bottom friction.
In the framework of the Arktalas project funded by the European Space Agency, we have investigated this aspect through a sensitivity analysis of a regional pan-Arctic hydrodynamic ocean tide model to the friction under the sea ice cover, in order to generate more realistic simulations. Different periods of time, at the decadal scale, were considered to analyse the impact of the long-term reduction of the sea ice cover on the ocean tides in the region and at the global scale. Tide gauge and satellite altimetry observations were specifically processed to retrieve the tidal harmonic constituents over different periods and different sea ice conditions, in order to assess the model simulations.
Improving our knowledge of the interaction between the tides and the sea ice cover, and thus the performance of tidal models in the polar regions, is of particular interest for two reasons: it improves satellite altimetry retrievals at high latitudes, where tidal signals remain a major contributor to the altimetry error budget in the Arctic Ocean, and it enables more realistic simulations with ocean circulation models, thus contributing to scientific investigations of the changes in the Arctic Ocean.
The Arctic Ocean is the ocean most vulnerable to climate change. Rising air and ocean temperatures and the loss of land and sea ice alter the physical dynamics of the Arctic Ocean, which in turn impacts sea level. Sea level is hence a bulk measure of ongoing climate-related processes.
A unique feature of the Arctic Ocean is that freshwater change is the most significant contribution to sea level change. Freshwater coming from land, sea ice and rivers expands the water column and changes the dynamics of the ocean currents flowing in and out of the Arctic Ocean. For sea level analyses of the Arctic Ocean, the steric sea level change (the change of ocean water density from temperature and salinity changes) is often either inverted from satellite observations (sea surface height (SSH) from altimetry minus ocean bottom pressure (OBP) from GRACE) or based on oceanographic models that are constrained with a mix of in-situ observations, altimetry and GRACE.
Recently, studies (1, 2) have shown that a satellite-independent steric sea level estimate reconstructs the sea level features observed from altimetry better than oceanographic models do. The steric estimate (DTU Steric 2020) is composed of more than 300,000 Arctic in-situ profiles, interpolated onto a monthly 50x50 km grid from 1990 to 2015. A further advantage is its independence of altimetry (and GRACE), which makes it ideal for sea level budget analysis. Some regions with sparse in-situ observations (in particular the East Siberian Seas) showed less correlation with altimetry, but these are also regions with poor tide-gauge/altimetry agreement (3, 4), making it difficult to validate either of the datasets.
Here we present an update of the steric sea level product presented in (1). It now includes temperature and salinity profiles up to the end of 2020, representing a 31-year period from 1990 to 2020. Additionally, the profile data are assimilated with satellite surface salinity data from SMOS and satellite sea surface temperature data from GHRSST (Group for High Resolution Sea Surface Temperature). Furthermore, the Arctic Ocean is divided into nine regions, giving a better overview of significant features and statistics of the Arctic steric sea level change. The extended time series allows the investigation of long-term climate trends of the Arctic Ocean, which can be validated against an equally long record of altimetric sea level observations (1991-2010 up to 82°N, 2011-2020 up to 88°N). The dataset is useful for a wide range of users looking at changes in heat content and freshwater, validating sea level observations (from tide gauges and altimetry), and validating ocean bottom pressure from GRACE/GRACE-FO (i.e. constraining leakage corrections).
1) Ludwigsen, C. A., & Andersen, O. B. (2021). Contributions to Arctic sea level from 2003 to 2015. Advances in space research, 68(2), 703-710. https://doi.org/10.1016/j.asr.2019.12.027
2) Ludwigsen, C. B., Andersen, O. B., & Kildegaard Rose, S (2021). Components of 21 years (1995-2015) of Absolute Sea Level Trends in the Arctic. Ocean Science (pre-print)
3) Armitage, T. W. K., Bacon, S., Ridout, A. L., Thomas, S. F., Aksenov, Y., & Wingham, D. J. (2016). Arctic sea surface height variability and change from satellite radar altimetry and GRACE, 2003-2014.
4) Kildegaard Rose, S., Andersen, O. B., Passaro, M., Ludwigsen, C. A., & Schwatke, C. (2019). Arctic Ocean Sea Level Record from the Complete Radar Altimetry Era: 1991-2018. Remote Sensing, 11(14), 1672. https://doi.org/10.3390/rs11141672
Recent observational and modelling studies have documented changes in the hydrography of the upper Arctic Ocean, in particular an increase of its liquid freshwater content (e.g. Haine et al., 2015; Proshutinsky et al., 2019; Solomon et al., 2021). The main factors contributing to this freshening are the melting of the Greenland ice sheet and glaciers, enhanced sea-ice melt, an increase of river discharge, an increase in liquid precipitation, and an increase of the Pacific Ocean water influx to the Arctic Ocean through the Bering Strait. A retreating and thinning sea ice cover, and a concomitant warming and freshening upper ocean, have a widespread impact across the whole Arctic system through a large number of feedback mechanisms and interactions, also with the atmospheric circulation of the northern hemisphere, and have the potential to destabilize the thermohaline circulation in the North Atlantic.
An increase of liquid freshwater content has been found over both the Canadian Basin and the Beaufort Sea, which can have a large impact on the Arctic marine ecosystem. The importance of monitoring changes in the Arctic freshwater system and its exchange with the subarctic oceans has been widely recognized by the scientific community.
Among the key observable variables, ocean salinity is a proxy for freshwater content: it allows monitoring of increased freshwater from rivers or ice melt, and it sets the upper ocean stratification, which has important implications for water mass formation and heat storage. Changes in the salinity distribution may affect the water column stability and impact the freshwater pathways over the Arctic Ocean. Sea Surface Salinity (SSS) is observed from space with L-band (1.4 GHz) radiometers such as SMOS (ESA, since 2010) and SMAP (NASA, since 2015). However, retrieving SSS in cold waters is challenging for several reasons. Thanks to the ESA-funded ARCTIC+SSS ITT project, we now have a new enhanced Arctic SMOS Sea Surface Salinity product (BEC v3.1), which has better quality and resolution than the previous high-latitude salinity products and permits better monitoring of salinity changes, and thus of freshwater.
In this presentation we will show the first results of the surface salinity tendency analysis done with the new SMOS BEC SSS v3.1 product in the Beaufort Sea and other Arctic regions during summer for the period from 2011 to 2021. We will compare the results with a model (TOPAZ) and with other satellite observations (CryoSat and GRACE). Only summer results will be shown, since observations of SSS are feasible only when the ocean is free of ice. This preliminary analysis shows a clear freshening of the sea surface salinity in the Beaufort Gyre region from 2012 to 2019.
Basal melting of floating ice shelves and iceberg calving constitute the two almost equal pathways of freshwater flux (FWF) between the Antarctic ice cap and the Southern Ocean. For the Greenland ice cap the figures are quite similar, although surface melting plays a more significant role.
Basal meltwater and surface meltwater are distributed over the upper few hundred meters of the coastal water column, while icebergs drift and melt farther away from land.
While Northern Hemisphere icebergs are, with rare exceptions, small (less than 10 km²), in the Southern Ocean large icebergs (larger than 100 km²) act as a reservoir transporting ice far away from the Antarctic coast into the ocean interior, while fragmentation acts as a diffusive process, generating plumes of small icebergs that melt far more efficiently than larger ones.
OGCMs that include icebergs show that basal ice-shelf melting and iceberg melting have different effects on the ocean circulation, and that icebergs induce significant changes in the modelled ocean circulation and sea-ice conditions around Antarctica and in the North Atlantic. The transport of ice away from the coast by icebergs and the associated FWF cause these changes. These results highlight the important role that icebergs and their associated FWF play in the climate system. However, there is currently no direct, reliable estimate of the iceberg FWF to either validate or constrain the models.
Since 2008 the ALTIBERG project has maintained a small-iceberg (less than 10 km²) database (North and South) using a detection method based on the analysis of satellite altimeter waveforms (http://cersat.ifremer.fr/data/products/catalogue). The archive of iceberg positions, areas and dates, as well as the monthly mean volume of ice, now covers the period from 1992 to present.
Using classical iceberg motion and thermodynamics equations constrained by AVISO currents, ODYSEA SSTs and WaveWatch III wave heights, the trajectories and melting of all detected ALTIBERG icebergs are computed. The results are used to compute the daily FWF.
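The sketch below illustrates, under strong simplifications, one forward-Euler step of such a drift-and-melt computation (Coriolis plus air/water drag, and a bulk forced-convection basal melt law); all coefficients and geometric assumptions are illustrative and not the exact formulation used for the ALTIBERG processing.

```python
# Hedged sketch: one forward-Euler step of a simplified iceberg drift and melt
# model. Coefficients are typical textbook values, not those used for ALTIBERG.
import numpy as np

f = 1.4e-4                                   # Coriolis parameter [1/s], sign simplified
rho_a, rho_w, rho_i = 1.3, 1027.0, 900.0     # air/water/ice densities [kg/m^3]
c_a, c_w = 1.3e-3, 5.0e-3                    # air/water drag coefficients

def step(v, u_air, u_wat, length, thickness, sst, dt=3600.0):
    """Advance iceberg velocity v [m/s] and thickness [m] by dt seconds."""
    mass = rho_i * length * length * thickness
    area_a = 0.1 * length * length           # crude sail area assumption
    area_w = length * length                 # crude keel area assumption
    drag_a = rho_a * c_a * area_a * np.linalg.norm(u_air - v) * (u_air - v) / mass
    drag_w = rho_w * c_w * area_w * np.linalg.norm(u_wat - v) * (u_wat - v) / mass
    coriolis = f * np.array([v[1], -v[0]])
    v_new = v + dt * (drag_a + drag_w + coriolis)
    # Bulk forced-convection basal melt (illustrative form, in m/day converted to m/s)
    melt_rate = 0.58 * np.linalg.norm(u_wat - v) ** 0.8 * max(sst, 0.0) / length ** 0.2 / 86400.0
    return v_new, max(thickness - melt_rate * dt, 0.0)

v, h = np.array([0.0, 0.0]), 200.0
v, h = step(v, u_air=np.array([5.0, 0.0]), u_wat=np.array([0.1, 0.0]),
            length=1000.0, thickness=h, sst=2.0)
```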
The temporal and spatial distribution of the FWF from 1993 to 2019 is presented, as well as the estimation method. The North Atlantic FWF, which has also been estimated, will also be analysed.
Figure 1 presents the mean daily FWF, the mean daily volume of ice, and the mean surface area and thickness of the icebergs for the 1993-2019 period on a 50x50 km grid.
The temperature rise and its immediate effects in the Arctic call for increased monitoring of sea surface temperature (SST), which demands the highest possible synergy between the different sensors orbiting Earth, on both present and future missions. One example is the possible synergy between Sentinel-3's SLSTR and the future Copernicus expansion satellite, the Copernicus Imaging Microwave Radiometer (CIMR), which is currently in its development phase. To achieve consistency between the observations from the different missions, there is a need to establish a relation between skin and subskin SSTs, which are measured by infrared and microwave sensors, respectively. That will lead to the creation of more homogeneous and higher-accuracy datasets that could be used to monitor climate change in greater detail and be assimilated into climate models.
To address this issue, the Danish Meteorological Institute (DMI) and the Technical University of Denmark (DTU) performed, in June 2021, a week-long intercomparison campaign between Denmark and Iceland, collecting data by deploying a microwave and an infrared radiometer side by side. The work was part of the ESA-funded project SHIPS4SST (ships4sst.org) and the International Sea Surface Temperature Fiducial Reference Measurement Radiometer Network (ISFRN), within which shipborne IR radiometer deployments have been conducted between Denmark and Iceland for several years. In this particular campaign, two ISARs (Infrared Sea Surface Temperature Autonomous Radiometers), measuring in the 9.6-11.5 μm spectral band, were deployed alongside two recently refurbished EMIRADs, namely EMIRAD-C and EMIRAD-X, measuring in C and X band, respectively.
This study aims to demonstrate the methodology applied within ESA CCI SST to retrieve SST from microwave brightness temperatures, and to present a first attempt to establish a relationship between skin and subskin SST, as well as the overall research progress so far.
Whilst the Arctic Ocean is relatively small, containing only 1% of total ocean volume, it receives 10% of global river runoff. This river runoff is a key component of the Arctic hydrological cycle, providing significant freshwater exchange between land and the ocean. Of this runoff, Russian rivers alone contribute around half of the total river discharge, or a quarter of the total freshwater to the Arctic Ocean, predominantly to the Kara and Laptev Seas. In these seas, inflowing riverine freshwater remains at the surface, helping to form the cold, fresh layer that sits above inflowing warm and salty Atlantic Water; this sets up the halocline that governs Eurasian shelf sea, and wider Arctic Ocean, stratification. This fresh surface layer prevents heat exchange between the underlying Atlantic Water and the overlying sea ice, limiting sea ice melt and strengthening the existing sea ice barrier to atmosphere-ocean momentum transfer. However, the processes that govern variability in riverine freshwater runoff and its interactions with sea ice are poorly understood and are key to predicting the future state of the Arctic Ocean. Understanding these processes is particularly important in the Laptev Sea as a source region of the Transpolar Drift and a key region of sea ice production and deep water formation (Reimnitz et al., 1994).
Over most of the globe, L-band satellite acquisitions of sea surface salinity (SSS), such as from Aquarius (2011-2015), SMOS (2010-present) and SMAP (2015-present), provide a new tool to study freshwater storage and transport. However, the low sensitivity of the L-band signal in cold water and the presence of sea ice make retrievals at high latitudes a challenge. Nevertheless, the retreating Arctic sea ice cover and continuous progress in satellite product development make satellite-based SSS measurements of great value in the Arctic. This is particularly evident in the Laptev Sea, where gradients in SSS are strong and in situ measurements are sparse. Previous work has demonstrated good consistency of satellite-based SSS data against in situ measurements, enabling greater confidence in the acquisitions and making satellite SSS data truly viable in the Arctic (Fournier et al., 2019; Supply et al., 2020).
This study combines satellite-based SSS data, in-situ observations and reanalysis products to study the roles of Lena river discharge, ocean circulation, vertical mixing and sea ice cover in the interannual variability of Laptev Sea dynamics. Two SMOS products, SMOS LOCEAN CEC L3 Arctic v1.1 and SMOS BEC Arctic L3 v3.1, and two SMAP SSS products, SMAP JPL L3 v5.04.2 and SMAP RSS L3 v4.03, were compared. Whilst the general patterns of salinity are broadly similar in all products, their patterns differ interannually, with particular discrepancies in magnitude. Interannual variability in the LOCEAN SMOS SSS closely resembles that in both SMAP products, most notably in the magnitude and direction of the Lena river plume propagation. However, the mean state of SMAP RSS SSS is much fresher than the other products. Comparison against the TOPAZ reanalysis highlights similar interannual patterns to both SMAP products and SMOS LOCEAN, but with lower amplitude. The close resemblance of SMOS CEC LOCEAN and the SMAP products gives confidence in using the full SMOS LOCEAN time series (2012-present) to study interannual variability on a 10-year time scale. The full SMOS LOCEAN time series shows that two years (2018 and 2019) stand out as having a much larger, fresher river plume than other years. However, the larger plume in these years does not appear to be caused by increases in Lena river runoff. Numerical model output, in-situ data and satellite products are used to study the cause of this variability.
Bibliography
Fournier, S., Lee, T., Tang, W., Steele, M., Olmedo, E., 2019. Evaluation and Intercomparison of SMOS, Aquarius, and SMAP Sea Surface Salinity Products in the Arctic Ocean. Remote Sens. 11, 3043. https://doi.org/10.3390/rs11243043
Reimnitz, E., Dethleff, D., Nürnberg, D., 1994. Contrasts in Arctic shelf sea-ice regimes and some implications: Beaufort Sea versus Laptev Sea. Mar. Geol., 4th International Conference on Paleoceanography (ICP IV) 119, 215–225. https://doi.org/10.1016/0025-3227(94)90182-1
Supply, A., Boutin, J., Vergely, J.-L., Kolodziejczyk, N., Reverdin, G., Reul, N., Tarasenko, A., 2020. New insights into SMOS sea surface salinity retrievals in the Arctic Ocean. Remote Sens. Environ. 249, 112027. https://doi.org/10.1016/j.rse.2020.112027
Sea level observations from satellite altimetry in the Arctic Ocean are severely limited due to the presence of sea ice. To determine sea surface heights and enable studies of the ocean surface circulation, it is necessary to first detect openings in the sea ice cover (leads and polynyas) where the ocean surface is exposed. This is of particular interest in the coastal areas of the Arctic, where glaciers calve into the Arctic Ocean. The increasing freshwater influx in recent years leads to changes in the sea level and the thermohaline circulation.
The ESA Explorer mission CryoSat-2 was launched in 2010, aiming at the monitoring of the cryosphere. The satellite operates in three different acquisition modes, one of which is the interferometric SAR (InSAR) mode. The radar returns (called waveforms) of this mode are characterized by a higher along-track resolution, which allows a more reliable detection of leads and polynyas in coastal areas. An unsupervised classification approach based on machine learning is implemented for CryoSat-2 InSAR waveforms. The classification approach utilizes differences in the scattering properties of sea ice, open ocean, and calm enclosed ocean. By defining quantitative parameters from the waveform shape, the waveforms are grouped by comparing the similarity of the parameters, without the need for pre-classified data. The classification performance is validated against optical images from spatiotemporally overlapping aircraft overflights. An algorithm is implemented to automatically detect leads from the optical images while minimizing the time difference between the altimetry and optical observations.
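A minimal sketch of this kind of unsupervised grouping is shown below, using a few common waveform shape parameters and k-means clustering; the parameters, the number of clusters and the random placeholder waveforms are assumptions for illustration and may differ from the implemented approach.

```python
# Hedged sketch: unsupervised grouping of altimeter waveforms from simple shape
# parameters. The parameters and clustering method (k-means) are common choices
# and may differ from those actually used for CryoSat-2 InSAR data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def shape_parameters(waveforms):
    """waveforms: (n, n_gates) array of power samples."""
    max_power = waveforms.max(axis=1)
    peakiness = max_power / waveforms.sum(axis=1)                 # pulse peakiness
    width = (waveforms > 0.5 * max_power[:, None]).sum(axis=1)    # gates above half maximum
    return np.column_stack([max_power, peakiness, width])

rng = np.random.default_rng(1)
waveforms = rng.random((500, 128))          # placeholder waveforms

features = StandardScaler().fit_transform(shape_parameters(waveforms))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
# Clusters are then interpreted a posteriori as leads, open ocean or sea ice
# by inspecting their typical peakiness and power.
```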
The implementation of an unsupervised detection of open water in the Arctic Ocean environment is part of the recently launched AROCCIE project (ARctic Ocean surface circulation in a Changing Climate and its possible Impacts on Europe). The aim of the project is to combine satellite altimetry with numerical ocean modelling to determine changes in Arctic Ocean surface circulation from 1995 to present. AROCCIE will use the classification of InSAR data to create a more comprehensive dataset of sea surface heights for further analysis of ocean circulation changes in the vicinity of the Arctic's rugged coastlines.
Accurate sea surface temperature (SST) observations are crucial for climate monitoring and the understanding of air-sea interactions, as well as for weather and sea ice forecasts through assimilation into ocean and atmospheric models. In general, two types of retrieval algorithms have been used to retrieve SST from passive microwave satellite observations: statistical algorithms and physical algorithms based on the inversion of a radiative transfer model (RTM). The physical algorithms are constrained by the accuracy of the RTM and the representativeness of the observation and prior error covariances. They can be used to identify measurement errors but require ad-hoc corrections of the geophysical retrievals to take these into account. Statistical algorithms may account for some of the measurement errors through the coefficient derivation process, but the retrievals are limited to the established relationships between the input variables. Machine learning (ML) algorithms may supplement or improve the existing retrieval algorithms through their higher flexibility and ability to recognize complex patterns in data.
In this study, several types of ML algorithms have been trained and tested on the global ESA SST CCI multi-sensor matchup dataset, with a focus on their performance in the Arctic region. The machine learning algorithms include two multilayer perceptron neural networks (NNs) and different types of ensemble algorithms, e.g. a random forest algorithm and two boosting algorithms: least-squares boosting and Extreme Gradient Boosting (XGB). The algorithms have been evaluated for their capability to retrieve SST from passive microwave (PMW) brightness temperatures observed by the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E). To validate the algorithms, independent SST observations from drifting buoys have been used. The performance of the ML algorithms has been compared and evaluated against the performance of an existing state-of-the-art regression (RE) algorithm, with a focus on the Arctic. In general, the ML algorithms show good global performance, with decreasing performance towards higher latitudes. The XGB algorithm performs best in terms of bias and standard deviation, followed by the NNs and the RE algorithm. The boosting algorithms and the NNs are able to reduce the bias in the Arctic compared to the other ML algorithms. For each of the ML algorithms, the sensitivity (i.e. the change in retrieved SST per unit change in the true SST) has been estimated for each matchup by using simulated brightness temperatures from the Wentz/DMI forward model. In general, the sensitivities are lower in the Arctic compared to the global averages. The highest sensitivities are found using the neural networks, and the lowest using the XGB algorithm, which underlines the importance of including sensitivity estimates when evaluating retrieval performance.
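The sketch below illustrates the general idea with a boosting regressor and a numerically estimated sensitivity; the forward model is a hypothetical stand-in for the Wentz/DMI model and scikit-learn's GradientBoostingRegressor stands in for XGB, so none of the numbers reflect the actual study.

```python
# Hedged sketch: train a boosting regressor to retrieve SST from PMW brightness
# temperatures and estimate the retrieval sensitivity numerically. forward_model
# is a hypothetical stand-in, not the Wentz/DMI model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def forward_model(sst, wind):
    """Hypothetical mapping from (SST, wind) to two brightness temperatures [K]."""
    return np.column_stack([150.0 + 0.4 * sst + 0.5 * wind,
                            120.0 + 0.6 * sst - 0.2 * wind])

rng = np.random.default_rng(0)
sst_true = rng.uniform(271.0, 303.0, 5000)
wind = rng.uniform(0.0, 20.0, 5000)
tb = forward_model(sst_true, wind) + rng.normal(0.0, 0.3, (5000, 2))

model = GradientBoostingRegressor().fit(tb, sst_true)

# Sensitivity d(retrieved SST)/d(true SST) for one matchup, via a small perturbation
delta = 0.5
tb_plus = forward_model(sst_true[:1] + delta, wind[:1])
tb_minus = forward_model(sst_true[:1] - delta, wind[:1])
sensitivity = (model.predict(tb_plus) - model.predict(tb_minus)) / (2.0 * delta)
print(f"sensitivity ~ {sensitivity[0]:.2f} K/K")
```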
The good performance of the ML algorithms compared to the state-of-the-art RE algorithm in this initial study demonstrates that there is large potential in the use of ML techniques to retrieve SST from PMW observations. The ML methodology, in which the algorithms select the important features based on the information in the training data, works well in complex problems where not all physical and/or instrumental effects are well determined. A suitable ML application could, for example, be the commissioning phase of new satellites (e.g. the Copernicus Imaging Microwave Radiometer (CIMR) developed by ESA).
The Arctic has warmed at more than twice the global rate, which makes it crucial to monitor surface temperatures in this region. Global surface temperature products are fundamental for assessing temperature changes, but over Arctic sea ice these products are traditionally built only on near-surface air temperature measurements from weather stations and sparse drifting buoy temperature measurements. However, only limited in situ observations are available in the Arctic due to the extreme weather conditions and limited access. Therefore, satellite observations have a large potential to improve surface temperature estimates in the Arctic Ocean thanks to their good temporal and spatial coverage.
We present the first satellite-derived combined and gap-free (L4) climate data set of sea surface temperatures (SST) and ice surface temperatures (IST) covering the Arctic Ocean (>58°N) for the period 1982-2021. The L4 SST/IST climate data set has been generated as part of the Copernicus Marine Environment Monitoring Service (CMEMS) and the National Centre for Climate Research (NCKF). The data set has been generated by combining multi-satellite observations using statistical optimal interpolation (OI) to obtain daily gap-free fields with a spatial resolution of 0.05°. Due to the different characteristics of the open ocean, sea ice and the marginal ice zone (MIZ), the OI statistical parameters have been derived separately for each region. An accurate sea ice concentration (SIC) field is therefore very important for identifying the regions. Here, a combination of several SIC products and additional filtering has been used to produce an improved SIC product.
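A minimal sketch of a single OI analysis step with a Gaussian background-error covariance is given below; the 1-D grid, length scale and error variances are illustrative assumptions, not the region-dependent parameters of the L4 product.

```python
# Hedged sketch: one optimal-interpolation (OI) analysis step on a 1-D grid with
# a Gaussian background-error covariance. All parameters are illustrative.
import numpy as np

grid = np.linspace(0.0, 1000.0, 201)          # analysis grid [km]
background = np.full(grid.size, -1.0)         # background SST/IST anomaly [K]

obs_loc = np.array([200.0, 450.0, 800.0])     # observation locations [km]
obs = np.array([-0.2, 0.5, -1.5])             # observed anomalies [K]
obs_err_var = 0.25                            # observation-error variance [K^2]

L, sig_b2 = 150.0, 1.0                        # correlation length [km], background variance [K^2]
def cov(x1, x2):
    return sig_b2 * np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / L) ** 2)

B_ho = cov(grid, obs_loc)                     # covariance grid <-> obs
B_oo = cov(obs_loc, obs_loc) + obs_err_var * np.eye(obs.size)
H_background = np.interp(obs_loc, grid, background)

analysis = background + B_ho @ np.linalg.solve(B_oo, obs - H_background)
```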
Observations from drifting buoys, moored buoys, ships and the IceBridge campaigns have been used to validate the L4 SST/IST over the ocean and sea ice. The combined sea and ice surface temperature of the Arctic Ocean provides a consistent climate indicator, which can be used to monitor day-to-day variations as well as long-term climate trends. It has increased by more than 4°C over the period from 1982 to 2021.
Like other areas of climate science, Arctic sea ice forecasting can be improved by using advanced data assimilation (DA) to combine model simulations and observations. We consider the ensemble Kalman filter (EnKF), one of the most popular DA methods, widely used in climate modelling systems. In particular, we apply a deterministic ensemble Kalman filter (DEnKF) to the Lagrangian sea ice model neXtSIM (neXt-generation Sea Ice Model). neXtSIM implements a novel brittle Bingham-Maxwell sea ice rheology, solved computationally on a time-dependent, evolving mesh. This latter aspect represents a key challenge for the EnKF, as the mesh (the number and position of nodes) generally differs between ensemble members. The DEnKF analysis is therefore performed on a fixed reference mesh via interpolation and then projected back to the individual ensemble meshes (see the sketch below). We propose an ensemble-DA forecasting system for Arctic sea ice by assimilating the OSI-SAF sea ice concentration (SIC) and the CS2SMOS sea ice thickness (SIT). The ensemble is generated by perturbing the atmospheric and oceanic forcing online throughout the forecast. We evaluate the impact of sea-ice assimilation on Arctic winter sea-ice forecast skill against satellite observations and a free run during the 2019-2020 Arctic winter. We obtain significant improvements in SIT but smaller improvements for the other ice states; the improvements are mainly due to assimilating the relevant observations. The results also show that neXtSIM, as a stand-alone sea ice model driven by external forcing that has already assimilated observations, is computationally efficient and retains good forecast skill.
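The sketch below shows the DEnKF analysis step on a fixed grid, with the ensemble mean updated by the full Kalman gain and the anomalies by half the gain (Sakov & Oke, 2008); the dimensions, the linear observation operator and the synthetic data are illustrative assumptions.

```python
# Hedged sketch: deterministic EnKF (DEnKF) analysis on a fixed reference grid.
# Dimensions, observation operator and data are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ens, n_obs = 100, 20, 10

E = rng.normal(0.5, 0.2, (n_state, n_ens))    # ensemble of SIC-like states on the reference grid
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(n_obs) * 10] = 1.0   # observe every 10th node
y = rng.normal(0.5, 0.1, n_obs)               # observations (e.g. OSI-SAF SIC)
R = 0.01 * np.eye(n_obs)                      # observation-error covariance

x_mean = E.mean(axis=1, keepdims=True)
A = E - x_mean                                # ensemble anomalies
HA = H @ A
P_HT = A @ HA.T / (n_ens - 1)
S = HA @ HA.T / (n_ens - 1) + R
K = P_HT @ np.linalg.inv(S)                   # Kalman gain

x_mean_a = x_mean + K @ (y - H @ x_mean.ravel()).reshape(-1, 1)
A_a = A - 0.5 * K @ HA                        # DEnKF anomaly update (half gain)
E_a = x_mean_a + A_a                          # analysed ensemble, then re-projected to each mesh
```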
Traditional global sea surface temperature (SST) analyses have often struggled in high-latitude regions. The challenges are numerous: sea-ice cover, cloud, perpetual darkness, perpetual sunlight, insufficient in situ data for validation and bias correction, and anomalous atmospheric conditions. In this presentation, we outline the prospects for a new high-resolution sea surface temperature analysis specifically for the Arctic region. There are many reasons why such a product is desirable now. Firstly, the Arctic region is anticipated to be the most sensitive to climatic change, and has already experienced a number of substantially anomalous years. Sea ice cover has been decreasing, yet is still highly variable. The development and progression of the polar front has a major influence on mid-latitude Northern Hemisphere weather patterns (storm tracks, cold air outbreaks, etc.). Accurate knowledge of high-latitude sea surface temperature is crucial for the prediction of sea ice growth and decay, along with the estimation of air-sea fluxes, ecological processes and the monitoring of overall conditions. Many research areas within the Arctic section of this symposium will benefit from such a dataset.
An equally important aspect of this presentation is the illustration of limitations in existing SST products in the Arctic region. This is particularly important for end-users who may be utilizing products while being largely unaware of the issues. The biggest challenge is ensuring that the available data are fully exploited, i.e. that potentially valid observations are not excluded by quality control (cloud screening, etc.) procedures that have not been optimized for the Arctic region. We use matches with high-latitude saildrone data to explore the impact of current cloud detection schemes and indicate how improvements can be made. Similarly, ice masking may deprive users of valuable observations in the marginal ice zone. Other issues we explore include the correction for atmospheric effects in Arctic atmospheres, which are out-of-family compared with the lower-latitude oceans where algorithms have been developed and validated. In this regard, we show that the dual-view capability of the Sentinel-3 SLSTR instrument can provide a valuable reference. The need for significantly different approaches to quality control and assimilation is explained, along with the need for proxy observations under sea ice. The interdependence of these observation types and models requires a coordinated approach in order to achieve success.
Ocean surface currents are still poorly observed by satellite remote sensing in comparison to other sea surface variables, such as surface temperature and the surface wind and wave fields. Over recent years, a number of radar missions dedicated to ocean current mapping at the global scale have been proposed. Most of the proposed techniques take advantage of the Doppler shift measured by phase-resolving radars, which is associated with sea surface motion. Both observational and simulation efforts have demonstrated that, apart from the satellite motion, the total Doppler shift is composed of contributions from the surface winds and ocean waves, in addition to the underlying ocean surface current. Knowledge of the concurrent wind and wave components is essential for removing their impact and obtaining the geophysical Doppler shift. This Doppler shift residual can be directly converted to a line-of-sight current velocity. Successful applications of this technique to observe major ocean currents have been demonstrated with single-antenna synthetic aperture radar systems, which proves the feasibility of the Doppler shift method for further exploitation. A radar mission designed for concurrent measurements of wind, waves and current is under development. As a preparatory study for this mission, we conduct simulations of the Doppler shift from the sea surface for a wide variety of sea state conditions and different radar configurations (polarization, radar frequency, incidence angle, etc.). Results illustrate that the contribution of winds and waves constitutes a major part of the total Doppler shift, particularly when the underlying surface current is relatively weak. This further evidences the necessity of removing the wind/wave component for accurate retrieval of the surface current in future operational processing. Given the variable sensitivity of polarization to ocean waves, dual-polarized Doppler shifts bring more information than a single-polarized channel, which could be exploited in the radar system configuration. The simulation study strengthens our confidence in this pending mission to enhance the observational capability for ocean surface currents on top of other concepts.
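For reference, the standard conversion from a residual (geophysical) Doppler anomaly to a horizontal line-of-sight velocity is sketched below; the example frequency, incidence angle and Doppler value are illustrative.

```python
# Hedged sketch: convert a residual (geophysical) Doppler shift to a horizontal
# line-of-sight surface velocity, after the wind/wave contribution has been removed.
import numpy as np

c = 299_792_458.0                 # speed of light [m/s]

def radial_velocity(doppler_hz, radar_freq_hz, incidence_deg):
    """Horizontal line-of-sight velocity [m/s] from a geophysical Doppler anomaly."""
    wavelength = c / radar_freq_hz
    return wavelength * doppler_hz / (2.0 * np.sin(np.radians(incidence_deg)))

# Example: 20 Hz residual Doppler at C-band (5.405 GHz), 35 deg incidence
print(f"{radial_velocity(20.0, 5.405e9, 35.0):.2f} m/s")
```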
Using signals of opportunity (SoOP), i.e. signals already transmitted for uses different from remote sensing, is an advantageous way to carry out bi-static observations at a reduced cost, as the transmitter is already operated for its primary use. Consolidated examples of this are GNSS Radio Occultation measurements [e.g., 1] and GNSS Reflectometry measurements [e.g., 2, 3] done from space.
Different research projects have been carried out during the last decade in order to use SoOP at higher frequencies, e.g. [4, 5], and thus shorter wavelengths, to study the ocean surface. Parameters of interest are the sea surface roughness and sea surface altimetry. Candidate sources of opportunity are FM radio or digital satellite TV signals broadcast from geostationary orbit. In particular, digital satellite TV signals have a very large potential thanks to i) the large number of broadcasting satellites (~300), ii) their extremely large total bandwidth, which can span up to 2 GHz when many TV channels are considered, and iii) the stronger available power compared to GNSS signals. This results in an expected precision of a few cm in altimetric sea-surface observations [6].
In addition to these potentialities, and considering the larger available power, digital satellite TV signals can be used in bi-static geometries different from forward scattering. In these geometries, the Doppler signature of the reflected signals is also affected by the horizontal movement of the reflecting target. Thus, the horizontal velocity component of the ocean waves and the ocean current will affect the Doppler frequency of the reflected signal. In addition, as the wavelength of these signals is shorter (λ~2.5 cm), the Doppler frequency will also be larger than for GNSS signals. An experimental demonstration of estimating the water velocity of a river using digital satellite TV signals can be found in [7].
In August 2021, an experimental campaign was carried out on the island of Majorca. On top of its highest peak (Puig Major, 1480 m), two antennas were installed. The first one was used to acquire the direct TV signals transmitted from the ASTRA 1M (19.2E) satellite. The second antenna was pointed towards the sea to collect the signals that bounced off the sea surface, in a non-specular (back- and side-scattering) geometry. Direct and reflected signals were down-converted to IF, digitized at 80 Msps and stored on an SSD hard drive. Different data acquisitions were carried out in a variety of conditions: signals of different TV channels were used, thus providing diversity in wavelength, and the down-looking antenna was pointed at different elevation angles and azimuths with respect to the direction of the waves/currents.
The recorded data are being post-processed. We will present preliminary results of the experimental campaign in order to establish the main aspects to be considered in a future airborne or space-borne instrument for a cost-effective direct measurement of sea surface currents.
[1] Kursinski, E. R., Hajj, G. A., Schofield, J. T., Linfield, R. P., & Hardy, K. R. (1997). Observing Earth's atmosphere with radio occultation measurements using the Global Positioning System. Journal of Geophysical Research: Atmospheres, 102(D19), 23429-23465.
[2] Foti, G., Gommenginger, C., Jales, P., Unwin, M., Shaw, A., Robertson, C., & Rosello, J. (2015). Spaceborne GNSS reflectometry for ocean winds: First results from the UK TechDemoSat‐1 mission. Geophysical Research Letters, 42(13), 5435-5441
[3] Ruf, C. S., Atlas, R., Chang, P. S., Clarizia, M. P., Garrison, J. L., Gleason, S., ... & Zavorotny, V. U. (2016). New ocean winds satellite mission to probe hurricanes and tropical convection. Bulletin of the American Meteorological Society, 97(3), 385-395.
[4] Ribó, S., Arco, J. C., Oliveras, S., Cardellach, E., Rius, A., & Buck, C. (2014). Experimental results of an X-Band PARIS receiver using digital satellite TV opportunity signals scattered on the sea surface. IEEE Transactions on Geoscience and Remote Sensing, 52(9), 5704-5711.
[5] Shah, R., Garrison, J. L., & Grant, M. S. (2011). Demonstration of bistatic radar for ocean remote sensing using communication satellite signals. IEEE Geoscience and Remote Sensing Letters, 9(4), 619-623.
[6] Shah, R., Garrison, J., Ho, S. C., Mohammed, P. N., Piepmeier, J. R., Schoenwald, A., ... & Bradley, D. (2017, July). Ocean altimetry using wideband signals of opportunity. In 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS) (pp. 2690-2693). IEEE.
[7] Ribó, S., Cardellach, E., Fabra, F., Li, W., Moreno, V., & Rius, A. (2018, September). Detection and Measurement of Moving Targets Using X-band Digital Satellite TV Signals. In 2018 International Conference on Electromagnetics in Advanced Applications (ICEAA) (pp. 224-227). IEEE.
We undertake numerical experiments to show how observations of the geostrophic currents based on satellite data, such as the Sentinel-1 RVL products, would influence and potentially improve the "geodetic" (i.e. satellite-based only) estimation of the mean dynamic topography. The dynamic topography is the departure of the sea surface from a hypothetical ocean at rest (the geoid) resulting from various "dynamic" processes. In particular, the mean dynamic topography is related to the steady-state circulation in the oceans and is consequently meaningful for studying global mass and heat transport. In this study we restrict ourselves to a mean model of the dynamic topography and assume a static gravity field. A purely observation-driven approach is the joint estimation by means of a least-squares adjustment in which the sea surface height, as measured by satellite altimetry, is modelled as the sum of the geoid undulation and the dynamic topography. Supplementary to the altimetric observations are gravity field solutions obtained from space missions, e.g. GRACE and/or GOCE, which are required to separate the two signals. Such an approach yields a so-called geodetic model of the dynamic topography that is independent of strictly oceanographic models that implement ocean physics. This enables its use in the validation of oceanographic models, as well as providing input data for combined models ("data assimilation"). A great challenge of the geodetic approach lies in the inconsistencies in spatial resolution between the different observation types. While the altimetry data boast high resolution along-track (across-track resolution depends on the mission), the gravity field data are coarser by one to two orders of magnitude. Thus it is difficult to separate the higher-frequency signal that can be seen in the altimetry. For this to succeed it is necessary to introduce either higher-resolution gravity data and/or a sufficiently accurate and preferably homogeneously sampling source of information for the dynamic topography, both under the premise of being satellite-only. Our hypothesis is that a huge opportunity comes with Doppler-derived surface current velocity measurements from SAR satellites like Sentinel-1. Assuming that these observations can be reduced to reflect geostrophic surface currents, they can be directly mapped to the spatial gradient of the dynamic topography (see the sketch below). Such data points then provide exclusive information in the joint estimation, yielding a more stable separation. The presented study evaluates the potential gains that could be achieved by incorporating satellite-based measurements of the geostrophic surface currents, e.g. reduced Sentinel-1 RVL WV-mode type observations.
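The sketch below shows the geostrophic relation that links the surface current components to the gradient of the dynamic topography on a regular latitude/longitude grid, which is the quantity the reduced Doppler observations would constrain; the grid and the analytic topography are illustrative assumptions.

```python
# Hedged sketch: the geostrophic relation linking surface currents to the gradient
# of the (mean) dynamic topography eta on a regular lat/lon grid. Values are illustrative.
import numpy as np

g, omega, R = 9.81, 7.2921e-5, 6371e3
lat = np.linspace(60.0, 80.0, 81)             # degrees north
lon = np.linspace(-30.0, 30.0, 121)           # degrees east
eta = 0.1 * np.sin(np.radians(lon))[None, :] * np.cos(np.radians(lat))[:, None]  # MDT [m]

f = 2.0 * omega * np.sin(np.radians(lat))[:, None]     # Coriolis parameter
dy = R * np.radians(lat[1] - lat[0])
dx = R * np.cos(np.radians(lat))[:, None] * np.radians(lon[1] - lon[0])

deta_dy = np.gradient(eta, axis=0) / dy
deta_dx = np.gradient(eta, axis=1) / dx
u = -g / f * deta_dy                          # eastward geostrophic velocity [m/s]
v = g / f * deta_dx                           # northward geostrophic velocity [m/s]
```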
The spaceborne Doppler scatterometer is a newly developed radar for ocean surface wind and current remote sensing. Direct measurement of the global ocean surface current is of great scientific interest and application value for understanding multiscale ocean dynamics, air-sea interaction, ocean mass and energy balance, and the ocean carbon budget, as well as their variability under climate change. The DOPpler Scatterometer (DOPS) onboard the Ocean Surface Current multiscale Observation Mission (OSCOM) is a dual-frequency Doppler radar that can directly measure ocean surface currents with a high horizontal resolution of 5-10 km and a swath wider than 1000 km. DOPS is proposed as a real-aperture radar with a conically scanning system. The designed satellite orbit is a sun-synchronous orbit with an altitude of 600 km and a field angle of 46°-48°, corresponding to a ground swath wider than 1000 km. The geometry of the Doppler scatterometer observation is shown in Figure 1. The aperture of the antenna is about 2 m, so the beam width of DOPS is about 0.3° in Ka band and 0.8° in Ku band. Thus, at an orbital altitude of 600 km, the azimuth resolution is better than 5 km in Ka band and 10 km in Ku band, respectively.
At system level, an end-to-end simulation of DOPS is carried out to evaluate the Doppler accuracy. The system noise and nonlinear effects, such as the distortion caused by the power amplifier, are simulated to evaluate the Doppler detection errors. Based on these simulations, the transmit power of DOPS is chosen to ensure sufficient echo SNR for Doppler detection. For the power amplifier simulations, both the Rapp nonlinear model (for solid-state amplifiers) and the Saleh nonlinear model are established. For the noise simulations, random noise and phase noise are injected into the sea surface echoes. As a result, the nonlinear distortion of the power amplifier causes a Doppler error of about 0.01 m/s at low saturation. For wide-band random noise, a 10 dB SNR causes a Doppler error of about 0.02-0.03 m/s. The Doppler measurement errors due to incidence angle and observation azimuth are also evaluated; these errors are caused by satellite attitude determination, where the attitude comprises pitch, yaw and roll. The simulations show that, to achieve a current velocity accuracy better than 0.1 m/s, the measurement error of the incidence angle should be smaller than 0.001°, and the satellite velocity error should be smaller than 0.01 m/s.
Ku-band observations from scatterometers are more easily affected by rain than those collected at C-band, due to their shorter wavelength, although both frequencies are commonly applied for wind scatterometers. We proposed a support vector machine (SVM) model based on the analysis of Quality Control (QC) indicators of rain-screening ability, which has been validated with collocated winds from Ku-band and C-band scatterometers, OSCAT-2 and ASCAT-B onboard the ScatSat and MetOp-B satellites respectively, together with simultaneous rain rates from Global Precipitation Measurement (GPM) mission products. The principle of the SVM and its advantages for the rain-effect correction problem are also addressed. The established SVM model was evaluated on a test set not used in the training procedure. In the verification, QC-accepted winds from the C-band collocations are used as the truth, given their low rain sensitivity.
In this research, the data sets are first extended by including the collocations from the OSCAT-2 and ASCAT-A scatterometers. The wind speed range applied for the model has been extended based on the recent update of the QC indicator Joss, which is one of the inputs to the SVM. To validate the model, the probability density functions (PDFs) of the inputs and their rain-induced features are then checked in more detail. The results of the SVM on the new test set, which is not used in the training procedure, are analysed specifically, in addition to the statistics obtained by comparing the resulting winds with the truth. Along with the PDFs, cumulative distribution functions (CDFs) are also checked. A case study is conducted with simultaneous references from the medium-infrared advanced imager on board the Himawari-8 satellite.
We conclude that the corrected winds can provide improved-quality information for Ku-band scatterometers under rain, which can be vital for nowcasting applications, and that the effectiveness of machine-learning-based optimization methods for such problems is demonstrated.
In this research, we also discuss the application of joint SVMs for better representing the entangled wind-rain problem and the possibility of resolving both winds and rain rates within such a model (see the sketch below).
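A minimal sketch of such an SVM-based correction is given below, mapping rain-affected Ku-band winds plus a QC indicator to a corrected wind, with collocated C-band winds as the truth; all data are synthetic placeholders and the hyperparameters are illustrative, not those of the trained model.

```python
# Hedged sketch: support-vector regression mapping rain-affected Ku-band wind
# speed plus a QC indicator (e.g. Joss) to a corrected wind speed, trained with
# rain-insensitive C-band winds as truth. Data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wind_c = rng.uniform(2.0, 20.0, 3000)                        # "truth" C-band wind speed [m/s]
rain = rng.exponential(1.0, 3000)                            # rain rate proxy [mm/h]
joss = rain + rng.normal(0.0, 0.3, 3000)                     # QC indicator, rain-sensitive
wind_ku = wind_c + 0.8 * rain + rng.normal(0.0, 0.5, 3000)   # rain-contaminated Ku-band winds

X = np.column_stack([wind_ku, joss])
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.2)).fit(X, wind_c)

corrected = model.predict(X)
print("rms before/after correction:",
      np.sqrt(np.mean((wind_ku - wind_c) ** 2)).round(2),
      np.sqrt(np.mean((corrected - wind_c) ** 2)).round(2))
```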
Australia has a vast marine estate and one of the longest coastlines in the world. Offshore ocean wind measurements are needed by a variety of users, such as the offshore industries (oil and gas, fisheries, etc.), and for understanding wind climatology for offshore operations, ship navigation and coastal management. Australia also has a developing offshore wind energy industry. However, there are few sustained in-situ coastal ocean surface wind measurements around Australia; those that exist remain largely limited to reefs, jetties and coastal infrastructure, or are acquired commercially by offshore industry operators. One exception is the ocean wind record from the Southern Ocean Flux Station (SOFS) (Schulz et al., 2012), several hundred kilometers offshore south-west of Tasmania.
The Sentinel-1 A and B Synthetic Aperture Radar (SAR) satellites regularly map the wider Australian coastal region and provide an opportunity to exploit these data to compile an up-to-date database of coastal wind measurements. Such a high-resolution coastal winds database from SAR also complements global scatterometer wind measurements, as scatterometers provide limited data close to the shore. Two such valuable SAR wind databases already exist in other geographical regions: NOAA's operational SAR-derived wind products (Monaldo et al., 2016), primarily focused on North America, and the DTU (Technical University of Denmark) Wind Energy SAR winds database (Hasager et al., 2006), with a European focus. With this goal in sight, a regional calibrated coastal SAR winds database has been developed for the Australian region from the Sentinel-1 missions.
SAR winds are derived using input data from the Sentinel-1 Level-2 ocean winds (OWI) product (CLS, 2020), sourced from the Copernicus Australasia regional data hub. The OWI product contains all the input variables necessary to derive SAR winds, including the normalised radar cross section (NRCS), local incidence angle, satellite heading, and collocated model wind speed and direction from ECMWF. The algorithm applied for wind inversion is based on a variational Bayesian inversion approach, as proposed in Portabella et al. (2002) and the Sentinel-1 ocean wind algorithm definition document (CLS, 2019), with CMOD5.N as the underlying geophysical model function (GMF) (Hersbach, 2010); a simplified sketch of this type of inversion is given below. For consistency, the whole Sentinel-1 archive is processed using the same wind inversion scheme and GMF. The resulting spatial resolution of the derived winds is roughly 1 km, like the OWI product. The winds are also quality-flagged in a systematic manner by using the ratio of measured to simulated NRCS as a proxy for the quality of the retrieved winds and applying thresholds based on the median absolute deviation of this ratio. As in-situ measurements are not available in the region, calibration of the SAR wind speed is performed against matchups with the Metop-A and B scatterometer winds database (Ribal & Young, 2020), itself calibrated against NDBC buoy wind speeds. The calibrated SAR wind speeds are then validated against an independent altimeter wind speed database (Ribal & Young, 2019).
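The sketch below illustrates the Bayesian inversion idea for a single SAR cell by brute-force minimization of a cost function combining the NRCS misfit and a prior from the collocated model wind; the geophysical model function here is a simplified toy stand-in and not CMOD5.N, and all variances are illustrative.

```python
# Hedged sketch: Bayesian wind inversion for one SAR cell by brute-force search
# over wind speed and direction. The GMF below is a simplified stand-in, NOT CMOD5.N.
import numpy as np

def gmf(speed, wind_dir, look_dir, incidence):
    """Toy C-band-like GMF: NRCS (linear units) vs speed and relative direction."""
    chi = np.radians(wind_dir - look_dir)
    return (1e-3 * speed ** 1.5 * (1.0 + 0.4 * np.cos(chi) + 0.3 * np.cos(2.0 * chi))
            * np.cos(np.radians(incidence)) ** 2)

sigma0_obs = 0.012                      # observed NRCS (linear units)
prior_speed, prior_dir = 9.0, 220.0     # collocated ECMWF wind
var_s0, var_spd, var_dir = (0.1 * sigma0_obs) ** 2, 2.0 ** 2, 15.0 ** 2

speeds = np.linspace(0.5, 30.0, 300)
dirs = np.arange(0.0, 360.0, 2.5)
S, D = np.meshgrid(speeds, dirs)

cost = ((sigma0_obs - gmf(S, D, look_dir=80.0, incidence=35.0)) ** 2 / var_s0
        + (S - prior_speed) ** 2 / var_spd
        + ((D - prior_dir + 180.0) % 360.0 - 180.0) ** 2 / var_dir)

i, j = np.unravel_index(np.argmin(cost), cost.shape)
print(f"retrieved wind: {S[i, j]:.1f} m/s from {D[i, j]:.0f} deg")
```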
Such a high-resolution coastal winds archive has numerous applications. The intention is to explore these data in the future for suitability in offshore wind resource assessment, for better understanding of coastal wind climatology alongside regional model hindcast and reanalysis data, and for verification of model wind fields, whose quality is a major source of error in wave models.
References
Hasager, C.B., Barthelmie, R.J., Christiansen, M.B., Nielsen, M. and Pryor, S.C. (2006), Quantifying offshore wind resources from satellite wind maps: study area the North Sea. Wind Energ., 9: 63-74. https://doi.org/10.1002/we.190
Hersbach, H. (2010). Comparison of C-Band Scatterometer CMOD5.N Equivalent Neutral Winds with ECMWF, Journal of Atmospheric and Oceanic Technology, 27(4), 721-736.
Monaldo, F. M., Jackson, C. R., Li, X., Pichel, W. G., Sapper, J., & Hatteberg, R. (2016). NOAA high resolution sea surface winds data from Synthetic Aperture Radar (SAR) on the Sentinel-1 satellites. NOAA National Centers for Environmental Information. Dataset. https://doi.org/10.7289/v54q7s2n
Portabella, M., Stoffelen, A., and Johannessen, J. A., (2002). Toward an optimal inversion method for synthetic aperture radar wind retrieval, J. Geophys. Res., 107(C8), doi:10.1029/2001JC000925.
Ribal, A., Young, I.R. (2019). 33 years of globally calibrated wave height and wind speed data based on altimeter observations. Sci Data 6, 77. https://doi.org/10.1038/s41597-019-0083-9
Ribal, A., & Young, I. R. (2020). Calibration and Cross Validation of Global Ocean Wind Speed Based on Scatterometer Observations, Journal of Atmospheric and Oceanic Technology, 37(2), 279-297. https://doi.org/10.1175/JTECH-D-19-0119.1
Schulz, E. W., Josey, S. A., & Verein, R. (2012). First air-sea flux mooring measurements in the Southern Ocean. Geophysical Research Letters, 39(16). http://dx.doi.org/10.1029/2012GL052290
Sentinel-1 Ocean Wind Fields (OWI) Algorithm Theoretical Basis Document (ATBD). (2019). Collecte Localisation Satellites (CLS). Ref: S1-TN-CLS-52-9049 Issue 2.0. Jun 2009.
Sentinel-1 Product Specification. (2020). Collecte Localisation Satellites (CLS). Ref: S1-RS-MDA-52-7441. Issue 3.7. Feb 2020.
EUMETSAT, the European Organisation for Meteorological Satellites, is expanding its scope beyond supporting meteorology, environment and climate monitoring on a global scale, to oceanography. To this end, EUMETSAT operates satellites and data processing systems, including Ocean and Sea Ice Satellite Application Facilities, to provide services which are of high value to ocean monitoring and prediction.
Current EUMETSAT programmes, as well as the European Copernicus programme of which EUMETSAT is a delegated entity, provide operational observations of sea and sea ice. The EUMETSAT marine portfolio includes surface temperature, ocean vector winds, sea surface topography, sea ice parameters, ocean colour and other key marine products.
We will review recent innovations in the EUMETSAT stream of marine satellite data, from the Sentinel-3 constellation, the Sentinel-6 Michael Freilich mission and the EPS/ASCAT mission. Upcoming and planned evolutions responding to the needs of ocean monitoring and prediction users will be presented.
The ocean surface wind vector is of paramount importance in a broad range of applications, including wave forecasting, weather forecasting and storm surge modelling [R1-R5].
The primary remote sensing instrument for wind field retrieval from space is the microwave scatterometer. Although it provides spatial sampling adequate for several climatological and mesoscale applications, severe limitations arise when scatterometer products are used for regional-scale applications. In contrast, the Synthetic Aperture Radar (SAR) achieves a finer spatial resolution and therefore has the potential to provide wind field information with much more spatial detail. This can be important in several settings, such as semi-enclosed seas, straits, marginal ice zones and coastal regions, where scatterometer measurements are contaminated by backscatter from land and ice and the wind vector fields are often highly variable. In such regions, wind field estimates retrieved from SAR images are very desirable.
In this study, the main outcomes of the Italian Space Agency (ASI) funded project APPLICAVEMARS, whose goal is estimating the ocean surface wind vector using L-, C- and X-band SAR imagery, are presented. The wind processor developed to estimate the sea surface wind field from L-band SAOCOM, C-band Sentinel-1A/B and X-band CSK/CSG SAR imagery is described through selected showcases where:
a) the scatterometer-based Geophysical Model Function is forced using both external (SCAT/ECMWF) and SAR-based wind directions, the latter evaluated at high spatial resolution (1 km) by the developed methodologies based on the 2D Continuous Wavelet Transform [6] and a Convolutional Neural Network [7] (a retrieval sketch is given after point b);
b) the wind field is estimated over collocated L-, C- and X-band SAR imagery to study both the aspects related to the GMFs and those dependent on the capacity of the different SAR frequencies to reveal the wind spatial structures.
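As an illustration of the GMF forcing in point a), the sketch below inverts a generic model function for wind speed with the wind direction fixed externally; the gmf callable stands in for CMOD5.N or an L-/X-band equivalent, and the toy function is only there to make the example runnable, it is not the project's GMF.

import numpy as np
from scipy.optimize import minimize_scalar

def retrieve_speed(nrcs_obs, incidence_deg, rel_dir_deg, gmf):
    """Find the wind speed whose simulated NRCS best matches the observation,
    with the wind direction (relative to the antenna look) fixed externally."""
    def cost(speed):
        return (gmf(speed, rel_dir_deg, incidence_deg) - nrcs_obs) ** 2
    return minimize_scalar(cost, bounds=(0.2, 50.0), method="bounded").x

def toy_gmf(speed, rel_dir_deg, inc_deg):
    # Placeholder model function with upwind/crosswind harmonics, for illustration only.
    phi = np.deg2rad(rel_dir_deg)
    return (1e-3 * speed**1.5
            * (1.0 + 0.4 * np.cos(phi) + 0.2 * np.cos(2 * phi))
            * np.cos(np.deg2rad(inc_deg)))

print(retrieve_speed(nrcs_obs=0.02, incidence_deg=35.0, rel_dir_deg=60.0, gmf=toy_gmf))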
[R1] Chelton D. B., M. G. Schlax, M. H. Freilich, R. F. Milliff, 2004: Satellite measurements reveal persistent small-scale features in ocean winds. Science, 303, 978- 983, doi:10.1126/science.1091901.
[R2] Lagerloef, G., R. Lukas, F. Bonjean, J. Gunn, G. Mitchum, M. Bourassa, and T. Busalacchi, 2003: El Niño tropical Pacific Ocean surface current and temperature evolution in 2002 and outlook for early 2003. Geophys. Res. Lett., 30, 1514, doi:10.1029/2003GL017096.
[R3] Gierach, M. M., M. A. Bourassa, P. Cunningham, J. J. O'Brien, and P. D. Reasor, 2007: Vorticity-based detection of tropical cyclogenesis. J. Appl. Meteor. Climatol., 46, 1214-1229, doi:10.1175/JAM2522.1.
[R4] Isaksen, L., A. Stoffelen, 2000: ERS-Scatterometer wind data impact on ECMWF's tropical cyclone forecasts. IEEE Trans. Geosci. Rem. Sens., 38, 1885-1892.
[R5] Morey, S. L., S. R. Baig, M. A. Bourassa, D. S. Dukhovskoy, and J. J. O'Brien, 2006: Remote forcing contribution to storm-induced sea level rise during Hurricane Dennis, Geophys. Res. Lett., 33, L19603, doi:10.1029/2006GL027021.
[6] Zecchetto, S., Wind Direction Extraction from SAR in Coastal Areas, Remote Sensing,10(2), 261, 2018 (doi:10.3390/rs10020261)
[7] Zanchetta, A. and S. Zecchetto, Wind direction retrieval from Sentinel-1 SAR images using ResNet, Remote Sensing of Environment, 253, 2021 (https://doi.org/10.1016/j.rse.2020.112178)
Microwave scatterometers play a key role when dealing with operational surface wind measurements. However, their relatively coarse spatial resolution triggered the development of wind retrieval techniques based on synthetic aperture radar (SAR) measurements. Commonly used techniques are based on the normalized radar cross section (NRCS) or radar backscatter and several empirical geophysical model functions (GMFs), originally developed to exploit C-band VV-polarized scatterometer measurements, have been tuned and recalibrated to deal with SAR measurements at different frequencies and polarizations [1]-[3]. The radar backscatter is sensitive to both wind speed and wind direction; hence, the latter must be available to constrain the GMFs [4] when retrieving the wind speed. Such a technique is limited by the fact that errors in the wind direction estimation are propagated into the wind speed estimation [5].
The so-called azimuth cut-off technique, originally proposed by Kerbaol et al. [6] to derive significant wave height (SWH) and sea surface wind speed, requires neither calibration of the data nor any a priori information on wind direction and has therefore recently gained more attention [7].
In this study, sea surface wind estimation is addressed using both scatterometer-based GMFs and the azimuth cut-off technique on a data set of Sentinel-1A/B SAR imagery for which collocated HY-2A scatterometer wind estimates (on a 25 km spatial grid) are available. The proposed rationale aims at proving that SAR NRCS, averaged on a 25 km grid, is consistent with the HY-2A NRCS. Two steps will be accomplished: 1) estimating sea surface wind speed from SAR imagery through the scatterometer-based GMFs forced by the HY-2A wind direction, and contrasting it with the HY-2A wind speed and with estimates obtained using the azimuth cut-off; 2) estimating the wind direction from the scatterometer-based GMF forced by the azimuth cut-off wind speed, and contrasting it with the HY-2A wind direction.
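For reference, a minimal sketch of the azimuth cut-off estimate (fitting a Gaussian to the along-track autocorrelation of the detrended SAR intensity) is given below; spacings, window sizes and the synthetic example are assumptions, not the study's implementation.

import numpy as np
from scipy.optimize import curve_fit
from scipy.ndimage import uniform_filter1d

def azimuth_cutoff(intensity, az_spacing_m, max_lag=128):
    """Estimate the azimuth cut-off wavelength (m) from a 2-D SAR intensity block."""
    img = intensity - intensity.mean()
    # Along-track (axis 0) autocorrelation via FFT, averaged over range cells.
    spec = np.abs(np.fft.fft(img, axis=0)) ** 2
    acf = np.fft.ifft(spec, axis=0).real.mean(axis=1)
    acf = acf[:max_lag] / acf[0]
    lags = np.arange(max_lag) * az_spacing_m
    gauss = lambda x, lc: np.exp(-(np.pi * x / lc) ** 2)
    popt, _ = curve_fit(gauss, lags, acf, p0=[200.0])
    return popt[0]

# Synthetic example: noise smoothed along azimuth to mimic the cut-off damping
field = uniform_filter1d(np.random.rand(512, 512), size=20, axis=0)
print(azimuth_cutoff(field, az_spacing_m=10.0))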
[1] A. A. Mouche et al., “On the use of Doppler shift for sea surface wind retrieval from SAR,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 7, pp. 2901–2909, Jul. 2012.
[2] G. Grieco, F. Nirchio, and M. Migliaccio, “Application of state-of-the- art SAR X-band geophysical model functions (GMFs) for sea surface wind (SSW) speed retrieval to a data set of the Italian satellite mission COSMO-SkyMed,” Int. J. Remote Sens., vol. 36, no. 9, pp. 2296–2312, 2015.
[3] Y. Ren, S. Lehner, S. Brusch, X. Li, and M. He, “An algorithm for the retrieval of sea surface wind fields using X-band TerraSAR-X data,” Int. J. Remote Sens., vol. 33, no. 23, pp. 7310–7336, 2012.
[4] C. C. Wackerman, C. L. Rufenach, R. A. Shuchman, J. A. Johannessen, and K. L. Davidson, “Wind vector retrieval using ERS-1 synthetic aperture radar imagery,” IEEE Trans. Geosci. Remote Sens., vol. 34, no. 6, pp. 1343–1352, Nov. 1996.
[5] M. Portabella, A. Stoffelen, and J. A. Johannessen, “Toward an optimal inversion method for synthetic aperture radar wind retrieval,” J. Geophys. Res., Oceans, vol. 107, no. C8, pp. 1-1–1-13, 2002.
[6] V. Kerbaol, B. Chapron, and P. W. Vachon, “Analysis of ERS-1/2 synthetic aperture radar wave mode imagettes,” J. Geophys. Res., Oceans, vol. 103, no. C4, pp. 7833–7846, 1998.
[7] V. Corcione, G. Grieco, M. Portabella, F. Nunziata and M. Migliaccio, “A novel azimuth cut-off implementation to retrieve sea surface wind speed from SAR imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 6, pp. 3331-3340, 2018.
A large share of the world's population lives along the coast, so their lives are heavily affected by the meteorological phenomena that characterize these areas. Coastal winds play a particularly relevant role: sea breezes, katabatic flows and orographic winds in general can strongly affect local microclimates and, for example, wind energy potential. Furthermore, they play a fundamental role in the generation of local ocean currents and in the dispersion of air pollutants.
As such, accurate and densely sampled coastal wind observations are of paramount importance to modern societies. Scatterometer-derived wind vectors are potentially very useful for this purpose, but the excessively land-contaminated radar footprints must first be removed, while the slightly contaminated ones should be corrected by means of a land contribution ratio (LCR) based Normalized Radar Cross Section (NRCS) correction scheme. In addition, the NRCS noise should be carefully characterized in order to properly weight the backscatter measurements contributing to the wind field retrieval.
An assessment of the noise (Kp) affecting the NRCS measurements of the SeaWinds scatterometer (onboard QuikSCAT) is carried out in this study. An empirical method is used to derive Kp (Kp'), which is then compared to the median of the Kp values (Kp'') provided in the Level 1B Full Resolution (L1B) file with orbit number 40651, dated 10 April 2007, and the main differences are discussed. A sensitivity analysis is carried out to assess the presence of any dependencies with respect to (w.r.t.) different wind regimes, the type of scattering surface, the scatterometer view and the polarization of the signal. In addition, the presence of any biases is assessed and discussed. Finally, a theoretical NRCS distribution model is proposed and validated against real measurements.
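As a rough illustration of how an empirical Kp' can be formed (the actual procedure of the study may differ), the sketch below bins slice NRCS measurements by collocated wind speed and computes the normalised standard deviation per bin.

import numpy as np

def empirical_kp(sigma0, wind_speed, bins=np.arange(0.0, 26.0, 1.0)):
    """Per-bin Kp' = std(sigma0) / mean(sigma0), in linear units."""
    idx = np.digitize(wind_speed, bins)
    kp = np.full(len(bins), np.nan)
    for i in range(1, len(bins)):
        s = sigma0[idx == i]
        if s.size > 30:               # require enough samples per bin
            kp[i - 1] = s.std() / s.mean()
    return kp

# Toy example with a noisy synthetic NRCS-wind relation
ws = np.random.uniform(3.0, 20.0, 100000)
s0 = 1e-3 * ws**1.8 * (1.0 + 0.1 * np.random.randn(ws.size))
print(empirical_kp(s0, ws)[:10])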
The main outcomes of this study show that H-pol measurements are noisier than V-pol ones for similar wind speed regimes. In addition, the noise decreases with increasing NRCS values, in line with expectations. Furthermore, Kp' may differ substantially from Kp'', especially for the peripheral measurements, with differences of up to 20%. In particular, the Kp values provided for the outermost slices appear underestimated, especially for the H-pol measurements with indices 6 and 7. In addition, the Kp' values estimated over the sea surface are lower than those estimated over the other scattering surface types; this trend is not seen for Kp'', for which the differences are almost absent. Furthermore, inter-slice biases of up to 0.8 dB are present for H-pol acquisitions, while they reach only 0.3 dB for V-pol ones, in both cases increasing with the relative distance between the slices, in line with the general Geophysical Model Function (GMF) sensitivity as a function of incidence angle. These biases have a non-flat trend w.r.t. the acquisition azimuth angle for both polarizations. These small variations may be due to changes in the wind speed and direction distribution for each bin.
The theoretical NRCS distribution proves to be effective. It can be used both for simulation studies and for checking the accuracy of the NRCS noise.
Introduction
With the increasing volume of marine traffic and the growing number of man-made objects in the oceans, such as ships, oil platforms and many others, it becomes indispensable to detect these objects for the benefit of maritime applications. The abundance of SAR data and its free and open availability encourage researchers and industry to exploit this unique remote sensing data source to characterize different scatterers in the ocean.
SAR wind retrieval depends mainly on the Bragg scattering mechanism [1], the scattering of radar pulses by centimetre-scale waves riding on top of longer waves. The Normalized Radar Cross Section (NRCS) produced by these small waves can be related to wind speed using geophysical model functions such as CMOD5 [2]. Nevertheless, the presence of any scattering mechanism other than Bragg scattering can degrade the accuracy of the wind speed retrieved from SAR. Wind speed accuracy matters for many applications, such as offshore wind energy. Therefore, quad-polarized SAR data can be the key to improving the accuracy of SAR wind retrieval and to creating quality flags for SAR wind maps based on the different scattering mechanisms occurring in the imaged scene itself.
Different detection approaches can be used to characterize anomalous pixels in SAR scenes. Constant False Alarm Rate (CFAR) is one of the prominent algorithms used to detect ships in the ocean based on a threshold value. Nevertheless, this approach depends greatly on the background clutter distribution; consequently, the CFAR algorithm may have severe problems in heterogeneous areas [3].
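For context, a minimal cell-averaging CFAR sketch is given below; the window sizes and threshold factor are illustrative, and real detectors model the clutter distribution explicitly (e.g. Gamma- or K-distributed backgrounds).

import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, guard=5, background=21, threshold_factor=5.0):
    """Boolean detection mask: pixel exceeds threshold_factor times the local
    background mean, estimated in a ring between a guard and a background window."""
    big = uniform_filter(intensity, size=background)
    small = uniform_filter(intensity, size=guard)
    n_big, n_small = background**2, guard**2
    # Ring mean = (sum over background window - sum over guard window) / count
    ring_mean = (big * n_big - small * n_small) / (n_big - n_small)
    return intensity > threshold_factor * ring_mean

mask = ca_cfar(np.random.gamma(shape=1.0, scale=1.0, size=(500, 500)))
print(mask.sum(), "detections")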
Datasets
Many satellite sensors are able to provide different polarization modes. Sentinel-1 (2014-present) collects C-band dual-polarization data over land worldwide as well as over priority coastal zones. PALSAR-1 (2006-2011) and PALSAR-2 (2014-present) are side-looking phased-array L-band SAR sensors whose polarimetric-mode (PLR) data are available. Finally, RADARSAT-1 (1995-2013) and RADARSAT-2 (2007-present) provide standard, wide and fine quad-polarization modes. However, these systems face some challenges: their technology is complex, and the product swath is narrower than for a single-polarized system.
Theory
The diversity of fully polarimetric (PolSAR) datasets allows a more complete characterization of scatterers than dual- or single-polarized images. Each pixel in a fully polarimetric scene can be represented by a scattering matrix (S), whose components are the complex scattering amplitudes measured for the different combinations of V and H polarization on transmit and receive, as follows:
S = | S_HH  S_HV |
    | S_VH  S_VV |
where S_HV is the backscattered complex coefficient for horizontal polarization on transmission and vertical polarization on reception; the other terms are defined analogously.
The scattering matrix describes the complete scattering process and can be employed directly to characterize a deterministic (single) scatterer, but this is not the case near offshore wind farms, where S is random because of the different scatterers that may be present in the study area. Speckle filtering is a crucial step in PolSAR processing to estimate the covariance (C) and coherency (T) matrices accurately while preserving spatial resolution. Several general principles can be reviewed or implemented to select the approach that best fits our data; among others, one- and multi-dimensional Gaussian distributions and the Wishart distribution are used to study the properties of distributed scatterers through the estimation of the C and T matrices. In other words, the C and T matrices are obtained from a vectorization of the S matrix, providing a new formulation to describe the information contained in S [4].
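A minimal sketch of this vectorization, assuming single-look complex channels and reciprocity, forms the Pauli vector and a boxcar-multilooked coherency matrix T; the array names and number of looks are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def coherency_matrix(S_hh, S_hv, S_vh, S_vv, looks=7):
    """Return the 3x3 coherency matrix T per pixel (reciprocity assumed)."""
    s_hv = 0.5 * (S_hv + S_vh)                                          # symmetrised cross-pol
    k = np.stack([S_hh + S_vv, S_hh - S_vv, 2.0 * s_hv]) / np.sqrt(2.0)  # Pauli vector
    T = np.einsum('i...,j...->ij...', k, k.conj())                      # outer product per pixel
    # Boxcar multilooking (real and imaginary parts separately) to reduce speckle.
    size = (1, 1, looks, looks)
    return uniform_filter(T.real, size=size) + 1j * uniform_filter(T.imag, size=size)

# Example on synthetic single-look complex data
rng = np.random.default_rng(0)
S = [rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128)) for _ in range(4)]
print(coherency_matrix(*S).shape)   # (3, 3, 128, 128)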
Methodology
This study targets different full and dual polarimetric datasets over offshore wind farm areas. The methodology is illustrated in the flowchart diagram. It includes a validation approach in which the outputs of the polarimetric decompositions are validated against conventional algorithms such as CFAR, the likelihood ratio test (LRT) PolSAR ship detector, and a Faster Region-based Convolutional Neural Network (Faster R-CNN) model. Furthermore, the non-Bragg scattering areas will be handled using a dedicated deep learning (DL) model that fills these areas with appropriate NRCS values from which wind speed can be inferred.
Expected outcomes
Work is ongoing to reach the end product of the workflow: wind fields retrieved from SAR with added information about wind speed quality. This research is expected to benefit many maritime applications, especially offshore wind energy.
Acknowledgements
This PhD project belongs to the Train2Wind network. The Marie Skłodowska-Curie Innovative Training Network Train2Wind has received funding from the European Union's Horizon 2020 programme. We thank the European Space Agency for providing the fully polarimetric datasets.
References
[1] G. R. Valenzuela, “Theories for the interaction of electromagnetic and oceanic waves - A review,” Boundary-Layer Meteorol., vol. 13, no. 1–4, pp. 61–85, 1978, doi: 10.1007/BF00913863.
[2] H. Hersbach, “Comparison of C-Band scatterometer CMOD5.N equivalent neutral winds with ECMWF,” J. Atmos. Ocean. Technol., vol. 27, no. 4, pp. 721–736, 2010, doi: 10.1175/2009JTECHO698.1.
[3] C. Liu, P. W. Vachon, R. A. English, and N. Sandirasegaram, “Ship detection using RADARSAT-2 Fine Quad Mode and simulated compact polarimetry data,” no. February, p. 74, 2010.
[4] Y. Yamaguchi, Polarimetric Synthetic Aperture Radar. 2020.
High-resolution, accurate coastal winds are of paramount importance for a variety of applications, both civil and scientific. For example, they are important for monitoring coastal phenomena such as orographic winds, coastal currents and the dispersion of atmospheric pollutants, and for the deployment of offshore wind farms. In addition, they are fundamental for improving the forcing of regional ocean models and, consequently, the forecasting of extreme events such as the Acqua Alta that often occurs in the Venice lagoon.
Scatterometer-derived winds represent the gold standard. However, their use in coastal areas is limited by land contamination of the backscatter Normalized Radar Cross Section (NRCS) measurements. Nonetheless, coastal sampling may be improved if the Spatial Response Function (SRF) orientation and the land contamination are properly considered in the wind retrieval processing chain.
This study focuses on improving the coastal processing of the SeaWinds scatterometer onboard QuikSCAT as part of a EUMETSAT study in the framework of the Ocean and Sea Ice Satellite Application Facility (OSI SAF).
In particular, the analytical model of the SRF is implemented with the aim of computing the so-called Land Contribution Ratio (LCR), which is, by definition, the portion of the footprint area covered by land. This index is then used for a double purpose: a) removing the excessively contaminated measurements; b) implementing an LCR-based NRCS correction scheme for the relatively low-contaminated measurements. A second SRF estimate is obtained from a pre-computed Look-Up Table (LUT) of SRFs that are parameterized with respect to (w.r.t.) the orbit time, the latitude of the measurement centroid and the azimuth antenna angle.
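By way of illustration, a hypothetical LCR computation and a naive correction are sketched below: the LCR is the SRF-weighted land fraction of the footprint, and low-contamination footprints are corrected by removing an assumed land contribution. The correction form is a placeholder, not necessarily the scheme implemented in the study.

import numpy as np

def land_contribution_ratio(srf_weights, land_mask):
    """LCR = integral of the SRF over land / integral of the SRF over the footprint."""
    w = np.asarray(srf_weights, dtype=float)
    land = np.asarray(land_mask, dtype=float)   # 1 over land, 0 over sea
    return float((w * land).sum() / w.sum())

def correct_nrcs(sigma0_lin, lcr, sigma0_land_lin):
    """Naive LCR-based correction: subtract the assumed land contribution and
    renormalise by the ocean fraction (only for low-contamination footprints)."""
    return (sigma0_lin - lcr * sigma0_land_lin) / (1.0 - lcr)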
Finally, the useful measurements (including the LCR-corrected ones) are averaged in order to obtain integrated measurements per beam or view, which are then input to the wind field retrieval processor. Two different averaging procedures, i.e. a boxcar and a noise-weighted averaging, are implemented.
A detailed comparison between the analytical and the LUT-based SRF models is shown, and the consistency of the derived LCR indices is verified against the coastline. A sensitivity analysis of the LCR-based NRCS correction scheme w.r.t. the LCR threshold is carried out. The effects of both averaging procedures on the retrieved winds are carefully analysed. Finally, the retrieved winds are validated against coastal buoys and their accuracy is assessed. Preliminary results will be presented and discussed at the conference.
The STP-H8 satellite mission, sponsored by the US Department of Defense (DoD), aims to demonstrate new low-cost microwave sensor technologies for weather applications. H8 will carry the Compact Ocean Wind Vector Radiometer (COWVR) and Temporal Experiment for Storms and Tropical Systems (TEMPEST) instruments to be launched and hosted on the International Space Station (ISS) in December 2021. This presentation will highlight the science and applications that can be enabled and enhanced by measurements from this mission. COWVR and TEMPEST together provide real-time, simultaneous measurements of ocean-surface vector winds (OVW), precipitation, precipitable water, water vapor profile and other atmospheric variables. Because of the ISS’ non-sun-synchronous orbit, these measurements will span across different times of the day for a given location. Similar to the capabilities provided by the ISS-RapidScat (2014-2016), COWVR’s OVW measurements can enhance the research of diurnal and semi-diurnal wind variability and facilitate the inter-calibration of OVW measurements from the sun-synchronous scatterometers. In particular, the OVW measurements from COWVR combined with those from the currently-operating satellite scatterometers make it feasible to estimate diurnal and semi-diurnal cycles of the OVW. Moreover, the simultaneous measurements of air-sea interface and atmospheric variables provided by COWVR and TEMPEST offer a unique opportunity to advance science and applications for weather, climate, and air-sea interaction. The real-time measurements from the mission are amenable for operational applications.
Predicting the high sea states generated in cyclonic conditions is a continuous challenge for operational meteorological centres in order to ensure the best wave forecasts in the open ocean and coastal areas. The accuracy of the wind forcing in such extreme conditions is important for capturing the best initial conditions for swell propagation over long distances. Recently, winds from the SMAP and SMOS L-band radiometers have shown their ability to observe strong winds exceeding 40 m/s in cyclonic conditions (Reul et al. 2012). The objective of this study is to assess the impact of using radiometer winds on wave forecasting during cyclone seasons in the North Atlantic, Pacific and Indian Oceans. The work consists of implementing a hybrid wind forcing composed of radiometer winds and model winds. A deep learning technique is applied to ensure consistent cyclogenesis conditions. Simulations with the wave model MFWAM have been performed for cyclone and hurricane cases that caused severe damage (Bejisa, Lorenzo 2019, Hagibis 2019, etc.). The model results have been validated with altimeter wave data. The results show a positive impact on significant wave height, with a clear reduction of bias and scatter index. We also point out the good consistency between model and Sentinel-1 wave spectra near the cyclone trajectories.
In this study, using a 1D ocean mixed-layer model, we also investigated the impact of the improved wind forcing on ocean circulation and key parameters such as temperature, currents and surface stress. Further results and comments will be presented in the final paper.
Over the ocean, an altimeter's measurements of normalised backscatter (sigma0) are interpreted as a measure of surface roughness, which is linked empirically with wind speed. However, there are many other factors, both instrument-specific and environmental, that affect the observed values. Observations at different radar wavelengths record the sea surface roughness at different spatial scales; I utilise the close relationship between backscatter strengths at the two most common altimeter frequencies (Ku-band and C-band) to highlight that other factors are present. Because of their differing sensitivities to scales of roughness, the sigma0 difference, sigma0_Ku minus sigma0_C, is not a simple offset but has a peak for wind conditions around 6 m/s. Instrument and processing options tend to alter the sigma0 values uniformly across a wide range of values, whereas environmental conditions tend to affect the shape of the relationship, as they alter the interplay of different roughness scales or their radar-scattering properties.
As there is no universally recognised absolute calibration of altimeter sigma0 values, all instruments require a simple bias to bring them to a common scale. However, there is also a dependence on the retracking algorithm used, with the Maximum Likelihood Estimator MLE3 giving fairly robust estimates, whereas MLE4 and the standard SAMOSA retracker for SAR waveforms show a strong dependence on the inferred mispointing and so need an adjustment for that. It has also been noted that the TOPEX and Jason altimeters, in their non-sun-synchronous orbit, have additional biases with a period of 58.77 days linked to the degree of solar exposure of the instrument in orbit.
There is an atmospheric effect that changes the sigma0_Ku values: attenuation by liquid rain. This is only significant for a small percentage of observations, but the effect is more pronounced at the Ka-band of AltiKa and the future CRISTAL mission. The most marked environmental effect is due to wave height. At very low winds, a change in wave height of 1 m can affect sigma0 by ~0.15 dB, but this causes minimal bias in wind speed estimates owing to the low sensitivity of all wind speed algorithms to sigma0 in this regime. There is also an effect at high wind speeds which remains to be accurately quantified. Finally, the sigma0_Ku-sigma0_C relationship appears to shift by about 0.15 dB on moving from tropical to polar waters. This confirms a previously reported temperature effect, although the size of the change is a little less than theoretically expected.
All these factors are reviewed within the scope of efforts to remove biases in wind speed algorithms that are either regional in nature or vary with satellite, so as to further efforts to develop a homogeneous altimetric wind speed product.
The figure shows the mean sigma0_Ku-sigma0_C relationships for four current altimetric satellites. On the left are the curves for each after initial bias adjustments, showing a divergence in behaviour at very high sigma0 (low winds); on the right are the variations of each curve with sea surface temperature.
Scatterometer data over the ocean are assimilated, at ECMWF and other NWP centres, in the form of ambiguous near-surface ocean wind vector information. What a scatterometer really measures is the surface radar backscatter, which is essentially related to the directional roughness of the sea surface; this roughness is fundamentally driven by the surface stress caused by the relative motion between the atmospheric wind and the underlying ocean rather than by the wind itself. Owing to the lack of accurate in-situ surface stress measurements over the ocean to validate and calibrate scatterometer measurements, the scatterometer observations have historically been interpreted and calibrated in terms of (equivalent neutral) wind rather than wind stress.
An ongoing EUMETSAT-funded project at ECMWF is investigating how to further increase the value of scatterometer observations in Numerical Weather Prediction (NWP) by assessing (and implementing) the assimilation of surface stress rather than wind components. In a coupled ocean-atmosphere data assimilation system, ASCAT measurements assimilated as surface stress can, in principle, provide information on the atmospheric wind while simultaneously constraining the ocean circulation.
An intermediate assimilation approach known as “stress-equivalent winds” (following de Kloe et al., 2017) is also being explored. This approach includes the sensitivity to air density variations.
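A minimal sketch of that conversion, following the definition in de Kloe et al. (2017), scales the 10 m equivalent-neutral wind by the square root of the ratio of the local air density to a reference density; the numbers are illustrative.

import numpy as np

RHO_REF = 1.225  # kg m-3, reference air density

def stress_equivalent_wind(u10n, rho_air):
    """U10S = sqrt(rho_air / rho_ref) * U10N (applies to components or speed)."""
    return np.sqrt(np.asarray(rho_air) / RHO_REF) * np.asarray(u10n)

print(stress_equivalent_wind(u10n=8.0, rho_air=1.18))  # about 7.85 m/s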
A change in the type of assimilated observation variable requires an adaptation of the observation operator and of the tangent-linear and adjoint codes to enable the minimization in the 4D-Var analysis. New stress and stress-equivalent wind observation operators have been developed (together with their tangent-linear and adjoint versions), tested and integrated into the Integrated Forecasting System (IFS). The error statistics assigned to the observations in the 4D-Var have been revised, and a wind-dependent error formulation has been characterized for ASCAT surface stress observations. NWP observing system experiments assimilating ASCAT observations as either winds, stress or stress-equivalent winds are being performed. The impact of ocean currents on the assimilation of ASCAT observations is also under investigation.
The results of the study will be presented and discussed.
The University of Miami's Center for Southeastern Tropical Advanced Remote Sensing acquired over 100 Synthetic Aperture Radar (SAR) images of the California Monterey Bay region for the ongoing Coastal Land-Air-Sea Interaction Project. Approximately 30 of these images include signatures of nonlinear internal waves (NIWs). Eight Air-Sea Interaction Spar (ASIS) buoys deployed in the region of interest provide field measurements within the SAR image swath. Surface roughness is most commonly thought of as a result of the wind blowing over the ocean surface, and SAR senses the short-scale ocean surface roughness by means of Bragg scattering. Although internal waves are subsurface waves, they are visible in SAR data because they modulate the surface currents, increasing roughness at the leading edge of the internal wave and decreasing it at the trailing edge. Changes in surface roughness alter the drag coefficient, which is a key parameter for estimating wind stress. It has been speculated that NIWs can drive wind velocity and stress variance relative to the mean atmospheric flow, suggesting that a surface roughness-wind feedback mechanism exists.
Using the SAR images to confirm the presence of NIWs, we estimate the likely time of arrival at an ASIS buoy site if the wave is not already intersecting a buoy at the time of the image acquisition. The ultrasonic anemometer mounted on ASIS provides the three components of velocity needed to derive the turbulent fluctuations of wind velocity, along with the product term u'w'. The Morlet wavelet transform is used to decompose the signal into both the frequency and time domains to study the evolution of features. The length scales corresponding to a particular frequency band of enhanced energy in the wavelet plots are compared to the SAR-measured NIW wavelengths. We take the covariance between u' and w' and integrate over frequency to see if this proposed NIW-induced change in wind stress occurs over the same frequencies. To assess the contribution of NIWs to the total air-sea flux, we take the cumulative cospectral sum of u' and w' (components of the momentum flux). A neighbouring ASIS buoy not in the path of an NIW is used to represent the background atmospheric flow. We will present early results and discuss the implications of NIWs for the momentum flux and whether they should be considered when studying fine-scale ocean-atmosphere interactions.
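The flux diagnostic can be sketched as follows (a hedged example, with sampling rate, detrending and segment length as assumptions): the cospectrum of u' and w' is the real part of their cross-spectrum, and its cumulative integral over frequency approaches the covariance <u'w'>.

import numpy as np
from scipy.signal import csd, detrend

def cumulative_cospectrum(u, w, fs=20.0, nperseg=4096):
    """Return frequencies, the u'w' cospectrum and its cumulative integral."""
    u_p, w_p = detrend(u), detrend(w)
    f, Puw = csd(u_p, w_p, fs=fs, nperseg=nperseg)
    cospec = Puw.real                                  # cospectrum
    df = f[1] - f[0]
    return f, cospec, np.cumsum(cospec) * df           # total approaches <u'w'>

# Synthetic example with correlated series
rng = np.random.default_rng(1)
u = rng.normal(size=120000)
w = 0.3 * u + rng.normal(size=u.size)
f, co, cumco = cumulative_cospectrum(u, w)
print(cumco[-1], np.mean(detrend(u) * detrend(w)))     # comparable values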
Swells are long-crested waves generated by storms. They can travel thousands of kilometres and impact remote shorelines, and they interact with locally wind-generated waves and currents. It has been shown that the presence of swell lowers the quality of the geophysical parameters that can be retrieved from delay/Doppler radar altimeter data, which in turn affects the estimation of small-scale ocean dynamics. In addition, the resolution offered by delay/Doppler processing, approximately 300 m spacing in the along-track direction, does not allow swells to be resolved. This work presents a method demonstrating for the first time that SAR altimeters have the potential to retrieve swell-wave spectra from fully-focused SAR altimetry data, and thus proposes that SAR altimetry can serve as a source for swell monitoring.
We present the first spectral analysis of fully-focused SAR altimetry data with the objective of studying backscatter modulations caused by swell. Swell waves distort the backscatter in altimetry radargrams by means of velocity bunching, range bunching, tilt and hydrodynamic modulations. These swell signatures are visible in the trailing edge of the waveform, where the effective cross-track resolution is a fraction of the swell wavelength. By locally normalizing the backscatter and projecting the waveforms on an along-/cross-track grid, satellite radar altimetry can be exploited to retrieve swell information. The fully-focused SAR spectra are verified using as reference buoy-derived swell-wave spectra of the National Oceanic and Atmospheric Administration's buoy network. Using cases with varying wave characteristics, i.e., wave height, wavelength and direction, we present the observed fully-focused SAR spectra, relate them to what is known from side-looking SAR imaging systems and adapt it to the near-nadir situation. Besides having a vast amount of additional data for swell-wave analysis, fully-focused SAR spectra can also help us to better understand the side-looking SAR spectra.
This study presents a method and application for estimating series of integrated sea state parameters from satellite-borne synthetic aperture radar (SAR), allowing data from different satellites and modes to be processed in near real time (NRT). The developed Sea State Processor (SSP) estimates total significant wave height (SWH), dominant and secondary swell and wind-sea wave heights, first- and second-moment wave periods, the mean wave period and the wind-sea period. The algorithm was tuned and applied to the Sentinel-1 (S-1) C-band Interferometric Wide swath (IW), Extra Wide swath (EW) and Wave Mode (WV) Level-1 (L1) products, and also to the X-band TerraSAR-X (TS-X) StripMap (SM) and SpotLight (SL) modes. The scenes are processed in a spatial raster format, resulting in continuous sea state fields. For S-1 WV, however, averaged values of each sea state parameter are provided for each 20 km x 20 km imagette, acquired every 100 km along the orbit.
The developed empirical algorithm consists of two parts: CWAVE_EX (extended CWAVE), based on the widely known empirical approach, and an additional machine learning post-processing step. A series of new data preparation steps (e.g. filtering, smoothing) and new SAR features are introduced to improve the accuracy of the original CWAVE. The algorithm was tuned and validated using two independent global wave models, WAVEWATCH III (WW3; NOAA) and CMEMS (Copernicus), and National Data Buoy Center (NDBC) buoy measurements. The root mean square errors (RMSE) reached by CWAVE_EX for total SWH are 0.35 m for S-1 WV and TS-X SM (pixel spacing ca. 1-4 m) and 0.60 m for the low-resolution modes S-1 IW (10 m pixel spacing) and EW (40 m pixel spacing) in comparison to CMEMS. The accuracies of the four derived wave periods are in the range of 0.45-0.90 s for all considered satellites and modes. Similarly, the RMSEs of the dominant and secondary swell and wind-sea wave heights are in the range of 0.35-0.80 m compared to CMEMS wave spectrum partitions. The post-processing step using machine learning, i.e. the support vector machine (SVM) technique, improves the accuracy of the initial SWH results. The resulting SWH accuracy reaches an RMSE of 0.25 m after SVM post-processing for S-1 WV, validated against CMEMS. Comparisons to 61 NDBC buoys, collocated within 100 km of S-1 WV imagettes worldwide, result in an RMSE of 0.31 m. The results and the presented methods are novel in terms of the achieved accuracy, combining the classical approach with machine learning techniques. An automatic NRT processing chain for multidimensional sea state fields from L1 data, with automatic switching between satellites and modes, has also been implemented. The algorithms offer a wide field of applications and implementations in prediction systems.
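As an illustration of the machine-learning post-processing idea (not the operational SSP configuration), the sketch below trains a support vector regression to map a first-guess SWH and a few auxiliary SAR features onto a reference SWH; the features and target here are synthetic placeholders.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 4))     # e.g. first-guess SWH, azimuth cut-off, NRCS, image variance
y = X[:, 0] + 0.2 * X[:, 1] + 0.05 * rng.normal(size=5000)   # reference SWH (model or buoy)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, y)
print(model.predict(X[:5]))        # corrected SWH for the first few samples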
The SSP is designed in a modular architecture for the S-1 IW, EW and WV and TS-X SM/SL modes. The DLR ground station Neustrelitz applies the SSP as part of a near real-time demonstrator service that provides fully automated daily estimates of surface wind and sea state parameters from S-1 IW images of the North Sea and Baltic Sea. Thanks to the implemented parallelization, a fine raster can be applied for scene processing: for example, an S-1 IW image with a coverage of around 200 km x 250 km can be processed on a 1 km raster (~50,000 analysed sub-scenes) within a few minutes.
The complete archive of S-1 WV L1 Single Look Complex (SLC) products from December 2014 until February 2021 was processed to create a sea state parameter database (121,923 S-1 WV overflights, around 3,000 overflight IDs per month, each overflight consisting of 40-180 imagettes, in total around 14 million S-1 WV imagettes). All processed S-1 WV data, including the eight derived sea state parameters, a quality flag and imagette information (geolocation, time, ID, orbit number, etc.), are stored in ASCII and NetCDF format for convenient use. The derived sea state parameters are available to the public within the scope of ESA's Climate Change Initiative (CCI).
The validation carried out for the whole S-1 WV archive against the CMEMS sea state hindcast for latitudes -60° < LAT < 60° (to avoid ice coverage), with around 13.5 million collocations, resulted in an RMSE of 0.245/0.273 m for wv1/wv2 imagettes, respectively. The SWH accuracy in different wave height domains for wv1/wv2 is as follows: 0.28/0.34 m (SWH < 1.5 m), 0.19/0.22 m (1.5 m < SWH < 3 m), 0.30/0.33 m (3 m < SWH < 6 m) and 0.51/0.55 m (SWH > 6 m). The monthly estimated total RMSE varies from 0.22 m to 0.31 m. These fluctuations around the mean value are caused by the differing number of storms acquired in individual months. As high waves have a higher RMSE, they increase the total RMSE when their relative percentage in a month is higher: overall, the SWH distribution in the worldwide acquired SAR data is SWH < 3 m for ~75% of all cases, 3 m < SWH < 6 m for around 24%, only around 1% for SWH > 6 m and less than 0.1% for SWH > 10 m. However, SWH > 6 m can reach around 2% in individual months, with a quadratic impact of the SWH values on the RMSE.
The cross-validations carried out using CMEMS, WW3 and mixed CMEMS/WW3 ground truth show that, in terms of total SWH and in comparison to NDBC data, using only CMEMS ground truth resulted in an accuracy ~3 cm better than tuning the model function with WW3 data. This might be a consequence of the better spatial resolution of the CMEMS model (1/12 degree) compared to WW3 (1/2 degree, spatially interpolated). The SWH comparison between CMEMS, WW3 and NDBC resulted in an RMSE of 0.26 m for CMEMS/NDBC and 0.23 m for CMEMS/WW3 at the NDBC buoy locations. Generally, in terms of SWH, the ground truth noise can be assessed at an error of ~0.25 m. The resulting RMSE of 25 cm for S-1 WV thus brings the results down to the noise level of the ground truth data.
Swells are waves that arrive from other ocean areas or were generated locally but no longer absorb energy from the wind. Swells have longer wavelengths than wind waves and can propagate over very long distances in the ocean. In this study, in-situ data from NDBC buoys with open exposure to the southwest are used to determine the potential destinations of swells propagating from the Southern Hemisphere westerlies. Meanwhile, CFOSAT SWIM data and ST6 reanalysis data are used to trace back the trajectories and sources of these swells. Accordingly, we find 25 swell routes originating from 4 series of ocean storms. To verify the accuracy of these paths, we check the variation of wave parameters in the 48 hours before and after the SWIM observation. This shows clearly that swells from the southwest have passed through and continue to travel northeastward. The one-dimensional wave spectra from SWIM and the NDBC buoys are compared, indicating the attenuation of energy. It is shown that the magnitude of the decay rate of swell energy increases with the spectral width of the initial swell field. In addition, the general rate of increase of the peak wavelength is of the order of 0.01 m/km and is apparently spectral-width dependent. These behaviours are mainly due to the higher degree of dispersion and angular spreading for broader spectra. To quantify the energy that decays due to spherical spreading, the point-source hypothesis is applied between the SWIM observation points and the NDBC buoys. In addition, the ST6 reanalysis dataset, run without swell dissipation and negative wind input, is compared to the observations to help obtain the spherical spreading contributions from the sources to SWIM and from the sources to the buoys. Linear and nonlinear dissipation rates are calculated according to air-sea interaction theory and wave-turbulence interaction theory. The results show that the dissipation is stronger near the sources (the linear dissipation rate is about 10^-7 m^-1) and decreases during subsequent propagation.
One of the challenges for future Earth system and climate prediction is to better understand the exchange of momentum, heat and gas at the air-sea interface. In this context, waves play a key role in estimating accurate forcing terms for the upper ocean layers and the feedback to the atmosphere. Currently, the global wave system of the Copernicus Marine Service (CMEMS) jointly assimilates Significant Wave Height (SWH) from altimeters and directional wave observations from CFOSAT and Sentinel-1. This leads to a significant improvement of the integrated sea state parameters, particularly in ocean regions affected by strong uncertainties in the wind forcing, for instance the Southern Ocean. Among the promising recent developments, we can highlight the synergy between wave and wind observations provided by the CFOSAT mission, which has demonstrated the capacity to retrieve wide-swath SWH with good accuracy (Wang et al. 2021). This work gives an overview of the use of both wide-swath SWH and directional wave spectra from satellite missions (CFOSAT, Sentinel-1, HY-2B, HY-2C) in an operational wave model. We show that such an assimilation system induces a significant reduction of the normalised scatter index of SWH, on average smaller than 8%.
We investigated the impact of using both directional wave spectra and wide-swath SWH in critical ocean regions such as the Southern Ocean and the tropics, with particular attention to the consequences for ocean circulation. We also report the improvement induced by the assimilation of directional wave spectra on the wind-wave growth and the estimate of the wave group speed under unlimited fetch conditions. In this work we examined the complementary use of wave spectra from the wave scatterometer SWIM and from SAR for better capturing swell propagation. Furthermore, the persistence of the assimilation of wide-swath SWH and directional wave spectra extends to 4 days into the forecast period, which ensures good reliability for wave submersion warnings and marine safety bulletins.
Further results concerning the impact of wave directionality on upper-ocean mixing have been investigated in the tropics and in the area of the Antarctic Circumpolar Current (ACC). The figure illustrates the zonal mean of the eastward component of the current between longitudes 146°E and 149°E in the Southern Ocean from NEMO model simulations and from drifter observations (AOML products). This clearly reveals the improvement of the surface currents when the ocean model is coupled with improved wave forcing that uses directional wave spectra and wide-swath SWH.
More discussions and conclusions will be summarized in the final presentation.
Inspired by the work of Cavaleri et al. [2012], where the concept of the so-called swell-wind (i.e. a "low-level wave-driven wind jet", as described in 2010 by Hanley et al.) is discussed, this work in progress aims to investigate the spatial variability of wind-wave coupling in a semi-enclosed basin through the analysis of SAR imagery. A global climatology and seasonal variability of wind-wave coupling was computed by the latter authors through the inverse wave age parameter, derived from numerical results of the ERA-40 dataset. They identified areas where, and times when, wind-driven wave conditions (U_10 cosθ / c_p > 0.83) and wave-driven wind regimes (U_10 cosθ / c_p < 0.15) occur, the latter coinciding with the swell pools found by Chen et al. [2002]. They also found that in enclosed seas, where wave growth (and hence c_p) is limited by fetch, the wind-driven wave regime is most common, as proposed by Drennan et al. [2003].
In this work, maps of inverse wave age have been computed from wind and wave information derived from Sentinel-1 and TerraSAR-X images acquired over the Gulf of Mexico and validated with buoy observations. Mean wave conditions in the Gulf of Mexico are mild (Hs < 2 m) and mostly driven by the Trade Winds, so that the propagation direction has a strong zonal component and is roughly normal to these platforms' flight direction, which allows for a reliable SAR detection. Under such conditions, the inverse wave age lies in the "mixed" regime (0.15 < U_10 cosθ / c_p < 0.83) and its spatial variability appears to be induced mostly by that of the wind. However, when the sea surface is forced by the high-wind conditions associated with atmospheric cold surges (November through May) and tropical cyclones (June through November), waves can be as high as 7 m and reach peak period values above 13.5 s, as recorded by NDBC stations 42002 and 42055 in 2020. The spatial variability of the inverse wave age parameter then seems to depend mostly on the phase speed and propagation direction of the waves, especially in cases where shorter- and longer-wavelength systems coexist. During these extreme conditions, both wind-driven seas and wave-driven winds have been estimated, indicating areas where the ocean yields momentum to the atmosphere. This study supports the hypothesis that the momentum flux can be highly variable (i.e. spatially inhomogeneous) not only near the coast but also in the open ocean, as proposed by Laxague et al. [2018].
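A simple sketch of the regime classification used above follows; U10, the wind-wave angle theta and the peak phase speed cp would come from the SAR-derived wind and wave fields, and the thresholds are those quoted in the text.

import numpy as np

def wave_age_regime(u10, theta_deg, cp):
    """Classify from the inverse wave age U10 * cos(theta) / cp."""
    iwa = u10 * np.cos(np.deg2rad(theta_deg)) / cp
    if iwa > 0.83:
        return "wind-driven waves"
    if iwa < 0.15:
        return "wave-driven wind (swell-wind)"
    return "mixed regime"

print(wave_age_regime(u10=5.0, theta_deg=20.0, cp=14.0))   # mixed regime
print(wave_age_regime(u10=2.0, theta_deg=30.0, cp=16.0))   # wave-driven wind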
Global long-term wave climate models are essential to estimate the impacts of a changing climate on future projected sea states, which are crucial for offshore safety and coastal adaptation strategies. In such projections, wave climate models are forced with Global Circulation Model (GCM) wind speed and sea-ice concentration to simulate the wind-wave evolution over extensive time scales. However, GCMs are affected by external forcing and internal variability uncertainties. As such, a model democracy approach, where each model contributes equally to the analysis of the future projected wind-wave climate, may result in a large spread in the projection estimates that, if averaged in global statistics, could mask stronger signals in the best-performing ensemble members (Knutti et al., 2017). The common practice to overcome such constraints is to use bias-corrected or weighted wave climate model ensembles to estimate the average past and future climate (Morim et al., 2019; Meucci et al., 2020). This work describes a novel observation-based weighting approach built on an in-depth assessment of CMIP6- and CMIP5-derived wave climate model performance using a 33-year calibrated satellite dataset (Ribal and Young, 2019). We compare the wave climate model statistics with collocated satellite measurements at the global level and in selected climatic regions (Iturbide et al., 2020). We evaluate the mean climatology, trends and extreme wave estimates of each model. The models are then weighted using the Knutti et al. (2017) formula, which accounts for model performance and interdependence. The result is a wave climate ensemble weighted by global observational statistics, which should serve as an optimally balanced dataset for future ensemble statistical studies and ensemble Extreme Value Analyses (Meucci et al., 2020).
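For reference, a minimal sketch of the Knutti et al. (2017) performance-and-independence weighting is given below; D_i are model-observation distances, S_ij inter-model distances, and the shape parameters sigma_d and sigma_s are analyst choices (all values here illustrative).

import numpy as np

def knutti_weights(D, S, sigma_d, sigma_s):
    """Normalised weights combining model performance and model independence."""
    D = np.asarray(D, dtype=float)
    S = np.asarray(S, dtype=float)
    performance = np.exp(-(D / sigma_d) ** 2)
    S_off = S.copy()
    np.fill_diagonal(S_off, np.inf)                       # exclude j = i
    independence = 1.0 + np.exp(-(S_off / sigma_s) ** 2).sum(axis=1)
    w = performance / independence
    return w / w.sum()

# Example: three models, the last two being near-duplicates of each other
D = [0.5, 0.8, 0.8]
S = [[0.0, 1.0, 1.0], [1.0, 0.0, 0.1], [1.0, 0.1, 0.0]]
print(knutti_weights(D, S, sigma_d=0.6, sigma_s=0.5))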
References:
Iturbide, M., Gutiérrez, J. M., Alves, L. M., Bedia, J., Cerezo-Mota, R., Cimadevilla, E., ... & Vera, C. S. (2020). An update of IPCC climate reference regions for subcontinental analysis of climate model data: definition and aggregated datasets. Earth System Science Data, 12(4), 2959-2970.
Knutti, R., Sedláček, J., Sanderson, B. M., Lorenz, R., Fischer, E. M., & Eyring, V. (2017). A climate model projection weighting scheme accounting for performance and interdependence. Geophysical Research Letters, 44(4), 1909-1918.
Meucci, A., Young, I. R., Hemer, M., Kirezci, E., & Ranasinghe, R. (2020). Projected 21st century changes in extreme wind-wave events. Science advances, 6(24), eaaz7295.
Morim, J., Hemer, M., Wang, X. L., Cartwright, N., Trenham, C., Semedo, A., ... & Andutta, F. (2019). Robustness and uncertainties in global multivariate wind-wave climate projections. Nature Climate Change, 9(9), 711-718.
Ribal, A., & Young, I. R. (2019). 33 years of globally calibrated wave height and wind speed data based on altimeter observations. Scientific data, 6(1), 1-15.
Long-period swells generated in the North and South Pacific frequently hit the shores of low-lying Pacific islands and atolls. The accuracy of wave forecasting models is key to efficiently anticipating and reducing damage during swell-induced flooding episodes. However, in such remote areas, in-situ spectral wave observations are sparse and models are poorly constrained. Earth Observation satellites monitoring sea state characteristics therefore represent a great opportunity to improve the forecasting of flooding episodes and the analysis of wave climate variability.
Here, we present a satellite-driven swell forecast system that can be applied worldwide to predict the arrival of swells. The methodology relies on the dispersive behavior of ocean waves, assuming that the energy travels along great circle paths with a celerity that only depends on its frequency. Satellite data for this analysis are directional wave spectra derived from SWIM acquisitions onboard the CFOSAT-mission (Hauser et al. 2021).
The proposed workflow includes: a) filtering the global-coverage data with a temporal and geographical criterion (the spatial scale delimits the effective energy source that can reach the target location); b) comparison of wave parameters from CFOSAT-SWIM with partitions from a global wave numerical model for the removal of the directional ambiguity; c) definition of the spectral energy sector that points towards the study site; d) analysis of air-sea flux dissipation (Ardhuin et al. 2009); and e) analytical propagation of the energy bins to forecast the targeted spectral energy over time (a propagation sketch is given below).
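The propagation step e) rests on deep-water dispersion: each spectral bin of frequency f travels along a great circle at the group speed cg = g / (4*pi*f), so its arrival delay is simply distance over cg. A minimal sketch (coordinates and frequency are illustrative):

import numpy as np

G, R_EARTH = 9.81, 6.371e6

def great_circle_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between two points given in degrees."""
    p1, p2 = np.deg2rad(lat1), np.deg2rad(lat2)
    dl = np.deg2rad(lon2 - lon1)
    cosang = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dl)
    return R_EARTH * np.arccos(np.clip(cosang, -1.0, 1.0))

def arrival_delay_hours(freq_hz, lon_obs, lat_obs, lon_target, lat_target):
    cg = G / (4.0 * np.pi * freq_hz)          # deep-water group speed (m/s)
    return great_circle_m(lon_obs, lat_obs, lon_target, lat_target) / cg / 3600.0

# A 0.06 Hz swell observed ~4200 km away arrives roughly 3-4 days later
print(arrival_delay_hours(0.06, lon_obs=-160.0, lat_obs=-50.0,
                          lon_target=-172.0, lat_target=-13.7))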
Two examples of application are presented for the Samoa islands, and for the Cantabria coast (Spain). The time evolution of swell systems approaching the sites is evaluated against spectral energy from available in situ wave measurements and numerical model outputs. The results exhibit a good reproduction of the wave fields, proving the flexibility and robustness of the methodology.
The proposed method may be used to track swells across the ocean, forecast the arrival of swells or locate remote storms. For the Small Island Developing States, the output of the methodology can undoubtedly be of great help for stakeholders and decision makers to produce risk metrics and implement strategies that minimize the vulnerability of these communities to coastal flooding, at a very low computational cost.
ACKNOWLEDGEMENTS
This work was supported by the French National Research Agency through the ISblue program (ANR-17-EURE-0015) and by CNES through the CFOSAT-COAST project.
REFERENCES
Ardhuin, F., Chapron, B., & Collard, F. (2009). Observation of swell dissipation across oceans. Geophys. Res. Lett., 36 , L06607. doi: 10.1029/2008GL037030
Hauser, D. et al., (2021). New Observations From the SWIM Radar On-Board CFOSAT: Instrument Validation and Ocean Wave Measurement Assessment. IEEE Transactions on Geoscience and Remote Sensing 59, 5–26. https://doi.org/10.1109/TGRS.2020.2994372
Ocean surface waves are modified by surface currents. This has strong implications for remote sensing of wind and currents by classical or Doppler scatterometry, especially at high horizontal resolution.
We discuss here different mechanisms of wave modification around a current front. In particular, we compute propagation and dissipation effects using a numerical wave model based on wave action conservation. We show that short wind waves, long wind waves and long non-dissipative swell all respond differently to the different current gradient components. The horizontal scales and the degree to which those three responses can be coupled to each other are key to understanding this complex response of the wave field to currents.
Detailed knowledge of the shape of the seafloor is crucial to humankind. In an era of ongoing environmental degradation worldwide, bathymetry data (and the knowledge derived from them) play a pivotal role in using and managing the world's oceans. Bathymetric surveys are used in many research fields, including flood inundation, the contouring of streams and reservoirs, water-quality studies, coastal reservoir planning and many other applications.
However, the vast majority of our oceans is still virtually unmapped, unobserved and unexplored; only a small fraction of the seafloor has been systematically mapped by direct measurement. For understanding changes of the underwater geomorphology, regional bathymetry information is paramount. This sparsity can be overcome by space-borne techniques to derive bathymetry. With the development of new open-access missions, space-borne sensors now represent an attractive solution for a broad public to capture local-scale coastal impacts at large scales.
Only from intermediate water depths to the shore can the linear dispersion relation (1) be used to estimate a local depth:

c² = (g λ / 2π) tanh(2π h / λ) ⟺ h = (λ / 2π) atanh(2π c² / (g λ)) (1)

in which c is the wave celerity, g the gravitational acceleration, h the local depth and λ the wavelength.
Studies such as [2] show that wave patterns can be extracted using a Radon transform, after which the physical wave characteristics (λ, c) are obtained using a 1D DFT along the most energetic incident wave direction in Radon space (the sinogram).
In this work, we carry out a thorough study of the signal contained in Sentinel-2 (ESA/Copernicus space-borne optical sensor) images and of how to optimize its processing. This work is carried out with a view to the production of differential bathymetry, with interest in the detection and evaluation of changes in underwater geomorphology. Identification of such changes has potential applications in risk analysis related to seismotectonics, submersion, submarine gravitational movements and morphodynamics, and littoral dynamics related to seasonal or extreme events, among others. Here, regional bathymetries are derived at the Arcachon test site, France.
Our approach is based on computing the image gradient around each point. It improves on
the methods of [2, 4], yielding a better estimate of the
wave propagation direction and the ability to handle two overlapping wave regimes.
When analyzing directional data, it is often appropriate to pay attention only to the direction of
each datum, disregarding its norm. The von Mises–Fisher (vMF) distribution is the most important
probability distribution for such data [3].
With this novel technique, we extract the wave direction by estimating the parameters of the von Mises-
Fisher distribution from the local gradients around each point [1]. Sentinel-2-derived
wave characteristics are then extracted using a unidirectional Radon transform. A discrete Fourier transform
(DFT) procedure in Radon space (the sinogram) is then applied to derive wave spectra. The
time lag between Sentinel-2 detector bands is employed to compute the spectral wave-phase shifts. Finally, we
estimate depth using the gravity-wave linear dispersion equation (1).
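To make the direction-estimation step concrete, the following minimal sketch (one illustrative reading of the approach, assuming NumPy, a small Sentinel-2 sub-window as input, and treating gradient orientations as axial data via angle doubling; function and variable names are ours, not the authors') estimates a dominant wave orientation and a von Mises-Fisher concentration from local gradients:

    import numpy as np

    def vmf_direction(patch):
        # Gradient orientation per pixel; orientations on either side of a wave crest
        # differ by 180 degrees, so the angles are doubled (axial statistics) before averaging.
        gy, gx = np.gradient(patch.astype(float))
        theta = np.arctan2(gy, gx)
        C, S = np.cos(2 * theta).mean(), np.sin(2 * theta).mean()
        R = np.hypot(C, S)                    # mean resultant length (0..1)
        mean_dir = 0.5 * np.arctan2(S, C)     # dominant orientation (radians)
        kappa = R * (2 - R**2) / (1 - R**2 + 1e-12)  # Banerjee-type approximation, p = 2
        return mean_dir, kappa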
In conclusion, the theoretical model based on the von Mises–Fisher (vMF) distribution
offers an alternative way to carry out the processing and produce coastal bathymetry, suggesting
potential improvements with respect to previous approaches. The ultimate goal is to further develop
our approach, in particular by extending the method to detect mixtures of
von Mises–Fisher distributions.
Keywords— bathymetry, signal processing, spaceborne imagery
References
[1] Akihiro Tanabe, Kenji Fukumizu, Shigeyuki Oba, Takashi Takenouchi, and Shin Ishii. 2007. ”Parameter
estimation for von Mises–Fisher distributions” Computational Statistics 22(1), 145-157.
[2] Bergsma, Erwin W.J., Rafael Almar, and Philippe Maisongrande. 2019. ”Radon-Augmented Sentinel-2 Satellite
Imagery to Derive Wave-Patterns and Regional Bathymetry” Remote Sensing 11, no. 16: 1918.
[3] Lu Chen, Vijay P. Singh, Shenglian Guo, Bin Fang, and Pan Liu. 2012. ”A new method for identification of
flood seasons using directional statistics” Hydrological Sciences Journal 58(1), 1–13.
[4] Marcello de Michele, Daniel Raucoules, Deborah Idier, Farid Smai, and Michael Foumelis. 2021. ”Shallow
Bathymetry from Multiple Sentinel 2 Images via the Joint Estimation of Wave Celerity and Wavelength”
Remote Sensing 13, no. 12: 2149.
We retrieve significant ocean surface wave heights in the Arctic and Southern Oceans from CryoSat-2 data. We use a semi-analytical model of the echo power of an idealised synthetic aperture or pulse-limited radar altimeter. We develop a processing methodology that specifically considers both the synthetic aperture and pulse-limited modes of the radar, which change close to the sea ice edge within the Arctic Ocean. All CryoSat-2 echoes to date were matched by our idealised echo, yielding wave heights over the period 2011-2019 (updated to 2021). Our retrieved data were contrasted with existing processing of CryoSat-2 data and wave model data, showing the improved fidelity and accuracy of the semi-analytical echo power model and the newly developed processing methods. We also compared our data with in-situ wave buoy measurements, showing improved data retrievals in seasonally sea ice-covered seas. We have shown the importance of directly considering the correct satellite mode of operation in the Arctic Ocean, where SAR is the dominant operating mode. Our new data are of specific use for wave model validation close to the sea ice edge and are available at http://www.cpom.ucl.ac.uk/ocean_wave_height/.
In the frame of the second phase of the Copernicus Marine Environment Monitoring Service (CMEMS), starting in 2022, the WAVE Thematic Assembly Centre (TAC), a partnership between CLS and CNES, is responsible for the provision of a near-real-time wave service that started in July 2017. Near-real-time wave products derived from altimetry and SAR measurements are processed and distributed onto the CMEMS catalogue.
This presentation will describe the existing products – along-track Level 3 and gridded Level 4 – and their applications such as near-real-time assimilation in wave forecasting systems, validation of wave hindcasts, etc.
In early 2022, Sentinel-6 will join the existing altimetry constellation measuring significant wave height (SWH) and collocated wind speed. Sentinel-6 will become the reference mission of the CMEMS L3 SWH product, succeeding Jason-3 once it changes orbit. The Sentinel-6 Level-2P and Level-3 processing is under EUMETSAT and CNES responsibility and is operated by CLS. We will describe the processing steps from the Level-2 to the Level-3 product carried out to produce a dataset homogeneous with the other WAVE-TAC altimetry datasets.
The daily gridded Level-4 SWH product will also benefit from the integration of Sentinel-6. The increased spatial and temporal density of measurements will allow a better mapping of the wave heights.
In the frame of this presentation, we will produce a thorough comparison of CMEMS Level-3 & Level-4 SWH products versus in-situ measurements provided by the In-Situ TAC. In particular, we will highlight the changes induced by the integration of the new Sentinel-6 mission in the WAVE-TAC product.
Abstract:
The spectral characteristics of SLC-IW TOPS data are significantly different from those of Stripmap (SM) data. Due to the burst mode and the series of sub-swaths, the target area is scanned only for a short period of time; consequently, the wide swath comes at the expense of azimuth resolution. The azimuth quadratic phase drift must be removed to bring the SLC data to baseband, which is achieved by de-ramping. The ocean circulation parameters are then extracted from the echo signal based on a data-driven Doppler centroid (DC). These parameters include surface velocity, wave height and swell direction, and are compared with benchmark data.
Background:
Due to its burst mode, SLC-IW TOPS differs from SM in its acquisition scheme, with the system periodically observing a series of sub-swaths. As a result, the target region is scanned only for a fraction of the burst duration, the illumination time is reduced, and the wide swath comes at the cost of azimuth resolution [1]. Sentinel-1 IW TOPS data retain a quadratic phase term in the azimuth direction, which
leads to phase ramps; this term needs to be eliminated from the SLC data for subsequent applications.
In the literature, the ocean circulation parameters for SLC-IW data are estimated from the information provided in the OCN Level-2 product, or the geophysical interpretation is derived from satellite orbit parameters [2]. In practice, the orbit parameters (velocity V and incidence angle θ) are usually not accurate enough to obtain a DC that meets the needs of SAR imaging. This work therefore estimates the DC and all associated parameters from the echo data [3], so that all the ocean circulation parameters are data-driven.
Methodology:
To remove the quadratic drift, it is essential to move the spectral component of SLC-IW to the baseband by deramping. The phase term for deramping is defined as:
ϕ(η, τ) = exp{−jπ k_t(τ) (η − η_ref(τ))²}    (1)
where the reference time η_ref(τ) and the Doppler centroid rate k_t(τ) are functions of the range sample τ, and η is the zero-Doppler azimuth time. The phase term is multiplied in the time
domain with the SLC signal S_slc:
S_d(η, τ) = S_slc(η, τ) · ϕ(η, τ)    (2)
Alternatively, deramping can be done in the SNAP tool using the Sentinel-1 TOPS operator. The flow of the process is given in the figure.
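For illustration, a minimal NumPy sketch of the de-ramping of Eqs. (1)-(2) is given below; in practice k_t(τ) and η_ref(τ) are built from the annotated product metadata, which is not shown here, and all names are ours:

    import numpy as np

    def deramp(slc, eta, k_t, eta_ref):
        # slc     : complex SLC burst, shape (n_azimuth, n_range)
        # eta     : zero-Doppler azimuth times, shape (n_azimuth,)
        # k_t     : Doppler centroid rate per range sample, shape (n_range,)
        # eta_ref : reference azimuth time per range sample, shape (n_range,)
        dt = eta[:, None] - eta_ref[None, :]
        phi = np.exp(-1j * np.pi * k_t[None, :] * dt**2)   # Eq. (1)
        return slc * phi                                    # Eq. (2): baseband burst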
The Doppler centroid is therefore central to this work. In the literature, the DC has been predicted from the conventional OCN product using the DC polynomial information provided in the metadata. We instead use correlation Doppler estimation (CDE), which takes advantage of the azimuth shift and the PRF [4]. This DC history is used to retrieve the radial surface velocity (RSV) using the incidence angle and radar frequency. From an empirical relationship with the RSV, we then estimate the significant wave height (SWH) [5], i.e. the average height (from trough to crest) of the highest third of the waves during the sampling period. The comparisons are made against benchmark data (from the OCN product) for the same location, date and time as the SLC-IW TOPS product [6].
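A simplified sketch of the CDE step and the DC-to-RSV conversion is given below (a Madsen-type lag-1 estimator; the sign convention and the removal of the geometric/attitude Doppler contribution, which is required before any geophysical interpretation, are not detailed here; all names are illustrative):

    import numpy as np

    C_LIGHT = 299792458.0

    def doppler_centroid_cde(slc, prf):
        # Phase of the azimuth lag-1 autocorrelation gives the Doppler centroid (Hz) per range bin.
        acf = np.sum(slc[1:, :] * np.conj(slc[:-1, :]), axis=0)
        return prf * np.angle(acf) / (2.0 * np.pi)

    def radial_surface_velocity(f_dc, incidence_deg, radar_freq=5.405e9):
        # Geophysical Doppler anomaly -> radial surface velocity (m/s), C band by default;
        # the sign depends on the look/pass geometry convention.
        wavelength = C_LIGHT / radar_freq
        return -wavelength * f_dc / (2.0 * np.sin(np.deg2rad(incidence_deg)))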
Results and discussion:
The quadratic drift is removed when the phase term ϕ(η, τ) is multiplied with the original SLC image, moving the data to baseband. With the de-ramping completed, we extract the ocean circulation parameters in post-processing. For this, we measure the RSV from the DC estimated with the CDE method, which agrees well with the benchmark data: the RSV matches within the error bounds and reaches up to 2.5 m/s in the core of the stream.
The RSV is also used to retrieve the significant wave height (Hs), which varies by a few metres. We use the dual-polarization VH channel, which provides a better estimate of Hs than single polarization.
Conclusion:
The designed chirp function de-ramps the data as expected, moving the data to baseband. The ocean
circulation parameters are measured and their numerical values compare well with the benchmark data, showing good spatial correlation, low root mean square error (RMSE) and negligible mean absolute error (MAE).
References:
[1]. De Zan, Francesco, and A. Monti Guarnieri. "TOPSAR: Terrain Observation by Progressive Scans." IEEE Transactions on Geoscience and Remote Sensing, 44.9 (2006): 2352-2360.
[2]. Hansen, Morten Wergeland, et al. "Retrieval of sea surface range velocities from Envisat ASAR Doppler centroid measurements." IEEE Transactions on Geoscience and Remote Sensing, 49.10 (2011): 3582-3592.
[3]. Zou, Xiufang, and Qunying Zhang. "Estimation of Doppler centroid frequency in spaceborne ScanSAR." Journal of Electronics (China), 25.6 (2008): 822-826.
[4]. M. Amjad Iqbal, Andrei Anghel, and Mihai Datcu. "Doppler Centroid Estimation for Ocean Surface Current Retrieval from Sentinel-1 SAR Data." IEEE EuRAD Conference, European Microwave Week, 2022.
[5]. Pramudya, Fabian Surya, et al. "Enhanced Estimation of Significant Wave Height with Dual-Polarization Sentinel-1 SAR Imagery." Remote Sensing, 13.1 (2021): 124.
[6]. Elyouncha, Anis, Leif E. B. Eriksson, and Harald Johnsen. "Comparison of the Sea Surface Velocity Derived from Sentinel-1 and TanDEM-X." 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE, 2021.
In the sea state community, the literature usually assumes that 1 Hz altimetry wave measurements are dominated by noise (Ardhuin et al. 2019), and most studies tackle this issue by filtering these data at scales of at least 50 km (Quilfen et al. 2018, Dodet et al. 2020). It is also known that fading noise has a real impact on correlated errors at these scales (Quartly et al. 2019), with an impact on SLA estimates that can be empirically reduced by methods such as those described in Zaron et al. or Tran et al. (2021).
In this presentation, we propose to process the 20 Hz resolution altimetric data and to look deeper into this high-frequency content. After compression to 5 Hz, we analyze the frequency content and its geographical signatures. We also take particular care with data selection, an essential step for validation purposes (as illustrated in Quartly et al. 2019).
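As an illustration of these two steps, a minimal sketch (assuming NumPy/SciPy, already-edited 20 Hz SWH samples, and a nominal along-track ground speed; not the operational processing) is:

    import numpy as np
    from scipy.signal import welch

    def compress_to_5hz(swh_20hz):
        # Average blocks of four consecutive 20 Hz samples into 5 Hz samples (NaN-aware).
        n = swh_20hz.size - swh_20hz.size % 4
        return np.nanmean(swh_20hz[:n].reshape(-1, 4), axis=1)

    def along_track_spectrum(swh_5hz, ground_speed_km_s=5.8):
        # Along-track PSD of 5 Hz SWH, converted to spatial wavenumber (cycles/km)
        # using a nominal ground-track speed.
        f, psd = welch(swh_5hz, fs=5.0, nperseg=256)
        return f / ground_speed_km_s, psd * ground_speed_km_s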
The analysis is multi-mission oriented. LRM missions are processed with the innovative adaptive retracker (Tourain et al. 2021). It also deals with Doppler SAR altimetry observations, which are likewise affected by sea state structures (Moreau et al. 2021). The study focuses on Jason-3 (through the CNES SALP project), Envisat (in the frame of the innovative FDR4ALT project) and CFOSAT (Hauser et al. 2020).
To better characterize the high-frequency signal, we leverage the spectral information (direction, wavelength and partitions) provided by the CFOSAT mission, as well as Sentinel-1 radar imaging. A discussion is carried out to determine how these altimetric products could become the next generation of CMEMS WAVE-TAC products. It also explores the contamination effects on SLA estimates, mainly at the spectral bump frequencies extensively described in Dibarboure et al. (2014).
The processing of these new 5 Hz L2P products is presented. Their quality over coastal areas is illustrated and demonstrated. Their added value offshore is highlighted and offered for discussion with other international teams (for assimilation and/or validation, climate or coastal communities…) via demonstration products available on the AVISO website. Give it a try!
Sea state is a key component of the coupling between the ocean and the atmosphere, the coasts and the sea ice. Understanding how sea state responds to climate variability and how it affects the different compartments of the Earth System, is becoming more and more pressing in the context of increased greenhouse gas emission, accelerated sea level rise, sea ice melting, and growing coastline urbanization.
A new multi-mission altimeter product that integrates improved altimeter retracking and inter-calibration methods is being developed within the ESA Sea State Climate Change Initiative project. This effort will provide 30 years of uninterrupted records of global significant wave height. The 30 years represents the minimum duration required for computing climatological standards
following the World Meteorological Organization recommendation (WMO, 2015). In recent years, several
authors (e.g. Young et al., 2011; Young and Ribal, 2019; Timmermans et al., 2020) have computed the
trends in both the mean and extreme Hs using calibrated data from multi-mission altimeter records. Whether these trends are the signature of anthropogenic climate change or of natural variability is not known. Indeed, the atmosphere exhibits variability on a time scale comparable to the length of the satellite era and is therefore likely to hide the anthropogenic signal. Using the ECMWF ERA-5 reanalysis and focusing on the North Atlantic region, we show that the trends in winter-mean Hs computed over the satellite altimetry era are mostly associated with the atmospheric variability on the altimetry-era time scale. Because the winter Hs variability in the North Atlantic is tightly linked with the overlying sea level pressure (SLP) winter variability, we extract the SLP modes of variability responsible for the altimetry-era Hs winter trends in three regions (the Norwegian Sea, the Mediterranean Sea and the sea south of Newfoundland) where the Hs trends are significant.
In order to investigate the contribution of natural climate variability in these Hs trends, we analyzed SLP outputs obtained from the Community Earth System Model version 2 Large Ensemble (Lens2). Our analysis reveals that the magnitude of the SLP slope linked with internal variability becomes comparable to the magnitude of the slope linked with anthropogenic climate change for ~ 65 years of data i.e. around 2060, considering that 1992 (ERS-1 launch) is the beginning of the continuous altimetry era. This suggests that Hs modification associated with anthropogenic climate change of the atmospheric circulation will not be detectable in satellite altimetry trends before several decades.
IMOS (Integrated Marine Observing System) OceanCurrent (oceancurrent.imos.org.au) is a marine data visualisation digital resource that helps communicate and explain up-to-date ocean information around Australia. The information, derived from data collected by satellites, instruments deployed in the ocean, and accessible model outputs, benefits a broad range of users including swimmers, surfers, recreational fishers, sailors, and researchers. The platform includes near-real-time data for sea surface temperature, ocean colour, and sea level anomaly from various satellite missions and in-situ instruments such as Argo floats, current meters, gliders, and CTDs. Until now, ocean surface wave information, both from in-situ wave rider buoys and satellite missions, has not been captured in OceanCurrent.
Australia has a growing network of moored coastal wave rider buoys. Network gaps are being identified (Greenslade et al., 2018, 2021) and filled, and new low-cost wave buoys are also being tested and deployed alongside traditional systems, further increasing the in-situ surface wave data captured. The publicly available national wave data network consists of more than 35 platforms operated by several different State and Commonwealth agencies, plus industry-contributed data (Greenslade et al., 2021). As it can be challenging and time-consuming to gather wave observations from various sources for large-scale national or regional studies, the IMOS AODN (Australian Ocean Data Network) Portal has strived to build an archive (and a near-real-time feed) of available national wave buoy observations. The AODN service is also being expanded by adding more platforms and by improving the metadata of the buoy records. Historical and near-real-time national wave data from a substantial set of wave buoys can now be easily accessed.
International satellite remote sensing radar altimeter and synthetic aperture radar (SAR) missions are also providing open data of surface wave observations globally. In addition, the CFOSAT SWIM instrument has been providing global surface wave spectra measurements since its launch in 2018 (Hauser et al., 2021). Using these valuable resources, Australia has developed, and continues to maintain, long-term multi-mission databases of calibrated wave height observations from radar altimeters (Ribal et al., 2019) and long-wave spectra from selected SAR missions (Khan et al., 2021). Some of these databases also provide near-real-time feeds that can be exploited to gather up-to-date wave information.
An experimental national ocean surface waves product is under development for IMOS OceanCurrent Portal by integrating surface waves information from coastal buoys and satellite missions. As both radar altimeter and SAR satellites are polar orbiting with relatively narrow swaths (~10-20 km) over open ocean, during any short time window at best there are only a few along-track satellite measurements available. To convey a full representation of the wave field, background wave information from Bureau of Meteorology’s (BoM) AUSWAVE initialisation time step (t0) is shown. Surface wave maps are created at 2-hourly time steps with t0 as the central time showing AUSWAVE significant wave height and peak wave direction. Coastal buoy observations including significant wave height, mean wave direction, mean wave period, and directional spread within t0 +/- 3 hours, radar altimeter significant wave height within t0 +/- 1 hour, and peak wave direction and mean period extracted from SAR spectra within t0 +/- 30 mins are displayed, when available. Monthly videos from 2-hourly surface wave maps are also created to have a synoptic record of wave field propagation from ocean to the coast. The surface wave map archive currently spans 2021 and is planned to contain up to date (up to a few hours delay) surface wave information.
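For illustration, the time-window selection around each map time t0 could be as simple as the following sketch (NumPy datetime64 arrays; the half-window values mirror those quoted above, and all names are ours):

    import numpy as np

    def select_window(obs_times, t0, half_window_minutes):
        # Boolean mask of observations within t0 +/- half_window
        # (e.g. 180 min for buoys, 60 min for altimeter SWH, 30 min for SAR spectra).
        dt_min = (obs_times - t0) / np.timedelta64(1, 'm')
        return np.abs(dt_min) <= half_window_minutes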
Once available on OceanCurrent, the hope is that this product will enable the wider community of recreational marine users and researchers to extract relevant surface wave information as needed and provide direct societal benefits by providing a national view of easily available and integrated surface wave information.
A sample image of the surface waves product is attached with the abstract to help reviewers, but it will likely be unavailable for the online abstract version (if accepted) as advised by the symposium organisers.
References
Greenslade, D. J. M., Zanca, A., Zieger, S., and Hemer, M. (2018): Optimising the Australian wave observation network. J. South. Hemisphere Earth Syst. Sci., 68, 184–200, https://doi.org/10.22499/3.6801.010.
Greenslade, D. J. M., Hemer, M. A., Young, I. R. & Steinberg, C. R. (2021). Structured design of Australia’s in situ wave observing network, Journal of Operational Oceanography, doi: 10.1080/1755876X.2021.1928394
Ribal, A., Young, I.R. (2019). 33 years of globally calibrated wave height and wind speed data based on altimeter observations. Sci Data 6, 77. https://doi.org/10.1038/s41597-019-0083-9
Khan, S. S., Echevarria, E. R., & Hemer, M. A. (2021). Ocean swell comparisons between Sentinel-1 and WAVEWATCH III around Australia. J. Geophys. Res: Oceans, 126, e2020JC016265. https://doi.org/10.1029/2020JC016265
Hauser, D., et al. (2021). New Observations from the SWIM Radar On-Board CFOSAT: Instrument Validation and Ocean Wave Measurement Assessment. IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 1, pp. 5-26, doi: 10.1109/TGRS.2020.2994372.
The wave spectrum is a representation of the state of the ocean surface from which many parameters can be deduced: significant wave height, peak parameters of the dominant waves, directional parameters, etc. For more than 30 years, Synthetic Aperture Radars have allowed their routine monitoring far from the coast, in all surface conditions (through clouds and at night), where buoys cannot be deployed. These measurements benefit from scientific efforts that now make SAR a reliable measurement technique. The Sentinel-1 constellation is one of them and has been operating since 2016. However, known limitations, including the wave blurring caused by the azimuth cut-off, restrict wave spectra measurements to long swells.
SWIM is a new rotating radar onboard the Chinese-French CFOSAT satellite dedicated to directional wave spectra measurement. Since it does not suffer from the cut-off limitation, it opens new horizons and perspectives for synergies in terms of spectral limits and directionality.
Sentinel-1 wave spectra measurements are limited to Wave Mode acquisitions, which are only available over deep ocean basins and away from the North-East Atlantic Ocean. SWIM offers hundreds of co-located measurements over these regions, but also extends the coverage to closed seas worldwide and to European waters. Other complementarities exist in the measured wavelengths: Sentinel-1 extends to long swell (up to 800 m wavelength) while SWIM shows a greater ability to measure wind-sea components (close to or below 50 m).
These complementarities are assessed at different levels and with different comparison methodologies. First, partition integral parameters from SWIM Level-2P products and S1 Level-2 products are each compared with numerical wave model outputs. Second, dynamical co-locations are performed between S1 and SWIM, using cross-overs given by Level-3 spectral products. These measurements, also referred to as Fireworks, dramatically increase the number of co-located points and allow better inter-comparison.
These new capabilities have applications in data assimilation and open prospects for new products such as Stokes drift, which would be a first spaceborne estimate.
Harmony consists of two satellites that will fly in a constellation with one of the Sentinel-1 satellites. The two Harmony satellites carry a passive instrument that receives signals which are transmitted by Sentinel-1 and reflected from the surface. The full system therefore benefits from two additional lines-of-sight, which enables the vectorization of high-resolution wind stress and surface motion. It also provides a better spectral coverage and therefore a better constraint on the long-wave spectrum.
This presentation will discuss the mapping of the ocean wave spectrum into a bi-static SAR spectrum. This work relies on different and coherent approaches. First, we will present a theoretical analysis extending the historical mono-static closed-form equation (Hasselmann and Hasselmann [1991], Krogstad [1992], Engen and Johnsen [1995]), relying on bi-static transfer functions and the bi-static configuration. This approach allows an easier understanding and an interpretable analysis of the bi-static SAR mapping of ocean wave spectra.
This theoretical closed-form equation will be exploited and compared to numerical instrumental simulations which mimic as physically as possible the full observation chain of a prescribed ocean scene. Despite the high computational cost, these simulations offer a much larger range of possibilities to look at instrumental and sea-state-parameter impacts on the resulting SAR spectra. The bi-static specifications will be emphasized and compared to the equivalent mono-static configuration in order to demonstrate the benefits of Harmony in terms of wave retrieval.
To corroborate the findings of the combined theoretical and numerical analysis, we will rely on existing Sentinel-1 data acquired on the same ocean scene at a slightly different time during consecutive ascending and descending passes. These co-located mono-static acquisitions are not fully equivalent to a multi-static SAR configuration as Harmony, but are representative of, and give insight into, the valuable azimuthal diversity gain to better retrieve the directional properties of ocean wave spectrum.
The three approaches previously presented will show that the additional lines-of-sight benefit the retrieval of the wave spectrum. The bi-static companions are sensitive to waves travelling in different directions, which makes the RAR spectral analysis of high interest for the study of wind-wave characteristics. The ratios of the intensities vary with direction and wavenumber, and therefore the bi-static companions provide new means to help retrieve the directional surface-wave spectrum. The SAR transform is more complex. Still, compared to the mono-static transform, the bi-static transform displays improved capabilities, particularly in terms of a larger spectral coverage.
A microwave range transponder has been operating at the CDN1 Cal/Val site on the
mountains of Crete for about 6 years, to calibrate international satellite radar altimeters
in the Ku-band. This transponder is part of the European Space Agency Permanent
Facility for Altimetry Calibration, and has been producing a continuous time series of
range biases for Sentinel-3A, Sentinel-3B, Jason-2, Jason-3 and CryoSat-2 since
2015. As of 18-Dec-2020, the CDN1 transponder has allowed calibration of the new
operational altimeter of Sentinel-6A satellite as it flies in tandem with Jason-3. This
work investigates range biases derived from the long time series of Jason-3 (and
subsequently that of Sentinel-6 since both follow the same orbit) and tries to isolate
systematic and random constituents in the produced calibration results of the
transponder. Systematic components in the dispersion of transponder biases are
identified as of internal origin, coming from irregularities in the transponder instrument
itself and its setting, or of external cause arising from the altimeter, satellite orbit,
Earth’s position in space, geodynamic effects and others. Performance characteristics
of the CDN1 transponder have been examined. Draconic harmonics, principally the 58-
day period, play a significant role in the transponder results and create cyclic trends in
the calibration results. The attitude of the satellite body as it changes for solar panel
orientation contributes an offset of about 7 mm when yaw rotation is off its central
position, and the atmospheric, water mass and non-tidal ocean loadings are
responsible for an annual systematic signal of 10 mm. At the time of writing, all other
constituents of uncertainty seem random in nature and not significantly influential,
although humidity requires further investigation in relation to the final transponder
calibration results.
The Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) satellite mission is a climate mission led by the UK Space Agency (UKSA) and delivered by the European Space Agency (ESA). One of the main objectives of TRUTHS is to provide in-orbit cross-calibration traceable to SI standards for Earth observation satellite missions. The possibility of in-orbit cross-calibration will enhance the performance of calibrated sensors and allow up to a tenfold improvement in data reliability compared to existing data. The ability to obtain more reliable data will lead to improved future climate models, which are crucial for decision-making and action against climate change.
In the development process of the TRUTHS mission, an End-to-End Mission Simulator is used to reproduce different mission configurations and evaluate their performance. As part of this simulator, a scene generator module is required that simulates the TRUTHS Top Of Atmosphere (TOA) radiances for several different land surface types. Reflectance cubes from NASA's airborne imaging spectroradiometer AVIRIS-NG were used as input data for six different land surface types (ocean, agriculture, forest, snow/ice, clouds and desert). First, an extrapolation of the AVIRIS-NG spectral range to the UV was performed, depending on the land surface type. A spatial resampling to 2000 pixels across-track and a spectral resampling to 1 nm intervals were necessary to meet the requirements of TRUTHS. Further, a simulated TRUTHS sensor file was generated and used as an input to ATCOR to compute simulated TOA radiances for TRUTHS. ATCOR is a software product that uses the MODTRAN-5 radiative transfer code to simulate at-sensor radiances. For the latter step, different solar zenith angles were considered to provide a minimum and maximum solar zenith angle per scene. For validation purposes, a cross-comparison was performed between the TOA radiances of AVIRIS-NG and the simulated TRUTHS TOA radiances. The final products are delivered in NetCDF file format and will be used as target scenes to be observed by the TRUTHS sensor and hence to test and evaluate its performance.
IASI radiometric error budget assessment and exploring inter-comparisons
between IASI sounders using acquisitions of the Moon
IASI (Infrared Atmospheric Sounding Interferometer) instruments on-board the METOP polar orbiting meteorological satellites are currently used for climate studies [1-3]. IASI-A, launched in 2006, displayed 15 years of stable performance and is no longer active. There are still two operational IASI instruments: one on-board METOP-B (launched in 2012) and one on-board METOP-C (launched in 2018). Efforts are continuously being made by CNES to improve IASI data quality throughout the instruments' lifetime. For example, the methodology for the spectral calibration was improved for IASI-A and a recent reprocessing was performed by EUMETSAT in order to obtain continuous, homogeneous data series for climate studies. Moreover, the on-board processing non-linearity corrections for both the IASI-A and IASI-B instruments were improved in 2017, reducing the NEdT error in spectral band B1.
IASI is the reference used by the GSICS (Global Space based Inter-Calibration System) community for inter-comparisons between infrared sounders to improve climate monitoring and weather forecasting. The objective here is to present the errors sources which impact the IASI radiometric error budget considering the uncertainties related to the knowledge of the internal black body (e.g. temperature and emissivity), the non-linearity correction and the scan mirror reflectivity law. This work is performed in the framework of the collaboration with the GSICS community to ensure a stable traceability of infrared sounders radiometric and spectral performances.
Moreover, Moon data have been regularly acquired since 2019 by IASI-B and IASI-C to study the possibility of performing absolute and relative calibrations using these lunar observations. The Moon is often used in the visible domain as a calibration source for satellite instruments but, until now, this has not been the case in the thermal infrared domain. In the framework of this study, a dedicated radiometric model was built to simulate and compare IASI lunar measurements. Inter-comparisons between IASI-B and IASI-C Moon acquisitions showed very promising results, with an accuracy of ≤ 0.15 K. These results are comparable to the performance of IASI instrument inter-comparisons based on selected homogeneous Earth-view spectra.
[1] Bouillon, M., et al., Ten-Year Assessment of IASI Radiance and Temperature, Remote Sens., 12(15), 2393 (2020)
[2] Whitburn, S., et al., Trends in spectrally resolved outgoing longwave radiation from 10 years of satellite measurements, npj Climate and Atmospheric Science, 4, article number 48 (2021)
[3] Smith, N., et al., AIRS, IASI, and CrIS retrieval records at climate scales: an investigation into the propagation of systematic uncertainty, Journal of Applied Meteorology and Climatology, vol. 54, issue 7 (2015)
SI-traceable satellites (SITSats) provide highly accurate data with an unprecedented absolute calibration accuracy robustly tied to the international system of units, the SI. This increased accuracy and SI traceability helps to improve the quality and trustworthiness of the measurements performed by the SITSat itself and, through in-orbit cross-calibration, those of other sensors, enabling the prospect of litigation-quality information. Such a system can have direct benefits for the net-zero agenda, as it reduces the prospect of ambiguity and debate through the ability to understand and remove biases in a consistent and internationally acceptable manner, thereby creating harmonised, interoperable virtual constellations of sensors to support decision-making and the monitoring of climate change mitigation strategies accounting for emissions and sinks. The very high accuracy capabilities of SITSats can also provide a benchmark from which change can be monitored, so that the intended success of our climate actions can be identified and quantified in as short a time as possible.
In this poster we consider how the ESA Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) mission can contribute to addressing the climate emergency, and we investigate some of the benefits of the high calibration accuracy and hyperspectral nature of a mission like TRUTHS. TRUTHS is a climate mission, led by the UK Space Agency, which is being developed as part of the ESA Earth Watch programme to enable, amongst other things, in-flight calibration of Earth observation (EO) satellites. TRUTHS will establish an SI-traceable reference in space with an unprecedented calibration accuracy of 0.3% (k=2), over the spectral range of 320 nm to 2400 nm, at a ground spatial resolution of up to 50 m.
In particular, we look at how TRUTHS might help to anchor sensor measurements used to estimate sinks and sources of GHG emissions, including ocean and land biology, and track land use changes. We also explore how TRUTHS might help to constrain atmospheric measurements by improving the quality of ancillary information used in the retrievals e.g. aerosols, surface albedo, as well as bias removal in the sensor radiometric gains through in-orbit calibration, enabling harmonised constellations of satellites in support of the stocktake.
For the stocktake, as many GHG monitoring satellites have large fields of view and are also anticipated to sit in a variety of orbits, we explore in some detail the impact of solar illumination and view angle on the calibration process. Here we evaluate how TRUTHS can estimate the bidirectional reflectance distribution function (BRDF) of the surface, particularly of typical desert calibration targets, and the impact on uncertainty.
Additionally, regardless of the main purpose of TRUTHS in this context as a ‘metrology laboratory in space’ and calibration reference, we also explore how TRUTHS itself can perform or at least contribute to some of the mitigation-related measurements. Even though the mission is not explicitly designed for many short-term climate action related activities, by virtue of being hyperspectral, of high accuracy and relatively high spatial resolution it can still make a positive contribution and improve spatial and temporal coverage of monitoring. As an example of such measurements, we chose the detection of methane point emitters (e.g. fossil fuel extraction and use facilities, agriculture facilities and landfills), one of the top priorities among the mitigation actions for the next decade.
In summary, the poster will explore the impact and contribution of a SITSat like TRUTHS to the climate action agenda through both direct observations and derived information, improvement of retrieval algorithms and interoperability and accuracy of existing sensors specifically designed for particular variables such as GHG satellites, those monitoring land and ocean biological properties serving as sinks and/or their impact on emissions. The climate emergency requires policy makers and society to have confidence in the information in order to pursue the necessary actions and this needs to be underpinned by data of rigorous and unchallengeable uncertainty.
Establishing an end-to-end uncertainty budget is essential for all ECVs of ESA's Climate Change Initiative (CCI). The reference guide for expressing and propagating uncertainty consists of the GUM and its supplements, which describe multivariate analytic and Monte Carlo methods. The FIDUCEO project demonstrated the application of these methods to the creation of ECV datasets, from Level 0 telemetry to Level 1 radiometry and beyond. But despite this pioneering work, uncertainty propagation for ECVs is challenging. Firstly, many retrieval algorithms do not incorporate the use of quantified uncertainty per datum. Using analytic methods for propagating uncertainty requires completely new algorithmic developments, while applying Monte Carlo methods is usually straightforward but greatly increases the computational and data curation resources required. Secondly, operational radiometry data are usually not associated with a quantified uncertainty per datum, and error correlation structures between data are not quantified either. Deriving this information from original sensor telemetry, and a corresponding harmonisation with respect to an SI-traceable satellite (SITSAT) reference, is a future task.
Nevertheless, it is feasible to explore and prepare ECV processing for the use of uncertainty per (Level 1) datum and error correlation structures among data already now, based on instrument specifications and simplifying assumptions. For the Land Cover ECV of the CCI we developed a Monte Carlo surface reflectance pre-processing sequence, which considers the three most significant effects: errors in satellite radiometry, errors in aerosol retrieval, and errors in cloud detection. Error correlations between radiometric data are considered using a simplified correlation matrix with a constant correlation coefficient. Such a simplified correlation structure can account for uncorrelated random noise as well as common systematic errors arising, e.g., from radiometric calibration, which affect climate data sets even in the long term, while all other forms of random error average out sooner or later. Errors in aerosol retrieval are considered in a similar way. Errors in cloud detection affect the land cover classification directly. Omission of clouds degrades the accuracy of the ECV dataset whereas false commission reduces its coverage statistics. Our Monte Carlo pre-processing sequence can simulate random and systematic cloud omission and commission errors.
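As a minimal sketch of how such an ensemble of radiometric errors with a constant correlation coefficient could be drawn (Gaussian errors assumed; names and values are illustrative, not the CCI implementation):

    import numpy as np

    def draw_correlated_errors(u, rho, n_draws, rng=None):
        # u   : standard uncertainties of the n observations
        # rho : constant error-correlation coefficient shared by every pair
        # The constant-rho structure represents a common systematic contribution
        # (e.g. calibration) on top of uncorrelated random noise.
        rng = rng or np.random.default_rng()
        u = np.asarray(u, dtype=float)
        corr = np.full((u.size, u.size), rho)
        np.fill_diagonal(corr, 1.0)
        cov = corr * np.outer(u, u)
        return rng.multivariate_normal(np.zeros(u.size), cov, size=n_draws)

    # e.g. 1000 ensemble members for 5 bands with 2 % uncertainty and rho = 0.3
    errors = draw_correlated_errors(np.full(5, 0.02), 0.3, 1000)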
In this contribution, we explain the concept of our Monte Carlo processing sequence and its computational implementation and present proof of concept by verifying the statistical properties of the created surface reflectance ensemble.
The Global Space-based Inter-Calibration System (GSICS) is an initiative of CGMS and WMO, which aims to ensure consistent accuracy among satellite observations worldwide for climate monitoring, weather forecasting, and environmental applications. To achieve this, algorithms have been developed to correct the calibration of various instruments to be consistent with community-defined reference instruments based on a series of inter-comparisons – either directly by the Simultaneous Nadir Overpass (SNO) or Ray-Matching approach – or indirectly using Pseudo Invariant Calibration Targets (PICTs), such as the Moon, desert sites or Deep Convective Cloud as transfer standards. In the former approach contemporary satellites are tied to current state-of-the-art reference instruments, while heritage satellites need to rely on older references. The invariant target approach relies on their characterisation by counterpart reference instruments and is typically applied in the Reflected Solar Band.
The 2020s will see the launch of a new type of satellite instrument, whose calibration will be directly traceable to SI standards on orbit, referred to here as SI-Traceable Satellites (SITSats). Examples include NASA’s CLARREO Pathfinder, ESA’s TRUTHS and FORUM, and the Chinese Space Agency’s LIBRA. The first of these will carry steerable VIS/NIR spectrometers, which will allow corresponding GSICS products to be tied to an absolute scale.
This presentation outlines two approaches being developed to exploit these SITSats. Firstly, by direct comparison of their observations with those of current GSICS reference instruments, using Ray-Matching to ensure equivalent viewing conditions over simultaneous, collocated scenes. Secondly, by characterising the current Pseudo-Invariant Calibration Targets, including Deep Convective Clouds and desert sites, in terms of their BRDF and spectral signature. The challenge of propagating uncertainties through the inter-calibration algorithms to achieve a full traceability chain will be discussed.
To optimise the benefits of such SITSats, GSICS needs to prioritise which reference instruments or PICTs are to be characterised, and to cooperate closely with the SITSat operators to ensure sufficient acquisitions are available to fully characterise them within the mission lifetime. Ultimately, tying GSICS products to an absolute scale would provide resilience against gaps between reference instruments and drifts in their calibrations outside their overlap periods, and would allow the construction of robust and harmonized current and historical data records from multiple satellite sources to build Fundamental Climate Data Records, as well as more uniform environmental retrievals in both space and time, thus improving inter-operability.
TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio-Studies) is a UKSA-led climate mission in development as part of the ESA Earth Watch programme, which has the aim of establishing high-accuracy SI-traceability on-orbit to improve estimates of the Earth's radiation budget and other parameters by up to an order of magnitude. The high accuracy that its SI-traceable calibration system enables (target uncertainty of 0.3 % (k=2)) allows TRUTHS observations to be used both directly as a climate benchmark and as a reference sensor for upgrading the calibration of other sensors on-orbit.
In order to assess the proposed instrument design against the strict uncertainty requirements, a rigorous and transparent evidence-based uncertainty analysis is required. This paper describes a metrological analysis of the radiometric processing of the TRUTHS L1b products derived from the observed top-of-atmosphere photons, including an analysis of the On-Board Calibration System (OBCS) performance. At the heart of the OBCS is the Cryogenic Solar Absolute Radiometer (CSAR), which provides the primary traceability to SI. The OBCS mirrors concepts used in national standards laboratories for the measurement of optical power and of spectral radiance and irradiance, and in TRUTHS links the calibration of the Hyperspectral Imaging Spectrometer (HIS) to SI.
The analysis follows the framework outlined in the EU H2020 FIDelity and Uncertainty in Climate data records from Earth Observations (FIDUCEO) project, which uses a rigorous GUM-based approach to provide uncertainties for Earth Observation products and contains a number of documentational and visualisation concepts that aid interrogation and interpretation. Initially the measurement functions for each instrument on board TRUTHS are defined, and a corresponding ‘uncertainty tree diagram’ visualisation produced. From this, error effects are identified and described using ‘effects tables’, which document the associated uncertainty, sensitivity coefficients and error-correlation structure, providing the necessary information to propagate the uncertainty to the final product. Combining this uncertainty information allows for the total uncertainty of a quantity (e.g. radiance) to be estimated, at a per-pixel level, which can then be analysed based on the source of the uncertainty or its error-correlation structure (e.g., random, systematic, etc.).
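The per-pixel combination behind such effects tables follows the GUM law of propagation of uncertainty; a minimal sketch (illustrative names, one measurand) is:

    import numpy as np

    def combined_uncertainty(sensitivities, uncertainties, corr):
        # u_c^2 = c^T (corr * outer(u, u)) c, with c the sensitivity coefficients,
        # u the per-effect standard uncertainties and corr their error-correlation
        # matrix (identity for purely random effects, all-ones for fully systematic).
        c = np.asarray(sensitivities, float)
        u = np.asarray(uncertainties, float)
        cov = np.asarray(corr, float) * np.outer(u, u)
        return float(np.sqrt(c @ cov @ c))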
An extension of this analysis is the End-to-End Metrological Simulator (E2EMS) for the TRUTHS L1b products, which adds both a forward model of the TRUTHS sensor (input radiance to measured counts) and a calibration model (measured counts to calibrated radiance), to understand the product quality and the contributions from the proposed schemes and algorithms.
The CEOS (Committee on Earth Observing System) Cal/Val Portal website: https://calvalportal.ceos.org/ serves as the main online information system for the CEOS Working Group on Calibration and Validation enabling exchanges and information sharing to the wider Earth Observation community within CEOS and beyond.
It provides users with connections to good practices, references and documentations as well as reference data and networks and is a source for reliable, up-to-date and user-friendly information for Cal/Val tasks. The portal facilitates data interoperability and performance assessment through an operational CEOS coordinated and internationally harmonized Cal/Val infrastructure consistent with QA4EO principles.
It is possible to access the various contents of the portal as a guest or by logging in as a member. As a registered user you gain the right to view dedicated sections, download/upload documents and contribute to the portal's growth (e.g. access to the document repository, specific datasets, the terms and definitions area, etc.). News, announcements and novel content are highlighted on the home page and in the Twitter feed (@CEOS_WGCV), providing fresh information from and for the community.
The CEOS WGCV page is the entry point for all the CEOS Working Group on Calibration and Validation (WGCV) subgroups, and hosts the IVOS and the CEOS SAR subgroup websites. The Cal/Val Sites page offers an overview, by a linked tree diagram, of the test sites used for calibration and validation activities. The sites are grouped according to WGCV subgroup domain and applications. The CEOS endorsed sites and the reference networks are distinguished by different colors.
In the Projects section, relevant Cal/Val project links are provided, subdivided by discipline: Atmosphere, Land, Cryosphere and Ocean. In the Campaigns section, links to several campaign websites are provided and categorized in the same way. The list of Cal/Val software tools and services is presented on the Tools page with corresponding links and descriptions. In the Cal/Val Data section, the portal hosts the Modulation Transfer Function (MTF) Reference Dataset – with a dedicated webpage containing reference papers and reference imagery – and the Speulderbos forest field database.
The Cal/Val Portal is based on Liferay®, an open source web platform that supports content management and other collaborative tools.
The Fiducial Reference Measurement (FRM), Fundamental Data Record (FDR) and Thematic Data Product (TDP) activities all have a common aim – to provide long-term satellite data products that are linked to a common reference (ideally the SI), with well-understood uncertainty analysis, so that observations are interoperable and coherent. In other words, measurements by different organisations, different instruments and different techniques should be able to be meaningfully combined and compared. These programmes have implemented the principles of the Quality Assurance Framework for Earth Observation (QA4EO), which was adopted in 2008 by the Committee for Earth Observation Satellites (CEOS).
The adoption of QA4EO, and the comprehensive research programme that has followed it, have come from a fruitful and long-term collaboration between scientists working in National Metrology Institutes (NMIs) and the Earth Observation community, and especially the efforts of ESA to embed metrological principles in all its calibration and validation activities.
The European Association for National Metrology Institutes (EURAMET) has recently created the “European Metrology Network (EMN) for Climate and Ocean Observation” to support further engagement of the climate observation and monitoring communities with metrologists at national metrology institutes and to encourage Europe’s metrologists to coordinate their research in response to community needs. The EMN has a scope that covers metrological support for in situ and remote sensing observations of atmosphere, land and ocean ECVs (and related parameters) for climate applications. It is the European contribution to a global effort to further enhance metrological best practice into such observations through targeted research efforts and provides a single point of contact for the observation communities to Europe’s metrologists.
In 2020 the EMN carried out a review to identify the metrological challenges related to climate-observation priorities. The results of that review are available on the EMN website (www.euramet.org/climate-ocean) and include 32 identified research topics for metrological institutes. The EMN is now defining a strategic research agenda to respond to those needs. The EMN is also working with the International Bureau of Weights and Measures (BIPM) and the World Meteorological Organization (WMO) to organise a “metrology for climate action workshop” to be held online in October 2022.
Here we present the activities of the EMN and how they relate to the establishment of SI-traceability for satellite Earth Observations.
Society is becoming increasingly dependent on remotely sensed observations of the Earth to assess its health, help manage resources, monitor food security, and inform on climate change. Comprehensive global monitoring is required to support this, necessitating the use of data from the many different available sources. For datasets to be interoperable in this way, measurement biases between them must be reconciled. This is particularly critical when considering the demanding requirements of climate observation – where long time series from multiple satellites are required.
Typically, this is achieved by on-orbit calibration against common reference sites and/or other satellites, however, there often remain challenges when interpreting such results. In particular, the degree of confidence in the resultant uncertainties and their traceability to SI is not always adequate or transparent. The next generation of satellites, where high-accuracy on-board SI-traceability is embedded into the design, so-called SITSats, can therefore help to address this issue by becoming “gold standard” calibration references. This includes the ESA TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies) mission, which will make hyperspectral observations from visible to short wave infrared with a target uncertainty of 0.3 % (k = 2).
To date, uncertainty budgets associated with intercalibration have been dominated by the uncertainty of the reference sensor. However, the unprecedented high accuracy that will be achieved by TRUTHS, and other SITSats, means that the reference sensor will no longer be the dominant source of uncertainty. The accuracy of cross-calibration will instead be ultimately limited by the inability to correct for differences between the sensor observations in comparison, e.g., spectral response, viewing geometry differences. The work presented here aims to assess the impact of these differences on the accuracy of intercalibration achievable using TRUTHS as a ‘reference sensor’ and evaluate to what extent these can be limited through appropriate design specifications on the mission.
A series of detailed sensitivity analyses have been performed to evaluate how the intercalibration uncertainty for TRUTHS and a given target sensor can be best minimised, based on potential TRUTHS design specifications. This includes the impact of TRUTHS’ bandwidth and spectral sampling definition, which was studied using a radiative transfer model to investigate how well TRUTHS can reconstruct target sensor bands for comparison. A similar simulation-based approach is used to evaluate the sensitivity of intercalibration to TRUTHS’ spatial resolution. Target sensor (Sentinel-2 MSI) images are resampled as a proxy to simulate TRUTHS images at a range of spatial resolutions. The ability of TRUTHS to reconstruct target sensor images is then assessed by resampling the simulated-TRUTHS image back to the spatial resolution of the target sensor. These simulation studies were carried out for a range of sites, including CEOS desert pseudo-invariant calibration site (PICS) Libya-4, representing the types of scenes that are used as targets in the sensor intercalibration process. Target sensors simulated in these studies include the widely used sensors Sentinel-2 MSI, Sentinel-3 OLCI and Suomi-NPP VIIRS, as they are representative of many of the types of sensors TRUTHS will be used to calibrate.
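For context, the band-reconstruction step in such simulations typically convolves the hyperspectral reference radiance with the target sensor's relative spectral response; a minimal sketch (trapezoidal integration, illustrative names, not the project code) is:

    import numpy as np

    def simulate_target_band(wl_ref, radiance_ref, wl_srf, srf):
        # Band-equivalent radiance of a target sensor band from hyperspectral
        # (reference) radiance and the band's relative spectral response (SRF).
        srf_on_ref = np.interp(wl_ref, wl_srf, srf, left=0.0, right=0.0)
        return np.trapz(srf_on_ref * radiance_ref, wl_ref) / np.trapz(srf_on_ref, wl_ref)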
The main objective of the project “Precise Orbit Determination of the Spire Satellite Constellation for Geodetic, Geophysical, and Ionospheric Applications” (ID no. 66978), which was approved on 7 September 2021 in the frame of an ESA Announcement of Opportunity (AO), is to generate and validate precise reference orbits for selected Spire satellites and, based on this, to ingest and assess the requested Spire GPS data into three scientific applications, namely gravity field determination, reference frame computations, and ionosphere modelling to study the added value of the Spire GPS data. Due to the fact that the Spire constellation populates for the first time the Low Earth Orbit (LEO) layer at different inclinations with a large number of satellites, which are all equipped with high-quality dual-frequency GPS receivers, it opens the door to significantly strengthen all of the three above mentioned scientific applications.
In the initial phase of the project the focus will be on the precise orbit determination (POD) of selected Spire satellites. Two independent, state-of-the art software packages, namely the Bernese GNSS Software and ESA’s NAPEOS software, will be used for this purpose. This will allow for inter-comparisons, a role model that is inherited from the work of the POD Quality Working Group of the Copernicus POD service. It will enable an independent quality and integrity assessment of the Spire inputs and products.
We will analyse the quality of the Spire GPS code and carrier phase data and validate the antenna phase centre calibrations. Based on this we will determine reduced-dynamic and kinematic orbits for selected Spire satellites. Eventually we will evaluate the quality of the reconstructed orbits by means of orbit overlap analyses, cross-comparisons of kinematic and reduced-dynamic orbits computed within one and the same software, and cross-comparisons of the orbits derived with the Bernese GNSS Software and ESA's NAPEOS software, as well as comparisons to the orbits provided by Spire.
The present paper aims to showcase the portfolio of R&D activities that the Italian Space Agency (ASI) has been carrying out in recent years, in collaboration with the national research community, to address scientific applications based on the exploitation of Synthetic Aperture Radar (SAR) data from the national mission COSMO-SkyMed, as well as from Copernicus and other bilateral cooperation missions (e.g. SAOCOM).
The focus is on algorithms development and integration of multi-mission SAR data that are collected at different wavelengths, in light of the current unprecedentedly wide spectrum of observations of the Earth’s surface ranging from X- to L-band provided by SAR missions in Europe and beyond.
Within such a framework, the COSMO-SkyMed First Generation, TerraSAR-X, Sentinel-1 and ALOS-2 satellites have been operating for several years. In addition, the SAOCOM, NovaSAR, COSMO-SkyMed Second Generation and RADARSAT Constellation Mission satellites have been successfully launched starting from late 2018. Therefore, the geoscience and remote sensing community is increasingly provided with: (i) continuity of observations with respect to previous SAR missions; (ii) opportunities to task the collection of spatially and temporally co-located datasets in different bands.
The challenge is now to develop processing algorithms that can make the best of this multi-frequency observation capability, in order to address a multitude of scientific questions and downstream applications. These include, but are not limited to: retrieval of geophysical parameters; land cover classification; interferometric (InSAR) analysis for ground deformation and structural health monitoring; and generation of value-added products useful to end-users and stakeholders, for example for civil protection, disaster risk reduction, resilience building and the sustainable use of natural resources.
In the framework of ASI’s roadmap towards the development of SAR-based scientific downstream applications, recent R&D projects have acted as foundational steps to define, develop and test new algorithms for SAR data processing and integration. The R&D activities have intentionally spanned from the statement of the initial scientific idea (Scientific Readiness Level – SRL 1, according to the ESA EOP-SM/2776 scale) to, at least, the demonstration of the proof of concept (SRL 4) through extensive analyses by means of dedicated experiments and ground-truth validation.
Building upon this heritage, and in order to move forwards (also in terms of higher SRL), ASI has recently launched a dedicated programme named “Multi-mission and multi-frequency SAR”. It supports R&D projects proposed by leading experts in the field from national public research bodies and industry – also in the framework of international partnerships – to design, develop and test innovative methods, techniques and algorithms for the exploitation of multi-mission/multi-frequency SAR data, with credible perspectives of engineering and pre-operational development, thus being able to contribute to improving the socio-economic benefits for the end-user community.
The current projects address the following R&D areas of specific interest: agriculture, urban areas, natural hazards, cryosphere, sea and coast; alongside the common cross-cutting topic of validation of products generated from multi-frequency SAR data by using ground-truth data (Figure 1).
In light of the experiences gained during recent R&D projects and at nearly one year since the initiation of the multi-mission and multi-frequency SAR projects, the present talk will outline the novelty of the methodological approaches under testing and demonstration, ongoing activities and results achieved, with a particular focus on the integration of Sentinel-1, COSMO-SkyMed, SAOCOM and ALOS-2 data.
Discussion will include:
- Lessons learnt about the role played by regularly acquired multi-frequency and multi-mission SAR time series (also in combination with other EO data) for observation continuity and the investigation of long-term processes, both natural and anthropogenic;
- Benefits and limitations of acquisition programmes over the national territory and across different locations in the world vs. user requirements;
- The added value brought by the polarimetric SAR capability, e.g. for retrieval approaches of geophysical parameters;
- The importance of coupling satellite observations with instrumented sites and contextual surveys, for both calibration/validation activities and integrated analyses;
- Reflections about the perspectives towards future pre-operational implementation in scientific downstream applications.
The paper describes the realization of an access point to the CONAE SAOCOM mission, enabling the ordering and dissemination of the products acquired by the SAR sensor over the geographic area in which the Italian Space Agency (ASI) has exclusive rights to exploit the data. SAOCOM (Satélite Argentino de Observación COn Microondas) is an L-band SAR remote sensing constellation owned by the Argentinean space agency CONAE (Comisión Nacional de Actividades Espaciales) and formed by two identical satellites, 1A and 1B, launched on 8 October 2018 and 30 August 2020 respectively. Within the collaborative project named SIASGE (Italian-Argentinian satellite system for disaster management and economic development), a certain amount of SAOCOM data acquisition and processing resources has been reserved to ASI for exclusive use in the so-called Zone of Exclusivity (ZoE), covering the 10°W-50°E longitude range and the 30°-80°N latitude range. In this zone ASI has the right to use the SAOCOM system freely, fully and up to the saturation of the granted resources (around 150 s of sensing time per orbit), for scientific and institutional purposes, by users consisting of agency internal personnel or people who, strictly for the purposes of SAOCOM mission exploitation, have become affiliated with ASI. At the time of writing, the ASI SAOCOM access point offers the capability to select and download products over the ZoE from an archive containing more than 6k images, as well as experimental services for ordering the processing of SAOCOM data at various levels (from complex slant range up to geocoded) and for programming new acquisitions in the ZoE.
The development of the access point to mission resources has been based on the following approach and concepts:
• Reuse a reliable archive/catalogue system that is well known and fully proven by the remote sensing community, possibly released under an open source license
• Maintain the simplest possible interfaces for user registration and for requesting products and new acquisitions
• Develop the access point incrementally, enriching the basic functions such as registration and product dissemination with higher-level capabilities such as the management of new acquisition requests
• Use a storage/computation infrastructure based on cloud resources owned by public Italian organizations
• Maintain close cooperation with the CONAE SAOCOM team to improve the interactions (at the level of interfaces, archive contents, etc.) between the ASI access point and the Argentinean mission ground segment (GS)
Under these rationales and concepts, the access point has been realized with:
• the ESA-developed DHuS - Data Hub System as the catalogue/archive system, widely adopted in the Copernicus Sentinel GS, which offers both a traditional web-based human interface and OpenSearch and OData M2M (machine-to-machine) product search and download capabilities (illustrated in the sketch after this list)
• software developed internally at ASI for handling user registration, incremental archive filling (through discovery/download actions against the Argentinean mission GS) and the experimental product reprocessing and new acquisition programming functions
• a cloud infrastructure based on the OpenStack framework running on GARR, the Italian ultra-broadband network dedicated to the education and research community, whose main objectives are to provide high-performance connectivity and to develop innovative services for the daily activities of teachers, researchers and students and for international collaboration
• the collaborative support of CONAE for the transfer of the entire SAOCOM product archive over the ASI ZoE (nearly 130k products) and for the set-up of M2M interfaces between the SAOCOM GS and the ASI access point
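As an illustration of the M2M search capability mentioned above, the sketch below queries a DHuS OpenSearch endpoint. The host name, credentials, platform name and footprint are placeholders (the actual ASI access point URL and query vocabulary may differ); only the generic DHuS OpenSearch conventions are assumed.

```python
# Illustrative M2M product search against a DHuS-based access point via OpenSearch.
# Endpoint, credentials and query values below are hypothetical placeholders.
import requests

DHUS_URL = "https://example-saocom-hub.asi.it/dhus"   # hypothetical access-point URL
AUTH = ("user", "password")                            # registered, ASI-affiliated user

params = {
    "q": 'platformname:SAOCOM AND beginposition:[2021-01-01T00:00:00.000Z TO NOW] '
         'AND footprint:"Intersects(POLYGON((10 40, 12 40, 12 42, 10 42, 10 40)))"',
    "rows": 25,
    "start": 0,
    "format": "json",
}

resp = requests.get(f"{DHUS_URL}/search", params=params, auth=AUTH, timeout=60)
resp.raise_for_status()
entries = resp.json().get("feed", {}).get("entry", [])
if isinstance(entries, dict):          # DHuS returns a dict when only one product matches
    entries = [entries]
for product in entries:
    print(product.get("title"))
```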
The Advanced Land Observing Satellite (ALOS) was launched by the Japan Aerospace Exploration Agency (JAXA) in January 2006. It carried three sensors: the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2), the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), and the Phased Array type L-band Synthetic Aperture Radar (PALSAR).
The data for observations over Africa, the Arctic and Europe were collected by two European Space Agency (ESA) ground stations as part of the ALOS Data European Node (ADEN), under a distribution agreement with JAXA, and were subject to a recent bulk processing campaign. The latter focused on processing data from the AVNIR-2 and PRISM sensors only, from Level 0 (raw) to Level 1C (orthorectified).
The quality control activities concerning the L1B1 and L1C datasets for both sensors were performed prior to their release / dissemination to users. The quality control of the brand new L1C products, which were generated using an instrument processing facility developed by the German Aerospace Centre (DLR), included checks of the DIMAP product format (including the ESA EO-SIP product wrapper format) and of geometric and radiometric calibration quality. The results of these checks, which will be presented in more detail in the poster, generally indicate that the data quality is nominal.
The Landsat programme, jointly operated by the United States Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA), provides the world’s longest running system of satellites for the medium-resolution optical remote sensing of land, coastal areas and shallow waters.
The data acquired over Europe by the European Space Agency (ESA), using its ground stations (in co-operation with the USGS and NASA), have been subjected to a recent bulk L1C reprocessing campaign. The reprocessed dataset, generated by the Systematic Landsat Archive Processor (SLAP) Instrument Processing Facility (IPF) developed by Exprivia for the Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) datasets, allows the historical Landsat products to be updated and aligned with the highest quality standards achievable with current knowledge of the instruments (e.g. geometric processing via the application of orbit state vector files, an updated TM/ETM+ ground control point database and digital elevation models).
The quality control activities concerning the L1C datasets for both sensors were performed prior to their release / dissemination to users. The quality control of the new L1C products included checks of the product format (including the ESA EO-SIP product wrapper format) and of geometric and radiometric calibration quality. The results of these checks, which will be presented in more detail in the poster, generally indicate that the data quality is nominal.
SAOCOM-1A and -1B are a pair of satellites developed by the Comisión Nacional de Actividades Espaciales (CONAE) of Argentina, launched in October 2018 and August 2020 respectively, orbiting in a Sun-synchronous orbit at 620 km altitude, 180° apart, and providing imaging of the Earth's surface with an effective revisit time of 8 days. Their main payload is a fully polarimetric Synthetic Aperture Radar (SAR) operating in L-band with selectable beams and imaging modes.
This paper reports the first results of collaborative work between CONAE and CSL, investigating the use of polarimetric and interferometric SAR signatures to detect changes in agricultural zones in Argentina. This work is the natural continuation of a previous pre-SAOCOM activity [1] that made use of airborne SAR images from the national SARAT sensor and the NASA/JPL UAVSAR, and of spaceborne images from the JAXA ALOS-2 PALSAR-2, all operating in L-band.
The test site of interest, referred to as the SAOCOM Core Site, is a highly agricultural zone within the Pampas region, located in the surroundings of Monte Buey, a small rural village in the southeast of Córdoba province, Argentina (-32º 55', -62° 27'). This site is regularly imaged by the SAOCOM satellites, and regular field measurements are carried out in conjunction with the image acquisitions.
We have used full-polarimetric, Stripmap-mode SAOCOM-1A images over the region of interest, acquired between March 2019 and February 2020 in both right-looking ascending and descending orbits, forming a temporal series of usable interferometric pairs covering a full year.
Each image was subject to polarimetric processing, involving the generation of backscattering coefficient and Radar Vegetation Index (RVI) maps, Pauli decomposition of the scattering vector, and generation and diagonalization of the polarimetric coherence matrix, resulting in derived quantities such as the entropy, the anisotropy and the alpha angle. Interferograms and coherence maps were generated by InSAR processing of the interferometric pairs. Finally, adding full polarization information to the interferometry, i.e. carrying out polarimetric interferometry (PolInSAR) processing, optimized coherence maps were generated, making it possible to highlight one or another backscattering mechanism and to follow its evolution along a full season.
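For illustration, a minimal sketch of the entropy / anisotropy / alpha-angle computation for a single multi-looked pixel is given below, following the standard Cloude-Pottier eigen-decomposition of the 3x3 coherency matrix; variable names and the per-pixel framing are assumptions, not the processing chain actually used.

```python
# Illustrative Cloude-Pottier H/A/alpha decomposition of one 3x3 Hermitian
# coherency matrix T3 (single multi-looked pixel).
import numpy as np

def h_a_alpha(T3):
    """Return entropy, anisotropy and mean alpha angle [rad] for a 3x3 coherency matrix."""
    eigval, eigvec = np.linalg.eigh(T3)             # ascending, real eigenvalues
    eigval = np.clip(eigval.real, 0.0, None)[::-1]  # sort descending, clamp small negatives
    eigvec = eigvec[:, ::-1]                        # reorder eigenvectors accordingly
    p = eigval / eigval.sum()                       # pseudo-probabilities
    entropy = -np.sum(p * np.log(p + 1e-12)) / np.log(3.0)
    anisotropy = (eigval[1] - eigval[2]) / (eigval[1] + eigval[2] + 1e-12)
    alpha_i = np.arccos(np.clip(np.abs(eigvec[0, :]), 0.0, 1.0))  # per-mechanism alpha
    alpha_mean = np.sum(p * alpha_i)
    return entropy, anisotropy, alpha_mean
```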
This multi-dimensional information was related to terrain events by cross-correlation with field data. All the products were co-registered in order to perform time-series analyses for change detection.
This work was performed under a Belgium-Argentina bilateral collaboration. The CSL contribution was supported by the Belgian Science Policy Office.
Reference
[1] Danilo J. Dadamia, Marc Thibeault, Matias Palomeque, Christian Barbier, Murielle Kirkove and Malcolm W. J. Davidson, “Change Detection Using Interferometric and Polarimetric Signatures in Argentina”, 8th International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, ESRIN, Frascati, 23 January 2017.
Monitoring transportation for planning, management and security purposes in urban areas has been growing in interest and application among various stakeholders. Since the late 1990s, commercial very-high-resolution (VHR) satellites have been used for developing vehicle detection methods, a domain previously governed by aerial photography due to its superior spatial resolution. Despite the apparent advantages of using air- or drone-borne systems for vehicle detection, several methods were introduced in the last two decades utilizing space-borne VHR imagery (e.g., QuickBird, WorldView-2/3) with meter (multispectral bands) to submeter (panchromatic band) resolutions. Several of these applications applied machine learning for identifying parked cars. For detecting moving cars, however, two sensor capabilities have been utilized: (1) stereo mode, provided either by a satellite constellation or by body-pointing capabilities; and (2) a gap in acquisition time between the push-broom detector sub-arrays. Changes in the location of moving objects can then be observed between image pairs or across spectral bands, respectively. Both cases require overcoming differences in ground sampling distance and/or prerequisite spectral analyses to identify suitable bands for change detection.
Since January 2018, new multispectral products have been available to the scientific community, provided by the Vegetation and Environmental New Micro Spacecraft (VENµS). This mission is a joint venture of the Israeli Space Agency (ISA) and the French Centre National d’Etudes Spatiales (CNES). The overall aim of the VENµS scientific mission is to acquire frequent, high-resolution, multispectral images of pre-selected sites of interest worldwide. The system is therefore characterized by a spatial resolution of 5 m per pixel (the upcoming mission phase will increase this to 4 m per pixel), 12 narrow spectral bands in the visible-near-infrared region of the spectrum, and a revisit time of 2 days at the same viewing and azimuth angles.
Here we demonstrate the VENµS capability to detect moving vehicles in a single pass at a relatively low spatial resolution. The VENµS Super Spectral Camera (VSSC) has a unique stereoscopic capability, since two spectral bands (numbers 5 and 6), with the same central wavelength and width (620 nm and 40 nm, respectively), are positioned at extreme ends of the focal plane (Figure 1). This design results in a 2.7-sec difference in observation time. We took a straightforward approach, creating a simple spectral index for moving vehicle detection (MVI) using these bands. Since the two bands are identical, there is no need for prior image analyses for dimensionality reduction or geometric corrections, as required for other sensors. Each moving vehicle is represented by a pair of bright blob-shaped clouds of pixels on a darker background (Figure 2). The center of each cloud in the pair is determined using the same methodology used to identify the barycenter of a multi-particle system, where the MVI values replace the masses of the particles. Once the center of each cloud is known, the velocity vector, i.e., speed magnitude and orientation, can be extracted by geometrical considerations.
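A minimal sketch of this velocity-estimation step is given below. It assumes the two blobs belonging to one vehicle have already been extracted from the MVI image; the blob representation, the north-up heading convention and the function names are illustrative assumptions, while the 5 m sampling distance and the 2.7 s lag follow the values quoted above.

```python
# Illustrative barycentre-based speed estimation for one moving vehicle, given the
# pixel coordinates and MVI values of its two bright "clouds" (one per band
# observation). Blob extraction is assumed to have been done beforehand.
import numpy as np

GSD_M = 5.0    # VENuS ground sampling distance [m]
DT_S = 2.7     # time lag between bands 5 and 6 [s]

def barycentre(rows, cols, mvi_values):
    """Intensity-weighted centre of one blob, with MVI values acting as the 'masses'."""
    w = np.asarray(mvi_values, dtype=float)
    w /= w.sum()
    return np.array([np.dot(w, rows), np.dot(w, cols)])

def velocity(blob_a, blob_b):
    """blob_a, blob_b: (rows, cols, mvi_values) for the first and second observation."""
    c_a, c_b = barycentre(*blob_a), barycentre(*blob_b)
    disp_m = (c_b - c_a) * GSD_M                               # displacement vector [m]
    speed_kmh = np.linalg.norm(disp_m) / DT_S * 3.6
    # Bearing from image north, assuming a north-up image (rows increase southwards)
    heading_deg = np.degrees(np.arctan2(disp_m[1], -disp_m[0]))
    return speed_kmh, heading_deg
```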
Results show successful detection of small- to medium-size moving vehicles. Especially interesting is the detection of private cars, which are on average 2-3 m smaller than the ground sampling distance of VENµS. We effectively detected vehicle movement in different backgrounds/environments, i.e., on asphalt and unpaved roads, as well as over bare soil and plowed fields, and at different speeds, e.g., 61 km/h for a car on an asphalt road and 19 km/h for vehicles on an unpaved road. A speed of 111 km/h was calculated for a heavy train. This speed is in line with the engine speed limit and the regulations applied by the Israeli authorities, providing an estimate of the MVI accuracy.
The MVI benefits from the unique detector arrangement of the Super Spectral Camera onboard VENµS. In addition, the very high temporal resolution of 2 days makes VENµS products an attractive input for vehicle detection applications, particularly for operations that require monitoring on a nearly daily basis. It appears to be cost-effective compared to VHR commercial satellites and complex UAV-based monitoring systems. Furthermore, the MVI suggests that such a band arrangement is highly effective and should be considered for future space missions, primarily for surveillance and transportation monitoring.
Passive microwave observations in L-band are unique measurements that allow a wide range of applications which, in most cases, cannot be addressed at other wavelengths: accurate absolute estimation of soil moisture for hydrology, agriculture or food security applications, ocean salinity measurements, detection and characterization of thin ice sheets over the ocean, detection of frozen soils, monitoring of above-ground biomass (AGB) to study its temporal evolution and global carbon stocks, measurement of high winds over the ocean, and more. The Soil Moisture and Ocean Salinity (SMOS) satellite, launched by ESA in 2009, which performed systematic passive observations at L-band for the first time, has enabled several of these applications to be discovered. SMOS L-band data play a central role in the ESA Climate Change Initiative (CCI) for Soil Moisture and Ocean Salinity, and passive L-band data also contribute to the CCI Biomass. This European mission has been followed by two L-band missions from NASA: Aquarius and SMAP (Soil Moisture Active Passive).
In recent years, scientific and operational users were asked to contribute to a survey of requirements for a future L-band mission. One of the outcomes of this survey is that most applications require a resolution of around 10 km. This is the objective of the SMOS-HR (High Resolution) project, a second-generation SMOS mission: the continuation of L-band measurements with an unprecedented native resolution, improving by a factor of 2 to 3 on the current generation of radiometers such as SMOS and SMAP.
In this paper, we will present the SMOS-HR project, which is currently under study at CNES (the French Centre National d’Etudes Spatiales) in collaboration with CESBIO (Centre d’Etudes Spatiales de la BIOsphère) and ADS Toulouse (Airbus Defence & Space), which has been contracted by CNES for the instrument definition.
The main challenge of this CNES study is to find the best trade-off to satisfy most needs with “reasonable” mission requirements (i.e., feasible at an acceptable cost). The core mission objective for SMOS-HR is to increase the spatial resolution by at least a factor of two with respect to SMOS (< 15 km at nadir), while keeping or improving its radiometric sensitivity (~0.5-1 K) and with a revisit time no longer than 3 days. Taking into account the mission and system level requirements, a new definition of an interferometric microwave imaging radiometer has been studied.
The first step was to select the antenna array configuration: cross-shaped arrays, square-shaped arrays (which imply Cartesian gridding), Y-shaped arrays and hexagon-shaped arrays (which imply hexagonal gridding) were compared. A cross shape was selected as the best option because it reduces aliasing in the reconstructed images, by adequately choosing the positions of the elementary antennas along the four arms, and because its accommodation is simpler than for some other configurations. The result is an instrument with 171 elementary antennas regularly spaced along the arms (~1 λ) and an overall antenna size of ~17 metres tip-to-tip. The optimal concept for the SMOS-HR instrument thus consists of a hub located on the platform, carrying a dozen central antennas, and four deployable arms attached to the platform, carrying about 40 antennas each. The SMOS-HR hub gathers a Central Correlator Unit computing the correlations for all antenna pairs and generating a clock signal for instrument synchronization. The feasibility of on-board processing for Radio-Frequency Interference (RFI) mitigation has also been addressed, to overcome the limitations faced on SMOS with on-ground processing. Adding this function to SMOS-HR represents another major improvement over SMOS (on top of the resolution improvement).
As a risk reduction activity, a breadboard of part of the Central Correlator Unit is being defined, developed and tested by ADS during this study, in order to assess the achievable performance and functionalities of SMOS-HR on-board processing.
Finally, the SMOS-HR phase A has also been the opportunity to explore innovative calibration strategies based on SMOS lessons learnt.
As a synthesis, this talk will present successively:
• SMOS-HR mission and system level requirements,
• The main trade-off at instrument and sub-systems levels (antenna configuration, deployment structure, elementary antenna, on-board correlator, RF receiver, power and local oscillator distribution, calibration strategy…),
• The current results of the correlator breadboard pre-development.
LibGEO is a multi-sensor geometric modelling library with high location precision. The library is designed to be used at the different steps of an Earth Observation mission: prototyping, ground segments, calibration. It was first developed to meet the requirements of the CNES/Airbus Defence and Space CO3D mission and is then intended to become the CNES reference library for geometry.
The base function of LibGEO is direct location, which returns ground coordinates for each pixel coordinate of the image. It supports both mathematical (grid or RPC) and physical modelling. For physical modelling, a line of sight is built from the detector and is transformed using, for example, rotations, translations, homotheties and mirror reflections, to obtain the line of sight in the International Terrestrial Reference Frame (ITRF). With the position of the platform and an ellipsoid model of the Earth, the ground position can be computed. Other location functions are provided, such as inverse location, intersection with a DEM and colocation. Each location function has a grid implementation, which creates grids to resample images in different geometries (including orthoimages). LibGEO also deals with stereo images for 3D model reconstruction: it has the ability to intersect lines of sight from correlated image points to obtain a 3D point, and it computes the epipolar geometry that allows the dense correlation needed for 3D reconstruction.
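For illustration, the sketch below shows the final step of such a direct-location computation: intersecting a line of sight, already expressed in ITRF, with the WGS84 ellipsoid. This is a generic illustration of the geometry, not LibGEO's actual API.

```python
# Illustrative line-of-sight / ellipsoid intersection in ITRF (WGS84 parameters).
import numpy as np

A = 6378137.0        # WGS84 semi-major axis [m]
B = 6356752.314245   # WGS84 semi-minor axis [m]

def los_ellipsoid_intersection(sat_pos, los_dir):
    """sat_pos: satellite position in ITRF [m]; los_dir: unit line-of-sight vector in ITRF.
    Returns the ground intersection point (ITRF, metres) or None if the ray misses."""
    # Scale the axes so the ellipsoid becomes a unit sphere, then solve the quadratic
    scale = np.array([1.0 / A, 1.0 / A, 1.0 / B])
    p, d = sat_pos * scale, los_dir * scale
    a = np.dot(d, d)
    b = 2.0 * np.dot(p, d)
    c = np.dot(p, p) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                              # line of sight misses the ellipsoid
    t = (-b - np.sqrt(disc)) / (2.0 * a)         # nearest root: first surface crossing
    return sat_pos + t * los_dir
```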
Native location is not precise enough for some applications; therefore, the model parameters can be optimized using Ground Control Points (GCPs) and tie points in image geometry. Both absolute and relative location can be improved, and the user can set uncertainties on both the observations and the model parameters. During the optimization process, points with higher residual errors are filtered out using statistical methods.
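The sketch below illustrates the principle of such a refinement: estimating a small bias (here a simple image-space offset) by least squares against GCP residuals while iteratively discarding outliers. The parameterisation, thresholds and function names are assumptions for illustration, not LibGEO's actual interface.

```python
# Illustrative GCP-based refinement with statistical outlier filtering.
import numpy as np
from scipy.optimize import least_squares

def refine_offsets(predicted_px, measured_px, n_iter=3, sigma_factor=3.0):
    """predicted_px, measured_px: (N, 2) image coordinates of GCPs (model vs. measured).
    Returns the estimated offset and a mask of the GCPs kept after filtering."""
    keep = np.ones(len(predicted_px), dtype=bool)
    offset = np.zeros(2)
    for _ in range(n_iter):
        def residuals(x):
            # Residuals of the (biased) model prediction against the measurements
            return ((predicted_px[keep] + x) - measured_px[keep]).ravel()
        offset = least_squares(residuals, offset).x
        res = np.linalg.norm((predicted_px + offset) - measured_px, axis=1)
        keep = res < sigma_factor * np.std(res[keep])   # drop statistical outliers
        if keep.all():
            break
    return offset, keep
```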
The sensors supported so far are Pleiades HR, Sentinel-2 MSI (L1B) and CO3D, and more will be added soon: TRISHNA, 3MI/MetOp-SG, MicroCarb. The library is built to be generic; other sensors could easily be supported by simply plugging in a product format handler.
The library is designed to be easily integrated into any operational processing chain thanks to its C++ API, but it is also user-friendly for prototyping and expertise through its Python API.
The Government of Canada (GC) uses multiple sources of data to provide services to Canadians. Given the geographic size of Canada and the need for data collection beyond its landmass, this can often be accomplished most efficiently through Earth Observation. The most critical of these sources are the RADARSAT series of satellites. Using a powerful synthetic aperture radar (SAR) to collect digital imagery, the RADARSAT satellites can “see” the Earth day or night and in any weather conditions. The next generation solution is currently being investigated under the Earth Observation Service Continuity (EOSC) program. Due to the wide range of user needs, the EOSC initiative considers a broad set of input data sources, including free and open data, commercial purchase of data, international cooperation and a dedicated SAR system. This paper will look at some initial analysis undertaken under EOSC.
As a first step, a list of User Needs was collected from various Canadian federal user departments and consolidated in the Harmonized User Need document. A few key considerations can be extracted from this list. The required area and coverage frequency have increased compared to the RCM requirements and capabilities. Established applications such as ice monitoring would benefit from an increase in both coverage frequency and resolution. Even for these established applications, gaps remain in measuring some parameters of high interest, such as ice thickness. Finally, the document highlights the importance of access to multi-frequency data for several needs.
The second step was to perform a series of option analysis studies with eight industrial partners to ensure a wide coverage of the potential solutions that could satisfy the complex set of user needs. Although no specific solution has been selected at this stage, the studies generally pointed toward similar findings. A dedicated C-band resource will be needed to meet the User Needs, and some form of multi-aperture/digital beamforming capability will be required to meet the swath and resolution requirements. Challenges remain in simultaneously meeting all User Needs, as a broad range of frequencies, polarizations, coverages and resolutions is required, often conflicting over the same or adjacent AOIs. Free and open data, commercial data and international cooperation with other existing systems are key to responding to these User Needs while limiting the overall system complexity and cost.
Targeted technology development activities are being undertaken to address items of lower technology readiness, including enabling technologies for multi-aperture/digital beamforming antennas, but also ground segment technology development to provide better integrated planning of the dedicated EOSC resources, taking into consideration all available external sources of data.
The current scenario is moving towards the implementation of tools able to meet application needs on board: to have the information required by end-users at the right time and in the right place. And that place is more and more often becoming the space segment, where the availability of actionable information can be a game-changer.
In this approach, part of the EO value chain is being transformed. Value is shifting from the sensed data (which are nowadays becoming a commodity) to “insights” and actionable information. Components of the chain are therefore being moved from the user’s desktop to the cloud and from ground to space. As a final result, the user will no longer need to be aware of which data provide the information they are looking for, or where these data are stored and processed. The application will be the core, and the details of its workflow (data acquisition, processing, selection, information extraction…) can even be completely transparent to users. In practice, the user may only need to define what they really care about; everything else is handled by the system.
This is the scenario the AI-eXpress services (AIX in short) are enabling. AIX makes satellite resources and on-board applications available as-a-service. Customers can pick the application they need from the AIX app store, configure it and run it on a satellite already in orbit. The system will take care of scheduling the data acquisition, transforming data into actionable information and also raising near-real-time alarms when the service requires it. It is based on the SpaceedgeTM on-board artificial-intelligence-based application framework, on machine-to-machine interfaces based on distributed ledger technologies (blockchain), on a high-performance computing cluster and, finally, on the ION cargo spacecraft vehicle.
AIX is a game-changer. It processes data where it is most convenient, starting on board at the “space edge”; it turns EO product generation into services, making the satellite transparent; and it makes on-board resources flexible enough to fit different applications and address different needs, thanks to advances in AI and DLT technologies.
AIX fosters the transition from a traditional space model to a truly commercial one, reducing bottlenecks and barriers, enabling new market opportunities to flourish and enhancing the effectiveness of the services delivered to the ground.
Emerging NewSpace companies may now test their innovative AI algorithms and proofs of concept directly in space and prove their value to the market. Traditional space institutions and research entities may test a new approach, changing from “makers” to “enablers”.
AIX builds an infrastructure open to the integration of third-party resources and services, and aims at building a full ecosystem. Thanks to its quick service deployment, test and operational capabilities, it is a candidate to become a strategic asset for commercial applications ranging from oil and gas asset monitoring and management to energy networks (a market estimated at €1.3B in 2029 by NSR).
It also enables a large variety of government services, supporting both the ESA “accelerators” strategy and the main pillars of the EU Green Deal and the Digital Strategy, and fits as well as a Copernicus contributing asset, in line with the latest Request for Ideas for new Copernicus Contributing Missions.
As the advent of New Space becomes reality, new satellite tasking strategies, increased acquisition capacities, EO data distribution channels and user expectations are all changing beyond recognition.
New Space affects the traditional EO operational scenario, which currently relies on strict boundaries between data providers and value-added or data-stream service providers; however, it brings to the CCM activity many disruptive innovation approaches and the promise of new solutions to complex existing challenges, such as fast response times combined with smooth data delivery. Cloudification of workflow processes greatly improves product availability and usability for users: the data are immediately accessible and exploitation can happen directly on cloud platforms, minimizing product dissemination flows as well as time-to-exploitation costs. Data as a Service (DaaS) is a consolidated approach, and evolving EO data marketplaces are offering domain-specific tool support and downstream applications to maximise the take-up and utility of the data by the Copernicus user community. Collaboration and the chaining of products to tailor specific user requirements will be a challenge in maximizing the exploitation and re-use of EO data.
At the same time, new questions arise that directly impact service sustainability. Among these is how to collaboratively build and promote best operational practices among the growing number and diversity of emerging, new and established actors. The flexible management of a demand-oriented data offer and the improvement of operational processes in terms of standardization and simplification are big new challenges for connecting the Copernicus user community quickly to the necessary source data. New Space is changing the landscape of CCM providers and requires managing operational scenarios in which scalability and diversity are the main drivers, and in which an ecosystem of related services concentrating on streamlining and simplification is paramount. Within such an ecosystem, clear roles and responsibilities need to be guaranteed as reliance on independence and brokerage becomes a key component. In this new paradigm, new actors can enter the scene in a discovery/gatekeeper role, aiming to understand trends in new space technologies, liaising with the service users to foresee and anticipate coming needs, and implementing these within the service.
The NovaSAR mission is a UK technology demonstration mission of a small Synthetic Aperture Radar (SAR) satellite. It is a partnership between the UK Space Agency (UKSA), Surrey Satellite Technology Limited (SSTL), the Satellite Applications Catapult (Catapult) and Airbus, with UKSA, SSTL, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the Indian Space Research Organisation (ISRO) and the Department of Science and Technology-Advanced Science and Technology Institute (DOST-ASTI) sharing the observational capacity of the satellite. NovaSAR-1 was launched in September 2018, with the service starting in late 2019 and a nominal lifespan of 7 years.
NovaSAR was built as a low-cost payload demonstrator, with a manufacturing cost of just 20% of that of traditional SAR satellites, while ensuring flexible, high-performance SAR imaging capabilities. It is an S-band SAR satellite, opening up new observation capabilities to users, since most space-borne SAR missions have so far focussed on C-, L- or X-band. It has a revisit time of around 14 days and a range of acquisition modes. More importantly, it is the first civilian SAR mission to carry an AIS receiver on board, meaning simultaneous SAR observation and AIS message reception, which has not been possible before. To strengthen this capability, it is equipped with a maritime acquisition mode, designed to maximise ship detection over large areas of sea or ocean.
Analysis Ready Data (ARD) has become an increasingly important feature of making satellite mission data more accessible and usable by a wider audience, including non-specialists in Earth Observation. An ever-increasing number of services and applications rely on ingesting and analysing ARD, and so making NovaSAR data available in an ARD format is seen as key for all mission partners to realise their key mission objectives. Among these are to increase the uptake of medium-resolution SAR data through the development of novel applications, supporting respective government-mandated scientific objectives and increasing national expertise in the use of SAR data.
This study will show the creation of a NovaSAR ARD pipeline, with a large collaborative effort from mission partners to align ARD processing flows and work together to resolve some ongoing issues with the NovaSAR data. The aim is to produce ARD alongside level 1 NovaSAR data that are compliant with CEOS ARD for Land (CARD4L) standards, for both Stripmap and ScanSAR acquisition modes, thus covering all NovaSAR acquisition modes (with the exception of maritime mode). A timeline towards the goal of routinely making NovaSAR ARD available, along with details of specific applications this will enable, will be presented.
As the global constellation of satellites and missions grows, access to this constellation becomes more complex and challenging for large organisations and institutions. Many such organisations have perennial needs for imagery but require the variety of image sources available to cover their wide variety of use cases. While some satellites provide the best possible spatial resolution, others provide very high temporal resolution, and others provide key imaging bands. These image sources are complementary and all part of a necessary solution for large organisations that wish to have robust and flexible access to the constellation as a whole. The challenge is that simply procuring imagery from these various sources through independent and separately negotiated contracts does not provide a convenient solution for these organisations: it is cumbersome, inefficient and inflexible. As well as dealing with multiple operational, ordering and delivery interfaces, there are commercial challenges around pricing, licensing terms and supplier service level performance. Consolidation is required in order to present the customer with a workable and efficient multi-supplier solution to their imaging needs. Consolidation takes many forms, but the following aspects are key: a single enterprise platform that specifies the terms for supplier on-boarding and compliance and acts as a vehicle for supplier lifecycle management and for user accounts and budgets on a project-by-project basis; an operational dashboard for requesting and delivering imagery from on-boarded suppliers, as well as hosting and archive management; various technical support tools (image alerts from aggregated catalogues, multi-mission planning for assisted tasking); and standardisation as far as possible (mandatory licensing terms, pricing for particular image configurations, etc.). The enterprise platform can be evolved over time: new suppliers can be onboarded and terms and standards can be upgraded as appropriate. The operational dashboard can include access to aggregated catalogues from the on-boarded suppliers (i.e. our EarthImages platform) and standardised tasking requests which are carried out either in a managed or competitive manner by the suppliers, depending on their ability to meet the specification of the request (i.e. EarthImages-on-Demand, developed under funding from ESA). We will describe this new platform and show how it could play a role in the Copernicus CCM programme.
GNSS Radio Occultation (RO) observations from space have been successfully demonstrated for many years and their value for weather prediction is indisputable. The demand for higher spatial and temporal resolution RO observations is steadily increasing, as more and more applications in the downstream market rely on pinpoint-accurate weather prediction. The exploitation of radio waves that are not only bent within the atmosphere but also reflected off the Earth's surface, for the analysis of surface features, is a more recent development and, from the perspective of the instrument architecture, closely related to the RO method.
The GRAS instrument on the MetOp first generation still provides observation data of unprecedented quality, which is a result of high-end GNSS receiver design, including the antenna, clock, LO and RF design. In contrast, the PRETTY mission will provide GNSS passive reflectometry (PR) data with a simple RF and antenna architecture which is suitable for accommodation on a 3U CubeSat. With this approach, good performance can be reached at a fraction of the cost. For the PRETTY development a COTS approach has been followed, and the costs for the signal processing and the development of the high-level software have been significantly reduced.
We present a concept in which the GRAS instrument is enhanced with the PRETTY signal processing part, allowing both RO and PR observations to be performed with the same high-end performance. The high-gain antennas and the front end based on the Saphyrion G3 architecture are used to provide baseband samples to two different signal processing cores, one based on the AGGA-4 architecture and the other on a System on Chip using an ARM processor. The enhanced GRAS instrument can be used as a hosted payload on small satellites and will provide both high-quality RO and PR observations.
For the planned NanoMagSat constellation mission, the University of Oslo (UiO) will contribute a multi-needle Langmuir probe (m-NLP) system. The m-NLP is a compact, light-weight, power-frugal instrument providing in situ ionospheric plasma density measurements. Typically, Langmuir probes operate by sweeping through a range of bias voltages in order to derive the plasma density, a process that takes time and hence limits the temporal resolution to a few Hz. The m-NLP, however, operates with fixed bias voltages, such that the plasma density can be sampled at 2 kHz, providing a spatial resolution finer than the ion gyroradius at orbital speeds. The NanoMagSat m-NLP design is based on heritage from sounding rockets, CubeSats, SmallSats, and an International Space Station payload. In this talk, we will present the science requirements for the NanoMagSat constellation, alongside the derived system design. A new feature of the NanoMagSat m-NLP is its capability to operate any number of probes in fixed-bias mode while others sweep the bias voltage. This allows for the simultaneous high-resolution determination of the plasma density (2 kHz) alongside low-resolution measurements of the electron temperature (a few Hz). Furthermore, the synergy between the in situ plasma density/electron temperature measurements and the magnetic measurements made on board NanoMagSat will be discussed. Finally, initial results from instrument tests within the UiO plasma chamber will be presented.
In the frame of the CubeGrav project, funded by the German Research Foundation, cube-satellite networks for geodetic Earth observation are investigated using the example of the monitoring of the Earth’s gravity field. Satellite gravity missions are an important element of Earth observation from space, because geodynamic processes are frequently related to mass variations and mass transport in the Earth system. As changes in gravity are directly related to mass variability, satellite missions observing the Earth’s time-varying gravity field are a unique tool for observing mass redistribution among the Earth’s system components, including global changes in the water cycle, the cryosphere, and the oceans. Next generation gravity missions (NGGMs) build on the success of the single-satellite missions CHAMP and GOCE as well as the dual-satellite missions GRACE and GRACE-FO launched so far, which are all conventional satellites.
In particular, feasibility as well as economic efficiency play a significant role for future missions, with a focus on increasing spatio-temporal resolution while reducing error effects. The latter include the aliasing of the time-varying gravity fields due to the under-sampling of the geophysical signals and the uncertainties in geophysical background models. The most promising concept for a future gravity field mission from the studies investigated is a dual-pair mission consisting of a polar satellite pair and an inclined (approx. 70°) satellite pair. Since the costs of realizing a double-pair mission with conventional satellites are very high, alternative mission concepts with smaller satellites in the area of New Space are coming into focus. Due to the ongoing miniaturization of satellite buses and potential payload components, the CubeSat platform can be exploited.
The main objective of the CubeGrav project is to derive and investigate, for the first time, optimized cube-satellite networks for Earth’s gravity field recovery, with a special focus on the achievable temporal and spatial resolution and the reduction of temporal aliasing effects. In order to achieve the overall mission scope, the formation of interacting satellites, including the inter-satellite ranging measurements, the relative navigation of the satellites and the networked control of the multi-satellite system, is also analyzed in a second step. A prerequisite for the realization of a CubeSat gravity mission is the miniaturization of the key payload, such as the accelerometer, which measures non-gravitational forces such as the drag of the residual atmosphere, and the instrument for the highly accurate determination of the ranges or range rates between the satellites.
This contribution presents recent results of the CubeGrav project and a preliminary mission concept, and focuses on the scientific added value compared to existing satellite gravity missions. A set of miniaturized gravity-relevant instruments, including the accelerometer and the inter-satellite ranging instrument, with realistic error assumptions, is identified for use in CubeSats, and their capabilities and limits in determining the gravity field are investigated in the frame of numerical closed-loop simulations. The applicability of the above is further translated into potential preliminary satellite bus compositions and achievable orbital baselines. With this approach we can identify the minimum requirements regarding instrument performance and satellite system design. Additionally, different satellite formations and constellations will be analysed regarding their potential for retrieving the temporal gravity field.
Satellite-based Earth observation data is today more available than it has ever been, yet it still struggles to meet the demand from its customers. Meeting end-user demand is challenged by conflicting needs such as tasking priority, coverage, quality, spectral band selection, resolution and product latency. Traditionally, priority goes to the highest bidder, leaving emerging applications requiring scientific quality behind or limited to high-quality government missions. The EarthDaily Satellite Constellation (EDSC) is a customer-requirement-driven operational enterprise solution for monitoring that works interoperably with government science missions, removes priority tasking by imaging the Earth’s landmass every day, and delivers a flexible scientific-grade product offering designed to seamlessly integrate with the machine learning and artificial intelligence algorithms powering geoanalytics applications.
In 2023, EarthDaily Analytics will launch the 9-satellite EDSC. It will be the world’s first Earth observation system planned from the ground up to power machine-learning- and artificial-intelligence-ready geoanalytics applications on a daily global scale. The processing, calibration and QA engine behind our constellation is the EarthPipelineTM, which has been in development for more than eight years and is the world’s first ground segment pipeline as a service. The EarthPipelineTM is our cloud-native processing service that transforms raw downlinked satellite data into high-quality Analysis Ready Data and is designed and tested to handle quality, scale and automation for all sensor types and modalities. This service is based on rigorous satellite and physical modelling, combined with the latest advancements in computer vision and machine learning, to automatically produce the highest quality scientific-grade satellite imagery products on the market at scale.
With 20+ spectral bands well aligned with leading science missions, including Sentinel-2 and Landsat-8, and backed by the EarthPipelineTM’s continuous calibration engine, the EarthDaily mission will be an unprecedented monitoring and change detection solution for near-real-time situational awareness of the natural environment at scale.
In addition to the global need for environmental stewardship, the market itself demands better monitoring of the environment across all industries, due to investor demands and the financial risks imposed by climate change and environmental degradation. While open data solutions are more widely available than ever, a persistent need remains for daily, global, scientific-quality spectral bands paired with analysis-ready data production. Daily scientific data means a chance of cloud-free observation every few days is almost guaranteed, and can be used to feed better phenological modelling such as tree carbon accounting and agricultural yield estimation. EDSC includes Short Wave Infrared (SWIR) bands to dramatically improve land cover differentiation, fire delineation, atmospheric correction and mask generation. Other specialized bands will provide global daily services for scouting the presence of methane anomalies, forest fire detection, impact and risk assessment, water quality evaluation, and carbon cycle monitoring, which all serve as vital inputs for large-scale climate modelling and mitigation. EDSC’s combination of spectral bands and interoperability with the gold standards of Earth observation (Landsat, Sentinel, and other government science missions) will offer end users an unprecedented combination of daily global coverage, quality and resolution that will deliver impactful solutions to many of the world’s most pressing challenges.
Born from the “Sharing Economy”, the OpenConstellation proposes a different way of getting access to satellite imagery. Buying a single satellite, or a small fleet, does not usually provide significant coverage capacity, and the revisit is definitely disappointing. Trying to acquire large, recent coverage in the commercial market proves to be difficult and expensive when quality is a strong requirement, not to mention getting real-time imagery in case of emergency, or the homogeneity problems when using different sources.
The Open Cosmos constellation is designed to solve these problems by creating a structure in which constellation partners share their spare capacity with the others. Open Cosmos ensures seamless operation and clear sharing rules, and provides all the technical infrastructure, from ground stations to analysis-ready data, to make this possible. Becoming a member of the OpenConstellation means that the investment is immediately multiplied by the sharing effect, and all the constellation management, from downloading to processing and sharing, is greatly simplified.
The Open Cosmos data platform is the final element in this chain, providing seamless access to images and derivative products for partners, downstream providers and final users.
The OpenConstellation is designed to support the services and applications required by its partners, by providing unprecedented access to reliable and affordable satellite imagery and connecting it to a powerful data platform. This allows constellation members to become more efficient and competitive by enabling a full new line of space-data-driven decisions.
The year 2022 will see the first three to five satellites of the constellation launched, and the final schedule for the launch of the full constellation will be fixed.
What makes the OpenConstellation different from other initiatives?
- Cost effective satellite imagery:
Being a member of the OpenConstellation will result in accessing 10x more affordable data.
- The OpenConstellation is born from collaboration:
And thus makes an efficient use of resources by sharing the unused capacity of some satellites to serve other members’ needs.
- The OpenConstellation is built from user needs:
Satellites that become part of the OpenConstellation are coming from actual user demand. No technology push syndrome.
- The OpenConstellation is varied:
Most constellations are based on replicating the same satellite type. The OpenConstellation offers a set of complementary sensors better suited for the demanding new applications. “If you only have a hammer, all problems are nails”.
- Access to the OpenConstellation is provided through a state-of-the-art data platform:
This simplifies the process of finding and managing data, and offers an ecosystem of applications from value-added providers ready to provide standard (e.g. change detection) and bespoke solutions.
- The OpenConstellation evolves quickly with new technological advances:
New sensor technologies are developed every day. The OpenConstellation is enhanced with technological advances much faster than any other due to its open design.
The presentation will describe in depth the constellation design (orbital planes, number of satellites, technical features), the mechanisms designed to effectively implement the capacity sharing, and will demonstrate some early use cases of the data platform run with actual customers.
Observing the Earth and understanding its evolution is fundamentally linked to the ability to collect large amounts of data and extract the intelligence needed to derive and validate models of such an incredibly dynamic system. While on one side this ability is enabled by the growing number of data sources, on the other side next-generation space platforms, integrating powerful on-board processing capabilities, are bringing transformational opportunities to the Earth sciences. As a consequence, on-the-edge computing in space is becoming a reality thanks to the multiple-teraflops performance of new spacecraft platforms. In addition to performance, the community is looking for missions with higher spatial and temporal coverage (requiring constellations of satellites) and more rapid design cycles.
LuxSpace has been at the forefront of rapid satellite development since before New Space was a known phrase, developing and building two VesselSats in one year and the first privately funded Moon mission (4M) in less than half a year.
Today, LuxSpace is developing the innovative Triton-X platform, which uses extensive spin-in from the automotive and other Earth-bound industries to achieve a winning combination of high quality and performance with low recurrent cost and turn-around time. Triton-X will be a modular range of platforms from about 30 to 250 kg, adaptable to a wide range of payloads up to the 100 kg class. Triton-X is being developed with the support of ESA, giving LuxSpace access to a huge store of expertise and advice, while keeping the freedom to use New Space approaches.
A key aspect of Triton-X is the on-board processing power of its integrated avionics unit (IAU) and its robust modular architecture, which results in high reliability and robustness on the basis of high-performance, low-cost, high-end COTS electronics. This architecture can scale easily to different mission sizes and demands, being adaptable in software to the specific needs of the applications. The key elements of this architecture are being developed by LuxSpace and a small core group of partner companies. Due to the low recurring cost of the avionics, Triton-X is especially well suited for small constellations of high-performance satellites.
A number of missions focused on Earth resources monitoring have been studied for potential customers, including atmospheric trace-gas monitoring, maritime surveillance, spectrum monitoring, in-orbit demonstration of a large set of payloads, and others.
Mission Agile Nano-Satellite for Terrestrial Image Services (MANTIS) is a nano-satellite designed to monitor and help understand oil & gas energy supply chains. Oil & gas energy supply chains are highly topical because of their criticality to the development of humankind, but also for the impact they have on nature and the Earth’s climate. They are extremely complex owing to their diversity, distribution and scale. Timely and trustworthy information on these supply chains is valuable to a wide range of user segments such as oil & gas operator and service companies, ESG investors, commodity traders, IGOs & NGOs, and regulators. This poster presentation introduces MANTIS and explains how valuable business insights relating to the oil & gas energy supply chain will be derived from its high-resolution optical imagery.
Satellite remote sensing is a well-established methodology for observing natural and anthropogenic terrestrial and atmospheric processes. The USGS-operated Landsat missions have kept a record of land cover change for 50 years, while the Meteosats have observed the atmosphere for a similar period. The Landsats and Meteosats were designed with a clear objective in mind and have since been adapted to solve a wide range of opportunistic goals. Like changing land cover and the atmosphere, understanding something as critical and complex as the energy supply chain warrants a target-specific approach to mission design. The MANTIS satellite will address a perceived gap in the availability of ultra-economic, high-resolution, high-frequency optical imagery from which oil & gas infrastructure can be detected and classified. These data will be used in concert with a wide range of other Earth Observation (EO) data to derive detailed and timely insights on activity relating to oil & gas production. This topic is addressed in more detail under the ESA ARTES IAP Energy SCOUT project.
Initially, MANTIS will focus on the detection and classification of features and events related to onshore unconventional natural gas production from shale deposits. This form of production, more commonly known as ‘fracking’, is controversial owing to its significant surface footprint impacting on biodiversity, potential impact on the water table, and capacity to release methane, a potent greenhouse gas, into the atmosphere. Unconventional natural gas has however transformed the United States from a net energy importer to exporter (EIA AEO 2020 report), and with significant economically viable shale gas resources known to exist throughout the world, remains a highly topical subject.
[Figure: Historic and forecast energy production and consumption in the United States. EIA Annual Energy Outlook (AEO) 2020.]
The unconventional natural gas sector is extremely fast moving, with wells being drilled and brought into production in a matter of weeks. Knowing where development and production are occurring and what stage the process is at is critical to understanding how these natural resources contribute to the energy mix and their impact on the environment. The MANTIS mission has been designed to monitor these processes at the spatial and temporal resolution needed to gain a deeper understanding of this activity.
The MANTIS satellite is targeting a 515 km sun-synchronous orbit with a local time of the ascending node (LTAN) of 22:30. Images for the regions of interest will be acquired in the visible (RGB) and near-infrared (NIR) wavelengths. The payload onboard the MANTIS mission (iSIM90-12U) offers the same RGB and NIR spectral bands as specified for the ESA Sentinel-2 EO satellites.
The ground sampling distance (GSD) of the post-processed images, including degradation due to platform and orbital effects, will be 2.5 m in RGB and 3.0 m in NIR. The images will be characterised by a signal-to-noise ratio of 55 (for a solar elevation angle of 33.8 degrees) and a modulation transfer function (MTF) in the range 17-22%. The mission is being developed to achieve a geolocation accuracy of better than 100 metres. The aforementioned mission performance has been defined to enable the extraction of valuable information from the MANTIS imagery by means of Terrabotics' detection and classification workflow.
The Areas of Interest (AOIs) targeted by the mission have been defined considering the regions of highest activity in the unconventionals-based energy supply chain. Short-term variations in market demand are also accommodated through autonomous tasking based on end-user inputs on new Points of Interest (POIs). End users will be able to submit tasking requests to Open Cosmos's Mission Operations Centre, and these requests will inform the definition of the image acquisition plan.
While imagery data will be available to purchase, the primary use of MANTIS imagery will be to provide high-resolution imagery to the Terrabotics Energy SCOUT service. This high-resolution imagery will provide greater information content to Energy SCOUT end users by allowing the identification and classification of small-scale events occurring at oil & gas production sites.
SATLANTIS is a European leader in HR and VHR Earth Observation capabilities offering an EO Space Infrastructure built around iSIM (the integrated Standard Imager for Microsatellites) optical-payload concept for small satellites. SATLANTIS offers an End-to-End solution from Upstream satellite development, launch, and operations to Downstream data generation, processing and delivery (e.g., data analytics for methane measurements).
SATLANTIS’ iSIM imager presents three main disruptive capabilities for low-mass optical payloads: enhanced spatial resolution, multispectrality and agility, which can have a relevant impact on several EO value-added applications.
URDANETA is SATLANTIS’ first fully owned satellite, to be launched in Q2 2022 on a SpaceX Falcon 9. The satellite incorporates an iSIM-90 imager on a 16U CubeSat bus, providing an innovative solution for several Earth Observation applications. Its main characteristics are: 17.4 kg mass, 4 spectral bands (RGB and NIR), 2.5 m ground resolution in RGB and NIR, 14.3 km swath, pointing agility > 1º/s and a 98 Mbps data download rate.
GEI-SAT is the name of a family of four satellites that embark SATLANTIS’ innovative solution for methane emission detection and quantification. GEI-SAT pursues hot-spot mapping of low emission levels of methane with a low-mass, low-cost, very high-resolution multispectral SWIR camera onboard CubeSats and MicroSats.
The constellation consists of a first GEI-SAT Precursor, a 16U CubeSat (17.4 kg, ~150 kg/h detection threshold, 2.2 m resolution @VNIR and 13 m resolution @SWIR, up to 1700 nm) to be launched in Q3 2023; two Microsats (92 kg, ~100 kg/h detection threshold, 0.8 m resolution @VNIR and 7 m resolution @SWIR, up to 1700 nm) to be launched in Q3 2024; and a further Microsat with expanded spectral capabilities, similar resolution and a better detection threshold (92 kg, ~50 kg/h (TBC) detection threshold, 0.8 m resolution @VNIR and 9 m resolution @SWIR, up to 2300 nm) to be launched in Q3 2025.
The LEO constellation, composed of one CubeSat and three Microsats, will employ robust, flight-proven platforms compatible with small launchers. The operational lifetime will be 4 years for the CubeSat and more than 5 years for the Microsats. The Ground Segment will include the mission operations and control centre and the data processing and services centre.
The proposed methane detection method is Multispectral Differential Photometry, carried out in collaboration with end users such as ENAGAS. Images are taken through several filters and the methane absorption is obtained from the different signal values measured at the different wavelengths. The acquired images first have to be corrected for atmospheric effects using radiative transfer models in order to pass from detector units (e-/s) to column concentration units (ppb·m); these column concentrations then have to be corrected for wind effects, using meteorological models, in order to pass to flux units (kg/h). A simplified sketch of this last conversion step is shown below.
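As a rough illustration of that final step (not the operational SATLANTIS processor), the sketch below converts a hypothetical across-plume profile of methane column enhancement in ppb·m into a flux in kg/h with a simple cross-sectional mass-balance approach; the air number density, pixel size, wind speed and enhancement values are all assumptions.

```python
import numpy as np

# Hedged sketch only: convert an across-plume methane column-enhancement
# profile [ppb·m] into a flux [kg/h] with a simple cross-sectional approach.
# Assumes a near-surface plume; all numerical inputs are hypothetical.

N_AIR = 2.5e25 / 6.022e23    # approx. near-surface air number density [mol/m^3]
M_CH4 = 0.01604              # molar mass of CH4 [kg/mol]

def cross_sectional_flux(enhancement_ppb_m, pixel_size_m, wind_speed_ms):
    """Flux [kg/h] from a 1-D across-plume enhancement profile [ppb·m]."""
    column_kg_m2 = enhancement_ppb_m * 1e-9 * N_AIR * M_CH4   # ppb·m -> kg/m^2
    flux_kg_s = wind_speed_ms * np.sum(column_kg_m2) * pixel_size_m
    return flux_kg_s * 3600.0

profile = np.array([0.0, 2e4, 6e4, 1.2e5, 8e4, 3e4, 5e3])     # ppb·m (hypothetical)
print(f"Q ≈ {cross_sectional_flux(profile, pixel_size_m=13.0, wind_speed_ms=3.0):.1f} kg/h")
```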
The GEI-SAT constellation will contribute to improving annual reporting on methane emissions with higher-frequency measurements and will help prepare for global certification of CH4 emission reductions under future legislation worldwide. The high spatial resolution that GEI-SAT provides in its SWIR channel, together with the geolocation provided by its very high-resolution VNIR channel images, will allow an unprecedented ability (4 to 16 times better than other satellites) to pinpoint the exact location of a methane leak or uncontrolled emission at a global scale. To achieve this objective, the constellation will be operated in coordination with Sentinel-5P, providing an operational tipping-and-cueing capability for the quantification of CH4 point sources.
By enhancing EO capabilities from space and pursuing climate-mitigation applications derived from satellite-based observations, both missions rely on innovative on-board EO data processing and share the goal of a more sustainable life on Earth, showcasing SATLANTIS as an end-to-end service provider covering both space and ground segments and using its own innovative imagery for specific application domains such as GHG emissions (GEI-SAT) and land/water quality (URDANETA).
The BlackSky constellation, owned and operated by the US company BlackSky Inc. (NYSE: BKSY), whose data is distributed by Telespazio/e-GEOS in Europe and worldwide thanks to an agreement signed in 2018, is a new very high-resolution optical (VHRO) EO mission designed with the goal of providing the highest daily revisit on the market at about 1 m resolution.
Currently the system is composed of eight satellites launched since 2019; the two newest, launched on 18 November 2021, reached orbit and delivered their first images within 14 hours of launch. The constellation is growing fast: an additional two to four satellites are planned for launch by the end of 2021, to be followed by further satellites of the same class (less than 60 kg mass) by mid-2022. Reaching the baseline of at least 16 operational satellites will allow a revisit of more than 8 acquisitions per day.
The orbital configuration of the constellation (including polar and inclined orbits) allows multiple acquisitions every day, during all daylight hours, with on-demand satellite tasking and fast access to the constellation at multiple priority levels, granting a unique “first-to-know” advantage.
The imaging performance is based on a framing camera with a colour filter array that, in its latest version (thanks also to the move to a lower orbit), provides sub-metric resolution over an area of about 25 km².
The BlackSky image quality is monitored and calibrated continuously, as soon as new satellites become available, using specific calibration targets. e-GEOS will present some of its internal analysis on the images, as well as examples of operation tests that have already demonstrated a very short tasking lead-time and fast delivery timelines.
Gravity waves are important for atmospheric dynamics and play a major role in the mesosphere and lower thermosphere (MLT). Thus, global observations of gravity waves in this region are of particular interest. To resolve the upward propagation, a limb sounding observing system with high vertical resolution is developed to retrieve vertical temperature profiles in the MLT region. The derived temperature fields can be subsequently used to determine wave parameters.
The measurement method is a variant of Fourier transform spectroscopy: a spatial heterodyne interferometer is used to resolve rotational structures of the O$_2$ atmospheric A-band airglow emission in the near-infrared. The emission is visible during day- and night-time, allowing continuous observation. The image is recorded by a 2D detector array with hundreds of pixels along each axis: the horizontal axis contains the interference pattern, while the vertical axis corresponds to different tangent altitudes, so a vertical profile can be resolved with a single image. The method exploits the relative intensities of the emission lines to retrieve temperature, so no absolute radiometric calibration is needed, which simplifies the calibration of the instrument. Silicon-based detectors such as CCDs or CMOS sensors can be used. These operate in ambient conditions and do not require active cooling devices, which allows the instrument to be deployed on nano- or micro-satellite platforms such as CubeSats.
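The temperature retrieval from relative line intensities rests on the rotational lines following a Boltzmann distribution. As a minimal, purely illustrative sketch (not the instrument's actual retrieval code, and with placeholder line parameters), the temperature can be estimated from the slope of ln(I/S) versus the upper-state energy:

```python
import numpy as np

# Illustrative sketch: relative intensities of rotational emission lines follow
#   I_i ∝ S_i · exp(-E_i / (k_B T)),
# so a linear fit of ln(I_i / S_i) against the upper-state energy E_i yields T
# from the slope. Line strengths S_i and energies E_i would normally come from
# a spectroscopic database; the values below are placeholders.

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def rotational_temperature(intensities, line_strengths, upper_energies_J):
    """Retrieve temperature [K] from relative line intensities (Boltzmann plot)."""
    y = np.log(intensities / line_strengths)
    slope, _ = np.polyfit(upper_energies_J, y, 1)   # y = -E/(k_B T) + const
    return -1.0 / (K_B * slope)

# Synthetic example: lines generated for T = 200 K are recovered by the fit.
E = np.array([1.0, 2.0, 3.5, 5.0, 7.0]) * 1e-21     # hypothetical energies [J]
S = np.array([1.0, 1.8, 2.2, 1.9, 1.1])             # hypothetical line strengths
I = S * np.exp(-E / (K_B * 200.0))
print(f"retrieved T ≈ {rotational_temperature(I, S, E):.1f} K")
```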
After a successful in-orbit demonstration of the measurement technology in 2018, this instrument will be developed next within the International Satellite Program in Research and Education (INSPIRE). The European Commission has preselected the instrument for an in-orbit validation to demonstrate innovative space technologies within its H2020 program.
Following the success of the PHI-Sat mission, in 2020, the European Space Agency (ESA) announced the opportunity to present CubeSat-based ideas for the PHI-Sat-2 mission to promote innovative technologies such as Artificial Intelligence (AI) capabilities onboard Earth Observation (EO) missions.
The PHI-Sat-2 mission idea, submitted jointly by Open Cosmos and CGI, leverages the latest research and developments in the European ecosystem: a game-changing EO CubeSat platform capable of running AI Apps that can be developed, uploaded, deployed and orchestrated on the spacecraft, and updated during flight operations. This approach allows continuous improvement of the AI model parameters using the very same images acquired by the satellite.
The development is divided into two sequential phases: the Mission Concept Phase, now almost complete, which shall demonstrate the readiness of the mission by validating the innovative EO application through a breadboard-based test, and the Mission Development Phase, which shall be dedicated to the design and development of the space and ground segments, launch, in-orbit operations, data exploitation, and distribution.
The PHI-Sat-2 mission, led by Open Cosmos, will be used to demonstrate the enabling capability of AI for new, useful and innovative EO techniques of relevance to EO user communities. The overall objective is to address innovative mission concepts, fostering novel architectures to meet user-driven science and applications by means of on-board processing, based on state-of-the-art AI techniques and on-board AI-accelerator processors.
The mission will take advantage of the latest research on CubeSat mission operations and use the NanoSat MO Framework, which allows software to be deployed in space as simple Apps, in a similar fashion to Android apps, as previously demonstrated on ESA's OPS-SAT mission, and which supports the orchestration of on-board Apps.
Φ-sat-2 will carry a set of default AI Apps covering different ML approaches and methodologies, such as supervised learning (image segmentation, object detection) and unsupervised learning (auto-encoders and generative networks), which are presented below.
Since the Φ-sat-2 mission relies on an optical sensor, the availability of a Cloud Detection App (developed by KP-Labs), which will generate a cloud mask and identify cloud-free areas, is a baseline. This information can be exploited by the other Apps, which is not only relevant for on-board resource optimisation but will also demonstrate the on-board AI App pipeline.
The Autonomous Vessel Awareness App (developed by CEiiA) will detect and classify vessels. Together with the demonstration of scouting with a wider-swath sensor, this App will show how information generated in space can be exploited for mission operations, e.g. in a satellite constellation, to identify areas for the next acquisitions.
The Sat2Map App (developed by CGI) transforms a satellite image into a street map for emergency scenarios using Artificial Intelligence. The software uses the Cycle-Consistent Adversarial Networks (CycleGAN) technique to perform the transformation from satellite image to street map. In case of an emergency (earthquake, flood, etc.), this App will enable the satellite to provide rescue teams on the ground with near-real-time information on the streets that are still available and accessible.
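For illustration, the core idea behind the CycleGAN technique is a cycle-consistency term that, in full training, is combined with adversarial losses; the sketch below uses toy stand-in generators and is not CGI's Sat2Map implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of the cycle-consistency loss in CycleGAN-style translation
# (satellite image <-> street map). G maps images to maps, F maps maps back to
# images; both are placeholders, not the actual Sat2Map networks.

def cycle_consistency_loss(G, F, sat_batch, map_batch, lam=10.0):
    l1 = nn.L1Loss()
    loss_sat = l1(F(G(sat_batch)), sat_batch)   # satellite -> map -> satellite
    loss_map = l1(G(F(map_batch)), map_batch)   # map -> satellite -> map
    return lam * (loss_sat + loss_map)

# Tiny stand-in generators, just to make the sketch runnable.
G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
F = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
sat = torch.rand(2, 3, 64, 64)
maps = torch.rand(2, 3, 64, 64)
print(cycle_consistency_loss(G, F, sat, maps).item())
```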
The High Compression App (developed by Geo-K) will exploit deep auto-encoders to perform AI-based image compression on board and reconstruction on the ground. The performance of the App will be measured not only in terms of the standard trade-off between compression rate and image similarity, but also in terms of how well the reconstructed image can be exploited by other Apps, e.g. for object recognition, pushing the limits of AI-based image compression in space and reconstruction on the ground.
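As a minimal sketch of the general idea (not Geo-K's App), a convolutional auto-encoder compresses an image patch into a small latent tensor on board and reconstructs it on the ground; the architecture and sizes below are assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of a convolutional auto-encoder for image compression: the
# encoder output is the compact representation to be downlinked, the decoder
# reconstructs the image on the ground.

class CompressionAE(nn.Module):
    def __init__(self, channels=3, latent_channels=8):
        super().__init__()
        self.encoder = nn.Sequential(                     # 64x64 -> 8x8 latent
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_channels, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(                     # 8x8 latent -> 64x64
            nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = CompressionAE()
x = torch.rand(1, 3, 64, 64)
recon = model(x)
print(recon.shape, nn.functional.mse_loss(recon, x).item())
```

In practice the reconstruction loss would be complemented by task-oriented metrics, in line with the evaluation described above.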
On top of this, the mission will be open to applications developed by third parties, which augments the disruptiveness of a mission concept in which the satellite, already in space, becomes available to a research and development community as a commodity. These third-party Apps can be uploaded and started or stopped on demand. This concept is extremely powerful, enabling future AI software to be developed and easily deployed on the spacecraft, and represents an enabler for in-flight, on-mission continuous learning of the AI networks.
The presentation aims to describe the PHI-Sat-2 mission objectives and how the different AI applications, orchestrated by the NanoSat MO Framework, will demonstrate the disruptive advantages that the onboard AI brings to the mission.
There is a growing interest in miniaturised, lightweight, cost-effective, scientific-grade multi-spectral imagers for high-radiation environments in low Earth orbit. Over the past decade numerous CubeSat cameras have been launched, both on experimental student satellites and on operational constellations of Earth Observation CubeSats. These instruments have primarily been designed for recording visually appealing images, without any particular emphasis on the radiometric quality of the data.
We are presenting THEIA, an Earth Observation Imager being developed by the University of Tartu, capable of providing scientific-grade data suitable for quantitative remote sensing studies. It makes use of two sensors and optical beam splitting technology to separate two spectral bands. It is able to deliver radiometrically calibrated imagery thanks to an on-board calibration unit, thus offering the possibility to provide complementary data to large Earth Observation missions such as Sentinel-2.
THEIA can be used on platforms ranging from small standardised CubeSats up to satellites of any size and shape; it is radiometrically calibrated and applicable to quantitative remote sensing. It can also be used on manned or unmanned aerial vehicles, where miniaturisation helps to save mass and volume.
The imager is designed in cooperation with ESA under the Industry Incentive Scheme and the General Support Technology Programme.
Current methods to remotely identify and monitor thermal energy emissions are limited and costly. Manual inspection remains the most common but can become time-consuming and complex to undertake depending on how spread out the assets are.
In 2022 Satellite Vu will launch the world’s first commercial constellation of high-resolution thermal imaging satellites. Constructed in the UK, the constellation will be capable of resolving building-level measurements, providing an accurate determination of relative temperature at multiple times of day or night. This unique technology will help us better understand change and activity within the built and surrounding natural environment that traditional visible-wavelength imagery cannot detect. High spatial resolution Medium Wave InfraRed (MWIR) imagery provides several key differentiators to visible imagery (VIS) and has the potential to become a high-value data product for the EO market:
· The vast majority of currently available imagery in the visible waveband is captured at mid-morning or mid-afternoon local times due to the reliance on good illumination conditions; in particular, no images can be captured during the night. MWIR imagery overcomes this limitation as the detectable signal depends only on the temperature of the scene, thereby enabling imaging at any local time.
· The ability to contrast the relative temperature of target objects will provide information on features that would otherwise be invisible, such as the energy efficiency of buildings or outflows of pollution into rivers and the sea.
o MWIR data will also provide insight into the level of human activity within a scene, for example, determining which buildings are occupied and sources of waste energy.
o It is also possible to gain a level of temporal information by monitoring temperature changes.
Very little civilian MWIR EO data is available, and almost all of it is medium to low resolution (between ~1000 and 3000 m GSD), which is too coarse to distinguish the finer details that enable high-value applications. The key to providing data products with maximised utility in MWIR is to produce high-resolution data at low cost. This translates into a requirement for a high-performance MWIR imager delivering a small GSD and fitting into a sufficiently small, low-cost and agile platform to enable the deployment of constellations. The Satellite Vu constellation will achieve a sub-4 m GSD and will be accommodated on a spacecraft with a launch mass of about 130 kg. This will enable a low enough price per spacecraft to make building constellations an attractive and worthwhile commercial investment.
The presentation will detail the Satellite Vu constellation capabilities and explore how high-resolution intra-day thermal satellite imaging will impact our ability to monitor energy use and environmental change on a global scale.
The Amazon rainforest is the largest moist broadleaf tropical forest on the planet and plays a key role in regulating environmental processes on Earth. It is a crucial element in the carbon and water cycles and acts as a climate regulator, e.g. by absorbing CO2 and producing about 20% of the Earth's oxygen, thereby counteracting global warming. Monitoring changes in such forested areas, as well as understanding the water dynamics in such a unique biome, is of key importance for our planet. Synthetic Aperture Radar (SAR) systems, thanks to their capability to see through clouds, are an attractive alternative to optical sensors for remote sensing over such areas, which are covered by clouds for most of the year.
From TanDEM-X acquisitions it is possible to derive amplitude as well as bistatic coherence images. By exploiting the interferometric coherence, and specifically the volume correlation factor, it is possible to distinguish forested areas from non-vegetated ones, as demonstrated for the generation of the global TanDEM-X Forest/Non-Forest Map, that was based on a supervised clustering algorithm [1]. The interferometric coherence was also the main input for global water mapping using a watershed segmentation algorithm, as shown in the production of the TanDEM-X Water Body Layer [2]. On both global products, provided at a resolution of 50 m x 50 m, it was necessary to mosaic overlapping acquisitions to reach a good final accuracy.
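For illustration, the interferometric coherence exploited in these products is typically estimated from two co-registered single-look complex (SLC) images with a moving window; the following sketch uses NumPy/SciPy and synthetic data, and is not the TanDEM-X operational processor.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Illustrative coherence estimate between two co-registered SLC images s1, s2:
#   gamma = |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>)
# averaged over a win x win boxcar window (window size is a placeholder).

def coherence(s1, s2, win=5):
    cross = s1 * np.conj(s2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                  uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

rng = np.random.default_rng(0)
s1 = rng.standard_normal((100, 100)) + 1j * rng.standard_normal((100, 100))
s2 = 0.8 * s1 + 0.2 * (rng.standard_normal((100, 100)) + 1j * rng.standard_normal((100, 100)))
print(coherence(s1, s2).mean())        # mean coherence of the synthetic pair
```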
Deep learning methods, specifically the U-Net presented in [3], showed promising results in accurately distinguishing forested areas on a limited set of single TanDEM-X full-resolution images at 12 m x 12 m. In the present study on forest and water monitoring over the Amazon rainforest, this U-Net architecture has been used as a basis to extend the capabilities of deep learning methods to TanDEM-X images acquired with a larger variety of acquisition geometries and to provide large-scale maps including forest and water detection. The height of ambiguity (related to the perpendicular baseline) and the local incidence angle have been included in the input feature set as the main descriptors of the bistatic acquisition geometry. The U-Net has been trained from scratch, avoiding any transfer learning from previous works, by implementing an ad-hoc strategy which allows the model to generalize well across all acquisition geometries. Mainly images acquired in 2011 and 2012, representing the high variability in the interferometric acquisition geometries, have been used for training, in order to minimize the temporal distance to the independent reference used, a forest map based on Landsat data from 2010. The images selected for training and validation of the U-Net, as well as those selected for testing, cover the three ranges of imaging incidence angles as in [1], as well as heights of ambiguity between 20 m and 150 m. Special attention was paid to balancing the three classes, forest, non-forest and water, in each of the combined ranges of imaging incidence angle and height of ambiguity.
By applying the proposed method to single TanDEM-X images, we achieved a significant performance improvement on the test images with respect to the clustering approach developed in [1], with an F-score increase of 0.13 for the forest class. The improvement of the forest classification with the CNN is observable overall, but is especially noticeable over densely forested areas (percentage of forest samples > 70%). Moreover, the deep learning classification approach can be extended to images acquired with a height of ambiguity > 100 m, which was a limitation of the clustering approach shown in [1]. Indeed, with the clustering approach, images acquired with high height-of-ambiguity values resulted in an ambiguous forest classification, due to the smaller perpendicular baselines between the satellites, which reduce the volume decorrelation.
Such improvements make it possible to extend the number of useful TanDEM-X images and allow us to skip the weighted mosaicking of overlapping images used in the clustering approach to achieve a good final accuracy at large scale. Moreover, no external reference is needed to filter out water bodies, as was the case for the forest/non-forest map in [1]. In this way, we were able to generate three time-tagged mosaics over the Amazon rainforest utilizing the nominal TanDEM-X acquisitions between 2011 and 2017, simply by averaging the single-image maps classified by the ad-hoc trained CNN. These mosaics can be exploited to monitor changes over the Amazon rainforest through the years and to follow deforestation patterns and changes in river bed extent. By increasing the number of TanDEM-X acquisitions over the Amazon and applying the trained CNN, it will be possible to perform near-real-time forest monitoring over selected hot-spot areas and to easily extend this classification approach to other tropical forest areas.
[1] M. Martone, P. Rizzoli, C. Wecklich, C. Gonzalez, J.-L. Bueso-Bello, P. Valdo, D. Schulze, M. Zink, G. Krieger, and A. Moreira, “The Global Forest/Non-Forest Map from TanDEM-X Interferometric SAR Data”, Remote Sensing of Environment, vol. 205, pp. 352–373, Feb. 2018.
[2] J.L. Bueso-Bello, F. Sica, P. Valdo, A. Pulella, P. Posovszky, C. González, M. Martone, P. Rizzoli. “The TanDEM-X Global Water Body Layer”, 13th European Conference on Synthetic Aperture Radar, EUSAR, 2021.
[3] A. Mazza, F. Sica, P. Rizzoli, and G. Scarpa, “TanDEM-X forest mapping using convolutional neural networks”, Remote Sensing MDPI, vol. 11, 12 2019.
Floods are among the most frequent and costliest natural disasters, with devastating consequences for people, infrastructure and the ecosystem. During flood events, near-real-time satellite imagery has proven to be an efficient management tool for disaster management authorities. However, one of the challenges is the accurate classification and segmentation of flood water. The generalization ability of binary segmentation using a threshold split-based method is limited due to the effects of backscatter, geographical area, and time of image collection. Recent advances in deep learning algorithms for image segmentation have demonstrated excellent potential for improving flood detection. However, there have been limited studies in this domain due to the lack of large-scale labeled flood event datasets. In this paper, we present three deep learning approaches: a SegNet model, a U-Net, and a Feature Pyramid Network (FPN), with the U-Net and FPN using an EfficientNet-B7 backbone. We leverage multiple publicly available Sentinel-1 datasets, such as the data provided jointly by the NASA Interagency Implementation and Advanced Concepts Team and the IEEE GRSS Earth Science Informatics Technical Committee, the Sen1Floods11 dataset, and another Sentinel-1 based flood dataset developed by DLR. The datasets were labeled in different ways, some based on Sentinel-2 data and others hand-labeled. The performance of all models on the different datasets was evaluated with multiple training, testing and validation runs.
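As an illustration of how such models are commonly set up (assuming the third-party segmentation_models_pytorch library rather than the authors' exact code), a U-Net and an FPN with an EfficientNet-B7 backbone could be instantiated as follows; the two input channels (e.g. Sentinel-1 VV/VH) and the single water class are assumptions.

```python
import torch
import segmentation_models_pytorch as smp

# Sketch only: encoder-decoder flood segmentation models with an
# EfficientNet-B7 backbone; channel count, class count and loss are assumptions.
unet = smp.Unet(encoder_name="efficientnet-b7", encoder_weights="imagenet",
                in_channels=2, classes=1)
fpn = smp.FPN(encoder_name="efficientnet-b7", encoder_weights="imagenet",
              in_channels=2, classes=1)

x = torch.rand(4, 2, 256, 256)            # batch of Sentinel-1 patches (VV, VH)
logits = unet(x)                          # (4, 1, 256, 256) water logits
loss = torch.nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))
print(logits.shape, loss.item())
```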
Sentinel-2 is the European flagship Earth Observation optical satellite mission for remote sensing over land. Developed by the European Space Agency (ESA), Sentinel-2 aims at providing systematic global acquisitions of high-resolution optical data for applications such as vegetation monitoring, land use, emergency management and security, water quality and climate change. Such an operational mission needs efficient and accurate data processing algorithms to extract the final mission products, i.e. surface bio-/geo-physical parameters. One of the most critical data processing steps is the so-called atmospheric correction. This correction aims at compensating the atmospheric scattering and absorption effects in the measured Top-Of-Atmosphere (TOA) radiance and inverting the surface reflectance. ESA developed and maintains the Sen2Cor processor, a collection of physically-based algorithms tailored to processing Sentinel-2 TOA radiance that retrieves atmospheric properties (water vapour and aerosols) and inverts surface reflectance. Sen2Cor's atmospheric correction relies on the use of libRadtran, a state-of-the-art Radiative Transfer Model (RTM) that accurately models the processes of scattering and absorption of electromagnetic radiation through the Earth's atmosphere. Since the computational cost of libRadtran makes it impractical for routine applications, Sen2Cor overcomes this limitation by interpolating a set of look-up tables (LUT) of precomputed libRadtran simulations, resampled to the 13 Sentinel-2 spectral channels. However, over a million simulations are still needed to achieve sufficient accuracy, with the consequent impact on data storage and computation time for LUT generation.
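Conceptually, LUT interpolation amounts to evaluating precomputed RTM outputs on a regular parameter grid and interpolating between grid nodes at run time; the sketch below illustrates this with SciPy and a stand-in function in place of libRadtran, and is not Sen2Cor code.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Conceptual LUT-interpolation sketch; grid axes, ranges and the "RTM" are
# placeholders, not the Sen2Cor internals.
aot = np.linspace(0.05, 0.6, 6)      # aerosol optical thickness
wv = np.linspace(0.5, 4.0, 8)        # water vapour [g/cm^2]
sza = np.linspace(10.0, 70.0, 7)     # solar zenith angle [deg]

def fake_rtm(a, w, s):               # stand-in for an actual libRadtran run
    return 0.1 + 0.3 * a + 0.02 * w + 0.001 * s

grid = fake_rtm(*np.meshgrid(aot, wv, sza, indexing="ij"))
lut = RegularGridInterpolator((aot, wv, sza), grid)

print(lut([[0.2, 2.1, 35.0]]))       # interpolated TOA quantity for one pixel
```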
In recent years, the emulation of RTMs has been proposed as an accurate and fast alternative to LUT interpolation. An emulator is a statistical (machine) learning model that approximates the original deterministic model at a fraction of its running time, thus serving, in practice, the same purpose as LUT interpolation. In this work, we aim at performing an exhaustive validation of the emulation method applied to the atmospheric correction of Sentinel-2 data. We used Gaussian Process regression as the core of our emulators and principal component analysis to reduce the dimensionality of the RTM spectral data. Our spectrally-resolved emulator was trained with as little as 1000 libRadtran simulations. The emulator method was validated in three test scenarios: (1) using a simulated dataset of libRadtran simulations, (2) against RadCalNet field measurements, and (3) against Sen2Cor for the atmospheric correction of Sentinel-2. In all the test scenarios, the surface reflectance was inverted with average relative errors below 2% (absolute errors below 0.01) over the entire spectral range, showing good agreement with Sen2Cor results. Our validation results indicate that emulators can be used in the operational atmospheric correction of Sentinel-2 multi-spectral data, offer improvements to the current Sen2Cor processor and find wide application in other sensors with similar characteristics. Indeed, with only a small training dataset being required, emulators can be used to add new aerosol models to the Sen2Cor processor. In addition, working with spectrally-resolved emulated data would allow us to better model instrumental effects such as smile. These improvements would be impractical with precomputed LUTs due to the large number of simulations needed.
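A minimal sketch of the emulation approach, assuming scikit-learn and synthetic stand-in spectra instead of actual libRadtran output: the spectra are compressed with PCA and one Gaussian Process is fitted per principal component.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hedged sketch only; the synthetic "simulations" stand in for libRadtran runs
# and the kernel / component choices are assumptions.
rng = np.random.default_rng(1)
X = rng.uniform(size=(1000, 4))                    # atmospheric/geometric inputs
wl = np.linspace(400, 2400, 500)                   # wavelength grid [nm]
Y = (X[:, :1] * np.sin(wl / 300.0) + X[:, 1:2] * np.cos(wl / 500.0)
     + 0.1 * X[:, 2:3] + 0.05 * X[:, 3:4])         # synthetic spectra

pca = PCA(n_components=10).fit(Y)
scores = pca.transform(Y)                          # dimensionality reduction

kernel = ConstantKernel() * RBF(length_scale=np.ones(X.shape[1]))
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, scores[:, i])
       for i in range(scores.shape[1])]

def emulate(x_new):
    """Predict a full spectrum for new input parameters."""
    s = np.array([gp.predict(x_new) for gp in gps]).T
    return pca.inverse_transform(s)

print(emulate(rng.uniform(size=(1, 4))).shape)     # (1, 500) emulated spectrum
```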
In this presentation, we will give an insight into the implemented emulation methodology and show our validation test results with Sentinel-2 data. With this, we expect to inform the remote sensing community about the current advances in machine learning emulation for the operational atmospheric correction of satellite data, as well as to promote discussion within the machine learning community to further improve these statistical regression models. Moreover, we envisage that emulators can offer practical solutions for atmospheric correction to address the challenges of ESA's future CHIME and FLEX hyperspectral missions.
Deadwood, both standing and fallen, is an important component of the biodiversity of boreal forests, as it offers a home for several endangered species (such as fungi, mosses, insects and birds). According to the State of Europe's Forests 2020 report, Finland ranks at the bottom among European countries in the amount of both standing and fallen deadwood (m³/ha), with only 6 m³/ha of deadwood on average. There are, however, large differences between forest types, as non-managed old-growth forests have several times more decaying wood than managed forests. There is a severe lack of stand-level deadwood data in Finland, as the Finnish national forest inventory focuses on large-scale estimates, and in the forest inventories aiming at operative forest data deadwood is not measured at all. As the amount of deadwood (t/ha) is proposed as one of the mandatory forest ecosystem condition indicators in the Eurostat legal proposal and in the national biodiversity strategy, there is an increasing need for accurate stand-level deadwood data.
Compared to most other forest variables, estimating the amount of deadwood is far more challenging, as the generation of deadwood in the forest is a stochastic process that is difficult to model. Building accurate models for deadwood estimation is especially difficult for managed forests, as harvesting affects how much deadwood is generated. Because of these factors, reliable estimates of the amount of deadwood require many more field observations than estimates for the growing trees. Right now, the only way to get accurate estimates of deadwood is direct measurement in the field, which is both time-consuming and expensive. Developing new and improved field data collection methods is therefore required.
In the recent decade, computer vision methods have advanced rapidly, and they can be used to automatically and accurately detect and classify individual trees from high-quality Unmanned Aerial Vehicle (UAV) imagery. This makes it possible to better utilize UAVs for field data collection, as UAV data are spatially continuous, already georeferenced and cover larger areas than traditional field work. UAVs are also the only method for remotely mapping small objects such as deadwood, as even the most spatially accurate commercial satellites provide a 30 cm ground sampling distance, compared to the less than 5 cm that is easily achievable with UAVs. It is worth noting, though, that the spatial coverage of UAVs is not feasible for operational, large-scale mapping, and that the information that can be extracted from aerial imagery is limited to what can be seen from above, as much of the forest floor is obscured by the canopy. Nevertheless, even with these shortcomings, we consider efficient usage of UAVs to be valuable for field data collection, especially when the variables of interest are, for instance, distributions of different tree species and deadwood.
Our first study area is in Hiidenportti, eastern Finland, where we have collected 10 km² of UAV data with around 4 cm ground sampling distance, as well as extensive and accurately located field data for standing and downed deadwood. Our other study area is in Evo, southern Finland, for which we have several RGB UAV images with ground sampling distances varying from 1.3 to 5 cm. The total area covered by the Evo data is around 20 km². In Evo, our field data consist of field plots with plot-level deadwood metrics among the collected features. Both study areas contain managed forests as well as conservation areas, offering a representative sample of different Finnish forest types.
In this study, we apply a state-of-the-art instance segmentation method, Mask R-CNN, to detect both standing and fallen deadwood from RGB UAV imagery. Using only the field plot data is not sufficient for our methods, as training deep learning models requires large amounts of training data. Instead, we utilize expert-annotated virtual plots to train our models. We extract 90 x 90 meter square patches centred around the field plot locations, and all standing and fallen deadwood present in these plots is manually annotated. In the case of overlapping virtual plots, we extract a rectangular area that contains all of these plots. These data are then mosaicked into smaller images and used to train the object detection models. We use only the data from Hiidenportti to train our models and use the data from Evo to evaluate how the methods work outside of the geographical training location.
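For illustration, a Mask R-CNN for such a task could be set up with torchvision roughly as follows; this is the standard torchvision fine-tuning recipe, not necessarily the configuration used in this study, and the class count (background, standing deadwood, fallen deadwood) and hidden-layer size are assumptions.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Sketch: adapt a COCO-pretrained Mask R-CNN to the deadwood classes
# (torchvision >= 0.13 weight API assumed).
num_classes = 3
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so they predict the deadwood classes.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)

# Training step (images: list of CxHxW tensors; targets: boxes/labels/masks):
#   losses = model(images, targets); sum(losses.values()).backward()
```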
We compare our results with both the expert-annotated virtual plots as well as with accurate field-measured plot level data. We evaluate our models with the common object detection metrics, such as Average Precision and Average Recall. We also compare the results with different plot-level metrics, such as the total number of deadwood instances and the total length of downed deadwood, and estimate how much of the deadwood present in the field can be detected from aerial UAV imagery and what factors (such as canopy cover, forest type and deadwood dimensions and decaying rate) affect the detections. According to our preliminary results, the models are able to correctly detect around 68% of the annotated groundwood instances, and there are several cases where the model detects instances the experts have missed.
Nowadays, modern Earth observation systems continuously collect massive amounts of satellite information that can be referred to as Earth Observation (EO) data.
A notable example is represented by the Sentinel-2 mission of the Copernicus programme, supplying optical information with a revisit period of between 5 and 10 days thanks to a constellation of two twin satellites. Thanks to the short revisit period of these satellites, the acquired images can be organized into Satellite Image Time Series (SITS), which are a practical tool to monitor a particular spatial area through time. SITS data can support a wide range of application domains such as ecology, agriculture, mobility, health, risk assessment, land management planning, and forest and natural habitat monitoring and, for this reason, constitute a valuable source of information to follow the dynamics of the Earth's surface. The huge amount of regularly acquired SITS data opens new challenges in the field of remote sensing in relation to how knowledge can be effectively extracted and how the spatio-temporal interplay can be exploited to get the most out of such a rich information source.
One of the main tasks in SITS data analysis is land cover mapping, where a predictive model is learnt to link the satellite data (i.e., SITS) to the associated land cover classes. SITS data capture the temporal dynamics exhibited by land cover classes, thus supporting a more effective discrimination among them.
Despite the increasing need to provide large-scale (i.e., regional or national) land cover maps, the amount of labeled information collected to train such models is still limited, sparse (annotated polygons are scattered all over the study site) and, most of the time, at a coarser scale than pixel precision. This is because the labeling task is generally labour-intensive and time-consuming if a sufficient number of samples is to be covered relative to the extent of the study site.
Object-Based Image Analysis (OBIA) refers to a category of digital remote sensing image analysis approaches that study geographic entities or phenomena by delineating and analyzing image objects rather than individual pixels. In supervised Land Use / Land Cover (LULC) classification, the use of OBIA approaches is motivated by the fact that, in modern remote sensing imagery, most common land cover classes present a heterogeneous radiometric composition, and classical pixel-based approaches typically fail to capture such complexity. This effect is even more pronounced when the complexity is also exhibited in the temporal dimension, as is the case for SITS data.
To address this issue, the main idea in the OBIA framework is to group adjacent pixels together prior to the classification process, and subsequently work on the resulting object layer, in which segments correspond to more representative samples of such complex LULC classes (e.g. "land units"). This is typically achieved by tuning the segmentation algorithms to provide object layers at an appropriate spatial scale, at which objects are generally not radiometrically homogeneous, especially for the most complex LULC classes. As a matter of fact, most of the common segmentation techniques used in remote sensing allow the spatial scale to be parametrized, e.g. by using a heterogeneity threshold, by defining a bandwidth parameter specifically for the spatial domain as in Mean-Shift or, more recently, by specifying the number of required objects as in SLIC.
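As a small illustration of this pre-step (with placeholder data, not the study's actual segmentation setup), SLIC can be used to produce an object layer whose spatial scale is controlled by the number of segments, from which per-object radiometric statistics can be derived:

```python
import numpy as np
from skimage.segmentation import slic

# Hedged OBIA pre-step sketch: SLIC superpixels on a placeholder 4-band image
# (channel_axis assumes scikit-image >= 0.19), then per-object mean reflectance.
rng = np.random.default_rng(0)
image = rng.random((200, 200, 4))                      # placeholder 4-band image

segments = slic(image, n_segments=300, compactness=10, channel_axis=-1)

object_means = np.stack(
    [image[segments == s].mean(axis=0) for s in np.unique(segments)]
)
print(len(np.unique(segments)), object_means.shape)    # number of objects, objects x bands
```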
Based on these assumptions, the typical approach in the OBIA framework for automatic LULC mapping is to leverage aggregate descriptors (i.e. object-based radiometric statistics) to build proper samples for training and classification, without explicitly managing within-object information diversity. Consider, for instance, a single segment derived from an urban scene: it typically contains, simultaneously, sets of pixels associated with buildings, streets, gardens and so on, which are all treated as equally important in the recognition of the Urban LULC class. However, in many cases the components of a single segment do not contribute equally to its identification as belonging to a certain land cover class.
In this abstract, we propose TASSEL, a new deep learning framework for object-based SITS land cover mapping which can be ascribed to the weakly supervised learning (WSL) setting. We place our contribution within WSL because the label information of the object-based land cover classification task intrinsically carries a certain degree of approximation and inaccurate supervision for training the corresponding learning model, related to the presence of non-discriminative SITS components within a single labelled object.
The architecture of our framework is depicted in the first image associated with this abstract: firstly, the different components that constitute the object are identified. Secondly, a CNN block extracts information from each object component. Then, the outputs of the CNN blocks are combined via attention. Finally, the classification is performed via dedicated fully connected layers. The outputs of the process are the prediction for the input object SITS as well as the extra information alpha, which describes the contribution of each object component.
Our framework includes several stages: firstly, it identifies the different multifaceted components on which an object is defined. Secondly, a Convolutional Neural Network (CNN) extracts an internal representation from each of the different object components. Here, the CNN is especially tailored to model the temporal behavior exhibited by the object component.
Then, the per-component representations are aggregated and used to make the decision about the land cover class of the object. Beyond pure model performance, our framework also allows us to go a step further in the analysis by providing extra information related to the contribution of each component to the final decision. Such extra information can easily be visualized to provide additional feedback to the end user, supporting the spatial interpretability associated with the model prediction.
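The following minimal PyTorch sketch illustrates the kind of attention-based aggregation described above; it is an illustrative reconstruction under our own assumptions, not the authors' TASSEL implementation, and all dimensions (bands, time steps, components, classes) are placeholders.

```python
import torch
import torch.nn as nn

# Sketch: each object component is a pixel time series encoded by a shared
# temporal CNN; an attention layer weights the component representations
# before a fully connected classifier. Returned alpha mimics the per-component
# contribution described in the text.

class ComponentAttentionNet(nn.Module):
    def __init__(self, n_bands=10, n_classes=8, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(                 # temporal CNN per component
            nn.Conv1d(n_bands, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                             # x: (objects, components, bands, time)
        b, c, d, t = x.shape
        h = self.encoder(x.view(b * c, d, t)).view(b, c, -1)
        alpha = torch.softmax(self.attn(h), dim=1)    # per-component contribution
        obj = (alpha * h).sum(dim=1)                  # attention-weighted aggregation
        return self.classifier(obj), alpha.squeeze(-1)

model = ComponentAttentionNet()
logits, alpha = model(torch.rand(4, 6, 10, 20))       # 4 objects, 6 components each
print(logits.shape, alpha.shape)                      # (4, 8), (4, 6)
```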
In order to assess the quality of TASSEL, we have performed an extensive evaluation on two real-world scenarios over large areas with contrasting land cover features and characterized by sparsely annotated ground truth data. The evaluation is conducted against state-of-the-art land cover mapping approaches for sparsely annotated data in the OBIA framework. Our framework gains around 2 points of F-measure, on average, with respect to the best competing approaches, demonstrating the added value of explicitly managing intra-object heterogeneity.
Finally, we perform a qualitative analysis to underline the ability of our framework to provide extra information that can be effectively leveraged to support the comprehension of the classification decision. The second image of the associated image file shows an example where the extra information supplied by TASSEL is used to interpret the final decision. The yellow lines represent object contours. The example refers to the Annual Crops land cover class. The legend on the right reports the scale (discretized by quantiles) associated with the attention map. Here, we can note that TASSEL assigns more attention (dark blue) to the portion of the object directly related to the Annual Crops land cover class, while lower attention (light blue) is assigned to the shea trees, which are not representative of the Annual Crops class.
To summarize, the main contributions of our work are as follows:
i) We propose a new deep learning framework for object-based SITS classification devoted to managing the within-object information diversity exhibited in the context of land cover mapping; ii) We design our framework to provide as outcomes not only the model decision but also extra information that offers insights into (spatial) model interpretability; and iii) We conduct an extensive evaluation of our framework, considering both quantitative and qualitative analyses, on real-world benchmarks involving ground truth data collected during field campaigns and subject to operational constraints.
Since the 1990s, the melting of Earth’s Polar ice sheets has contributed approximately one-third of global sea level rise. As Earth’s climate warms, this contribution is expected to increase further, leading to the potential for social and economic disruption on a global scale. If we are to begin mitigating these impacts, it is essential that we better understand how Earth’s ice sheets evolve over time.
Currently, our understanding of ice sheet change is largely informed by satellite observations, with the longest continuous record coming from the technique of satellite altimetry. These instruments provide high-resolution measurements of ice sheet surface elevation through time, allowing for estimates of ice sheet volume change and mass balance to be derived. Satellite radar altimeters work by transmitting a microwave pulse towards Earth’s surface and listening to the returned echo, which is recorded in the form of discrete waveforms that encode information about both the ice sheet surface topography and its electromagnetic scattering characteristics. Current methods for converting these waveforms into elevation measurements typically rely on a range of assumptions that are designed to reduce the dimensionality and complexity of the data. As a result, subtle, yet important, information can be lost.
A potential alternative approach for information extraction comes in the application of deep learning algorithms, which have seen enormous success in diverse fields such as oceanography and radar imaging. Such approaches allow for the development of singular, data-driven methodologies that can bypass the many, successive, human-engineered steps in current processing workflows. Despite this, deep learning has yet to see application in the context of ice sheet altimetry. Here, we are therefore interested in exploring the potential of deep learning to extract deep and subtle information directly from the raw altimeter waveforms themselves, in order to drive new understanding of the contribution of polar ice sheets to global sea level rise. In this presentation we will provide first results from our preliminary analysis, together with a roadmap for the planned activities ahead.
Essential for forest management is the availability of a complete and up-to-date forest inventory. Typically, forest inventories store information about forest stands: roughly uniform areas within the forest that are managed as a single unit. One of the most important parameters of a forest stand is the volumetric tree species distribution. Within Norway there are three main tree species used for production: Norway spruce, Scots pine and birch. Currently, the determination of the tree species distribution per stand is done manually. The inspection is performed by a forestry expert, mostly by visual interpretation of aerial imagery and in some cases lidar data. Tree species mapping is therefore expensive, error-prone and time-consuming; as a result, forest inventories are often incomplete and/or outdated.
Deep learning (DL) is becoming ubiquitous in state-of-the-art land cover classification. Previous approaches to tree species detection in Norway used classic machine learning methods, were evaluated on small areas, and did not consider label noise and limited data. S&T is already exploiting CNNs for the segmentation of aerial imagery to derive tree species; however, there are several challenges.
First of all, aerial imagery in Norway is only available approximately every fifth year. Although aerial imagery provides a very high spatial resolution of around 0.2 m, its spectral and temporal resolution is limited. Sentinel-2 (S2) could complement aerial imagery by providing higher spectral and temporal resolution. In particular, birch stands could potentially be distinguished by tracking spectral change throughout the year.
Another major challenge is the availability and quality of reference data. Although data are available for different municipalities across the country, there are large areas without labeled data; furthermore, existing labels are imperfect and contain some degree of noise. The limited quantity and quality of reference data is a general challenge when working with earth observation and deep learning.
Noise-robust and semi-supervised training schemes could address the limited quality and quantity of reference data. Recent developments in semi-supervised learning in other fields, such as image classification and natural language processing, show very promising results. However, the usefulness of these approaches has not yet been fully explored in earth observation.
This project builds upon previous efforts and tries to address the challenges described above. The main objective is to improve automated tree species classification from remotely sensed data over Norwegian production forests by exploiting advanced DL techniques. Secondary objectives are: 1) exploiting S2 for improved birch detection 2) investigate noise detection and noise robust techniques for handling limited quality reference labels 3) investigate semi-supervised techniques for handling limited quantity reference labels.
The main approach will be to train various relatively standard CNN baseline models and compare different improved models to these baselines in order to evaluate the impact of different techniques. The study focuses on three main topics:
1) Sentinel-2: The incorporation of S2 as a data source in addition to aerial imagery. This will be done by fusing S2 and aerial imagery and training a model on the combined dataset. Fusing will be done either by resampling to the same grid or designing a custom CNN where S2 data enters the network at a deeper stage after several pooling layers.
2) Noise detection and noise robust training: Multiple models will be trained with different amounts of artificial noise added to the training data using both a standard and noise robust training scheme. By comparing the standard training scheme with the noise robust scheme the effectiveness of noise robust training can be evaluated. In addition, area under the margin (AUM) ranking will be used to identify mislabeled data.
3) Semi-supervised: Multiple models will be trained on training sets of reduced size, e.g. reduced by 20%, 40%, 60%, etc. Secondly, the unused training data will be added as unlabeled samples to the training scheme, recovering some of the accuracy loss originating from the reduced amount of labels. In this way the effectiveness of the semi-supervised approach can be evaluated. One particular semi-supervised approach that will be evaluated is the consistency loss (a minimal sketch is given after this list).
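A minimal sketch of such a consistency-loss training step, under our own simplifying assumptions (a toy classifier, flip augmentation, MSE between softmax outputs) rather than the project's final design:

```python
import torch
import torch.nn.functional as F

# Hedged sketch: labelled patches contribute a cross-entropy term, unlabelled
# patches contribute a term penalising disagreement between predictions under
# two random augmentations of the same patch.

def training_step(model, x_lab, y_lab, x_unlab, augment, lam=1.0):
    sup = F.cross_entropy(model(x_lab), y_lab)
    p1 = F.softmax(model(augment(x_unlab)), dim=1)
    p2 = F.softmax(model(augment(x_unlab)), dim=1)
    consistency = F.mse_loss(p1, p2)
    return sup + lam * consistency

# Example usage with a toy classifier and horizontal-flip augmentation.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 4))
augment = lambda x: torch.flip(x, dims=[-1]) if torch.rand(1) < 0.5 else x
loss = training_step(model, torch.rand(8, 3, 32, 32), torch.randint(0, 4, (8,)),
                     torch.rand(16, 3, 32, 32), augment)
print(loss.item())
```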
The direct impact of this study will be improved tree species detection over Norway. However, more importantly, the study aims to contribute to the more general challenges of dealing with limited-quantity and limited-quality reference data in DL for earth observation.
The final results of the project will be published in a peer reviewed scientific journal. The project kicked off in October 2021 and it will last one year.
Due to the size of its acreage and its importance in the production of food, feed and raw materials, agricultural land is an appropriate target for RS applications. Additionally, agricultural production is affected by a variety of spatially and temporally varying environmental factors (e.g., diseases and water content) that must be managed to ensure a stable, renewable production of high-quality food, raw materials and bioenergy. Environmental changes and increasingly frequent extreme weather events are also putting a strain on production conditions. Therefore, the application-oriented provision of information is a key prerequisite for a flexible and fast reaction of farmers to changing environmental conditions.
Against this background, technologies are being adapted and developed that enable the rapid identification and classification of objects and phenomena. In agriculture, this often involves identifying agricultural crops and their growth development in order to plan and effectively implement suitable agronomic measures.
For this purpose, a processing chain was developed whose core routine for analyzing multitemporal Sentinel-2 data is based on machine learning methods (Random Forest, XGBoost, Neural Network, SVM). As a validation basis for developing our method, the land parcel shape data of the land survey and geo-spatial information office of the federal state of Brandenburg were used as ground truth. These data are based on farmers' declarations in agricultural subsidy applications (Common Agricultural Policy, CAP, of the European Union) for the agricultural areas of 2018. The remote sensing dataset amounted to 343 scenes plus their metadata and covered the whole federal state of Brandenburg.
The results of our investigations can be summarized as follows:
1. The testing methodology has shown that dividing the study area into training areas and test areas is a solid way to validate the model. Simple training on the entire data set is insufficient to build a model that can classify crops in new regions of the federal state of Brandenburg.
2. Natural influencing factors such as phenological growth stages, regional environmental conditions, the number of cloud-free observations collected in each region, and the complex spectral variety in each region make it challenging to train a model that generalises well from the training data.
3. Furthermore, the test methodology provides a framework not only for models such as Random Forest, XGBoost, Neural Networks and SVM, but also for any other classification system.
The core of our results is an integrated testing methodology that validates the generalizability of trained machine learning models and provides conclusions about how well crops can be identified in previously unseen regions.
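The essence of this testing methodology can be illustrated with a region-grouped cross-validation, here sketched with scikit-learn on placeholder data; the feature set, class count and grouping are assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

# Hedged sketch: each fold holds out entire regions, so the score reflects how
# well the crop classifier generalises to areas it has never seen in training.
rng = np.random.default_rng(0)
X = rng.random((5000, 40))                  # multitemporal Sentinel-2 features
y = rng.integers(0, 8, 5000)                # crop classes from CAP declarations
regions = rng.integers(0, 10, 5000)         # region label of each sample

scores = cross_val_score(RandomForestClassifier(n_estimators=200),
                         X, y, groups=regions, cv=GroupKFold(n_splits=5))
print(scores.mean(), scores.std())
```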
Classical machine learning algorithms, such as Random Forests or Support Vector Machines, are commonly used for Land Use and Land Cover (LULC) classification tasks. Land cover indicates the type of surface, such as forest, agriculture or urban, whereas land use indicates how people are using the land. Land cover can be determined from the reflectance properties of the surface. This information is commonly extracted from aerial or satellite imagery whose pixel values represent the solar energy reflected by the Earth's surface in different spectral bands. Spectral data at the pixel level alone, however, cannot provide information about land use: an image patch has to be considered in its entirety to infer its use, and often additional information is required to disambiguate among all the possible uses of a piece of land. The purpose of this work was to study the accuracy of Convolutional Neural Networks (CNNs) in learning the spatial and spectral characteristics of image patches of the Earth's surface, extracted from Sentinel-2 satellite images, for LULC classification tasks.

A Convolutional Neural Network that can learn to distinguish different types of land cover, where geometries and reflectance properties can be mixed in many different ways, requires an architecture with many layers to achieve good accuracy. Such architectures are expensive to train from scratch, both in terms of the amount of labeled data needed for training and in terms of time and computing resources. It is nowadays normal practice in computer vision to reuse a model that has been pretrained on a different but large set of examples, such as ImageNet, and fine-tune this pretrained model with data that is specific to the task at hand. Fine-tuning is a transfer learning technique in which the parameters of a pretrained neural network architecture are updated using the new data. In this work we have used the ResNet50 architecture, pretrained on the ImageNet dataset and fine-tuned with the EuroSAT dataset, a set of 27000 patch images extracted from Sentinel-2 images, containing 13 spectral bands from the visible to the shortwave infrared, with 10 m spatial resolution, divided into 10 classes. In order to further improve the classification accuracy, we have used a data augmentation technique to create additional images from the original EuroSAT dataset by applying different transformations such as flipping, rotation and brightness modification.

Finally, we have analyzed the accuracy of the fine-tuned CNN in detecting changes in patch images that were not included in the EuroSAT dataset; a change in a patch image is represented by a change in the probability values for each class. Since the ImageNet pretraining uses images with only the three RGB bands, the other bands available in the Sentinel-2 MSI products and in the EuroSAT images are not used. In order to investigate the accuracy that can be achieved using the additional bands available in the EuroSAT dataset, we have trained a smaller CNN architecture from scratch using only the EuroSAT dataset and compared the results with those from the ResNet50 architecture pretrained with the ImageNet dataset.
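A condensed sketch of the fine-tuning setup described above, assuming PyTorch/torchvision and an EuroSAT RGB image-folder dataset; the hyper-parameters and the toy batch are placeholders, not the configuration actually used in this work.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentations of the kind described (flipping, rotation, brightness); these
# would be applied in the training DataLoader (not shown here).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(90),
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])

# ImageNet-pretrained ResNet50 with the classifier replaced for 10 EuroSAT classes
# (torchvision >= 0.13 weight API assumed).
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_batch(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

print(train_batch(torch.rand(8, 3, 64, 64), torch.randint(0, 10, (8,))))
```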
Preservation of historic monuments and archaeological sites has a strategic importance for maintaining local cultural identity, encouraging a sustainable exploitation of cultural properties and creating new social opportunities. Cultural heritage sites are often exposed to degradation due to natural and anthropogenic impacts.
With its main objective being the transfer of research-based knowledge into operational environments, AIRFARE, a nationally funded project led by GMV Romania, intends to implement, test and promote responsive solutions for the effective resilience of cultural heritage sites against identified risks by exploiting the wide availability and capabilities of Earth Observation data.
In a first iteration with potential users involved in the management of cultural sites in Romania, the users expressed most interest in change detection capabilities to prevent illegal dumping of waste, illegal building, and changes of land use/land cover within the boundaries of large heritage sites (such as old fortresses), which often contain privately owned properties with a special construction regime. A monitoring service that provides warnings in a timely manner to support intervention should be able to ensure at least monthly updates of information. While the temporal resolution of Sentinel-2 data can easily respond to user needs in terms of frequency, the spatial resolution of 10 m provides limited capabilities for detecting changes that can be indicators of illegal activities at detailed scales: the occurrence of new roads, new buildings, non-compliant waste sites in public areas and changes of land cover or land use within private properties. While very high resolution imagery would cover the needs in terms of spatial resolution, the cost of frequent acquisitions is prohibitive and would substantially reduce the economic benefits of the proposed solution.
In order to meet user requirements for spatial and temporal resolution, we employed a Super-Resolution Generative Adversarial Network (SR-GAN) inspired algorithm trained on SPOT-6 data to upscale and enhance Sentinel-2 imagery. The particularity of the model that we selected is that the loss function calculation is based on VGG network feature maps, which leads to a decreased sensitivity of the model to changes in pixel space.
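To make the loss formulation concrete, the sketch below shows the general idea of a perceptual (VGG feature-map) content loss as used in SR-GAN-style training; it is a minimal stand-in, not the project's implementation, and assumes torchvision's pretrained VGG19 with placeholder tensors in place of real Sentinel-2 and SPOT-6 patches.

import torch
import torch.nn as nn
from torchvision import models

class VGGFeatureLoss(nn.Module):
    """Content loss computed on VGG feature maps rather than raw pixels."""
    def __init__(self, layer_index=35):
        super().__init__()
        vgg = models.vgg19(weights="IMAGENET1K_V1").features[:layer_index]
        for p in vgg.parameters():
            p.requires_grad = False  # the loss network stays frozen
        self.vgg = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, sr_image, hr_image):
        # In practice both inputs would be normalized with ImageNet statistics
        return self.mse(self.vgg(sr_image), self.vgg(hr_image))

# Usage with placeholder 3-band tensors standing in for generator output and reference patch
loss_fn = VGGFeatureLoss()
sr = torch.rand(1, 3, 96, 96)   # super-resolved Sentinel-2 patch (placeholder)
hr = torch.rand(1, 3, 96, 96)   # high-resolution reference patch (placeholder)
content_loss = loss_fn(sr, hr)

Because the comparison happens in feature space, small pixel-level shifts contribute less to the loss, which is the reduced pixel-space sensitivity mentioned above.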
As an initial approach, we used very high resolution SPOT imagery acquired over five cultural sites in Romania during each season of a year. Sentinel-2 data that was used for the initial training of the model was acquired in the same period as the SPOT images, in an attempt to reduce potential inconsistencies caused by changes in seasons between corresponding training datasets. The first results of the approach produced a year-long stack of synthetic images with a spatial resolution of 2.5 m, therefore upscaling the resolution of the Sentinel-2 imagery by four times. In order to improve the performance of our model, we intend to extend our training dataset in the future, the next step being implementation of a monitoring and risk prevention system based on automated change detection from synthetic imagery stacks.
Our project activities will rely on the Copernicus Earth Observation programme to support public authorities and private sectors involved in cultural heritage management by offering satellite-derived information in a timely and easily accessible manner. Although at an early stage, the work conducted so far demonstrates once again the operational and potential commercial value of Earth Observation data combined with AI techniques as a viable solution delivering user-driven products and services that meet the day-to-day needs arising in land management application sectors.
This work was supported by a grant of the Romanian Ministry of Education and Research, CCCDI – UEFISCDI, project number PN-III-P2-2.1-PTE-2019-0579, within PNCDI III (AIRFARE project).
Tropical Dry Forest Change Detection Using Sentinel Images and Deep Learning
Tropical dry forests (TDF) cover approximately 40% of the globally available tropical forest stock and play an essential role in controlling the interannual variability of the global carbon cycle, maintaining the water cycle, reducing erosion and providing economic and societal benefits. Therefore, there is a strong need to persistently monitor changes in TDF to support sustainable land management and law enforcement activities to reduce illegal degradation. Satellite-based monitoring systems are the primary tools for providing information on newly deforested areas in vast and inaccessible forests. Recently, temporally dense combinations of optical and SAR images have been used to counter the near-constant cloud cover in tropical regions and improve the early detection of deforestation events.
However, existing approaches and operational systems for satellite-based near real-time forest disturbance detection and monitoring, such as the GLAD alerts (Hansen et al. 2016) and RADD alerts (Reiche et al. 2021), have mainly been used over tropical humid forests (THF), and their efficacy over TDF is largely undetermined because of the seasonal nature of TDF. Therefore, expanding this mapping capability from THF to TDF is of paramount importance. Combining optical and SAR datasets requires different methods for accurate inference, as the observables differ due to the image acquisition modalities, i.e. optical and SAR images observe different aspects of forest structure. In addition, utilizing optical and SAR images for TDF mapping requires robust seasonality mitigation to avoid false detections.
We will demonstrate a robust and accurate deep learning (DL) approach to map TDF changes from Sentinel-1 SAR and Sentinel-2 optical images. The designed DL approach utilizes a two-step weakly supervised learning framework. In the first step, it uses pixels where the Hansen annual forest change and GLAD alerts agree as an initial reference of high-confidence alerts. We then apply a hard positive mining strategy by searching for the earliest low-confidence alerts at those same locations, which are used to generate the labels to train our DL model. In the second step, the framework uses a Neural Network (NN) architecture with a self-attention mechanism to accurately infer TDF changes. This NN framework focuses on certain parts of the input sequences of images to allow for more flexible interactions between the different time steps in the image stack. The output from this framework will be compared with the output from standard recurrent neural networks such as the long short-term memory (LSTM) recurrent NN.
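The following PyTorch sketch illustrates what a lightweight self-attention classifier over per-pixel image time series could look like; the layer sizes, number of input features and the mean pooling over time are illustrative assumptions, not the architecture used in this study.

import torch
import torch.nn as nn

class AttentionChangeClassifier(nn.Module):
    """Per-pixel time-series classifier using self-attention across time steps."""
    def __init__(self, n_bands, n_classes=2, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (batch, time, bands)
        h = self.encoder(self.embed(x))  # attention lets every time step attend to all others
        return self.head(h.mean(dim=1))  # pool over time, then classify change / no change

# Example: a batch of pixel time series with 12 time steps and 6 Sentinel-1/2 features
model = AttentionChangeClassifier(n_bands=6)
logits = model(torch.rand(8, 12, 6))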
Hansen, M.C., Krylov, A., Tyukavina, A., Potapov, P.V., Turubanova, S., Zutta, B., Ifo, S., Margono, B., Stolle, F., Moore, R., 2016. Humid tropical forest disturbance alerts using Landsat data. Environ. Res. Lett. 11, 34008.
Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N-E., Odongo-Braun, C., Vollrath, A., Weisse, M. J., Stolle, F., Pickens, A., Donchyts, G., Clinton, N., Gorelick, N., Herold, M. (2021) Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters 16, 2, 024005. https://doi.org/10.1088/1748-9326/abd0a8.
Arctic regions are one of the most rapidly changing environments on earth. Especially Arctic coastlines are very sensitive to climate change. Coastal damages can affect communities and wildlife in those areas and increased erosion leads to higher engineering and relocation costs for coastal villages. In addition, erosion releases significant amounts of carbon, which can cause a feedback loop that accelerates climate change and coastal erosion even further. As such, a detailed examination of coastal ecosystems, including shoreline types and backshore land cover, is necessary.
High spatial resolution datasets are required in order to represent the various types of coastlines and to provide a baseline dataset of the coastline for future coastal erosion studies. Sentinel-2 data offer good spatial and temporal resolution and may enable the monitoring of large areas of the Arctic. However, some relevant classes have similar spectral characteristics. A combination with Sentinel-1 (C-band SAR) may improve the characterization of some flat coastal types where typical radar issues such as layover or shadow do not occur.
This study compares a Sentinel-1/2 based tundra land cover classification scheme, developed for full pan-Arctic application, with another land cover classification created specifically for mapping Arctic coastal areas (Sentinel-2 only). Both approaches are based on machine learning using a Gradient Boosting Machine. The Arctic Coastal Classification is based on Sentinel-2 data and considers 12 bands with 5 target classes, while the Sentinel-1/2 based tundra land cover classification scheme is based on five Sentinel-2 bands (temporally averaged) and Sentinel-1 data acquired at VV polarization and results in more than 20 classes.
Results show that even the best classification algorithms have limitations in specific coastal settings and sea water conditions. The analysis demonstrates (1) the need for a coastal-specific classification in this context, (2) the need for date-specific mapping combined with the consideration of several acquisitions to capture general coastal dynamics, (3) the potential of detailed Arctic land cover mapping schemes to derive subcategories, and (4) the need to separate settlements and other infrastructure.
Improving the management of agricultural areas and crop production is strictly necessary in the face of global population growth and the current climate emergency. Nowadays, several methodologies at different regional and continental scales exist for monitoring croplands and estimating yield. In all schemes, Earth observation (EO) satellite data offer massive, reliable and up-to-date information for monitoring crops and characterizing their status and health efficiently in near-real-time.
In this work, we explore and focus on the potential of neural networks (NN) for developing interpretable crop yield models. We ingest multi-source and multi-resolution time series of satellite and climatic data to develop the models. We focus on interpretability in a case study over the wider US Corn Belt area. The study area is one of the leading agricultural productivity regions globally due to its massive production of cereals. In particular, we have built models to estimate the yield of corn, soybean and wheat. According to previous studies, the synergy of variables from different sources has proven successful [1,2,3]. As input variables, we selected a variety of remote sensing and climatic products sensitive to crop, atmosphere, and soil conditions (e.g., enhanced vegetation index, temperature, or soil moisture). Neural networks provided excellent results for all crops (R > 0.75), matching other standard regression methods like Gaussian processes and random forests.
Understanding neural networks is of utmost relevance, especially for overparameterized deep neural networks. Interpreting what the models have learned allows us to extract and discover new rules governing crop system dynamics, such as the influence of the input variables, to rank agro-practices, and to study the impact of climate extremes (such as droughts and heatwaves) on production, all in a spatially explicit and temporally resolved manner. In addition, temporal data streams allow us to detect which temporal instants are most critical along the different phenological stages of the crop in terms of productivity. For this purpose, we explore several techniques to shed light on what the trained neural networks learned from EO and crop yield data, such as methods to study the activation of different neurons in the NN and their association with different time instants. These experiments open up new opportunities to understand crop systems and justify the management decisions necessary to enhance agricultural control in a changing climate.
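As a simple, hedged illustration of one common model-agnostic interpretability technique, the sketch below computes permutation importance for a small neural network regressor on synthetic data with scikit-learn; it only demonstrates the general pattern (shuffle one input, measure the drop in skill) and does not reproduce the activation-based analyses or the actual yield data of the study.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in: rows are county-years, columns are per-month EO/climate features
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))          # e.g. 12 months x 2 variables (EVI, temperature)
y = X[:, 5] * 2.0 + X[:, 17] + rng.normal(scale=0.1, size=500)  # toy yield signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
nn_model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)

# Permutation importance: drop in skill when one input (month/variable) is shuffled
result = permutation_importance(nn_model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")

When the columns correspond to time steps, the ranking directly highlights which phenological period contributes most to the predicted yield.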
[1] Mateo-Sanchis, A., Piles, M., Muñoz-Marí, J., Adsuara, J. E., Pérez-Suay, A., Camps-Valls, G. (2019). Synergistic integration of optical and microwave satellite data for crop yield estimation. Remote sensing of environment, 234, 111460.
[2] Martínez-Ferrer, L., Piles, M., Camps-Valls, G. (2020). Crop Yield Estimation and Interpretability With Gaussian Processes. IEEE Geoscience and Remote Sensing Letters.
[3] Mateo-Sanchis, A., Piles, M., Amorós-López, J., Muñoz-Marí, J., Adsuara, J. E., Moreno-Martínez, Á., Camps-Valls, G. (2021). Learning main drivers of crop progress and failure in Europe with interpretable machine learning. International Journal of Applied Earth Observation and Geoinformation, 104, 102574.
Predicting short- and long-term sea-level changes is a critical task with deep implications for both the safety and job-security of a large part of the world's population.
The satellite altimetry data record is now nearly 30 years old, and we may begin to consider employing it in a deep learning (DL) context, which is by definition data-hungry and has remained somewhat unexplored territory until now.
Even though Global Mean Sea Level (GMSL) largely changes linearly with time (3 mm/year), this global average exhibits large geographical variations and covers a suite of regional non-linear signals, changing in both space and time.
Because DL can capture the non-linearity of the system, it offers an intriguing promise.
Furthermore, improving the mapping and understanding of these regional signals will enhance our ability to project sea level changes into the future.
Previously, the use of machine learning techniques in altimetry settings was hampered by the lack of data, while the explainability of DL models has been an issue, as have the computing requirements.
In addition, machine learning models do not generally output uncertainties in their predictions.
Today, though, datasets have approached a suitable size, model explainability can be addressed with permutation importance and SHAP values, computing is cheap, and it is possible to include information on uncertainties as well.
These can be handled by either appropriate loss functions, ensemble techniques or Bayesian methods, which means the time has come to employ 30 years of satellite altimetry data to improve our predictive power in sea-level changes.
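As one concrete example of the "appropriate loss function" route, the sketch below uses scikit-learn's gradient boosting with a quantile (pinball) loss on synthetic data to produce a simple predictive interval; it is purely illustrative and not the project's model or data.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy stand-in for a regional sea-level regression problem
rng = np.random.default_rng(1)
X = rng.uniform(0, 30, size=(1000, 1))            # e.g. years since start of the altimetry record
y = 3.0 * X[:, 0] + 10 * np.sin(X[:, 0]) + rng.normal(scale=5, size=1000)  # synthetic signal in mm

# One model per quantile: the spread between the 5% and 95% predictions gives an uncertainty band
quantile_models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.05, 0.5, 0.95)
}
X_new = np.array([[15.0], [25.0]])
lower, median, upper = (quantile_models[q].predict(X_new) for q in (0.05, 0.5, 0.95))
print(np.c_[lower, median, upper])

Ensembles and Bayesian approaches serve the same purpose: instead of a single point estimate, the model delivers a distribution from which uncertainty can be read.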
The types of dataset will vary according to the problem area: for climate and long term changes, averaged monthly low resolution records will be adequate.
However, for studies of extreme events, like flooding, we need daily or better averages on as high a spatial resolution as possible.
This will increase the amount of data many-fold.
This project will focus on the above problems in both global and regional settings, and we will try to model some past extreme sea level events that caused flooding.
The presentation will highlight our vision for 1) what is the best way to structure the data and make it available for other teams pursuing DL applications, 2) how do we constantly incorporate new data into the model to prevent data drift, 3) what is the best way to ensure predictions contain uncertainties and 4) how do we make the model available for consumption using cloud technologies?
The global availability of Sentinel-2 images makes mapping tree species distribution over large areas easier than ever before, which can be very beneficial for better management of forest resources. Research on methodologies for deriving tree species classifications from Sentinel-2 data is very advanced, including tests and comparisons of various Machine Learning (ML) algorithms (Grabska et al., 2019, Immitzer et al., 2019, Lim et al., 2020, Persson et al., 2018, Thanh Noi and Kappas, 2018, Wessel et al., 2018). On the other hand, implementation of this knowledge into an operational service delivering products to end users such as forest managers and forestry consultant companies remains a major challenge. Through this presentation we aim to share our experience with turning ML modelling into an operational service dedicated to tree species classification.
NextLand is an alliance of Earth Observation (EO) stakeholders who collaborate to offer cutting-edge EO technology by co-designing 15 commercial agriculture and forestry services. The NextLand Forest Classification service targets the ambitious goal of combining ML expertise with geoscience knowledge and cloud service know-how to provide end-to-end solutions to our users. To achieve this objective, several key issues have to be addressed, including algorithm selection, modular pipeline development, close cooperation with users for service fine-tuning, and service integration into a visible marketplace.
A process of ML algorithm selection has already been presented in (Łoś et al., 2021). We compared the performance of XGBoost and the Light Gradient Boosting Machine (LGBM) with the Random Forest, Support Vector Machine and K-Nearest Neighbour algorithms widely used in remote sensing by classifying 8 classes of tree species over a 40 000 km2 area in central Portugal. LGBM was chosen as the most suitable for our needs, taking into account efficacy, measured through F1-score and accuracy, and efficiency, measured through processing time.
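For readers unfamiliar with LGBM, the snippet below shows the generic pattern of training an LGBM classifier and scoring it with accuracy and macro F1, as in the comparison above; the features, labels and hyperparameters are synthetic placeholders, not the NextLand configuration.

import numpy as np
from lightgbm import LGBMClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: per-pixel Sentinel-2 band/time features and 8 tree-species labels
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 40))
y = rng.integers(0, 8, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LGBMClassifier(n_estimators=300, learning_rate=0.05, num_leaves=31)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("macro F1:", f1_score(y_te, pred, average="macro"))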
The processing pipeline for the NextLand Forest Classification Service is built from modules, which makes adaptations and development very convenient. Individual modules contribute to larger tasks such as image pre-processing or data preparation. As we cooperate with users expressing various requirements, the flexibility of adapting the pipeline through selection of the relevant modules is crucial. For example, a user can choose a product generated from an in-house model owned by the service provider, or can provide their own reference data to develop a new model. In the first case, the procedure is to run the pipeline in classification mode. In the second, the pipeline runs first in training mode and then in classification mode. Users can provide reference data as points or as polygons; consequently, the module dedicated to reading reference data must be able to handle both types. When a new model is developed, the user can choose whether the final product representing tree species distribution is generated from the same Sentinel-2 data that were used for model development, or from Sentinel-2 data representing another year, e.g., the most recent one. We found that users often own archival forest inventory data, which are used for model development, while the users are interested in tree species distribution for recent years. This requirement is handled by a module dedicated to satellite data download. Some users prefer products provided as GeoTIFF, while others prefer shapefiles. By default, the developed pipeline provides the tree species classification stored as GeoTIFF, and when requested a module converting raster to vector is included in the pipeline. The examples described above confirm the importance of a modular approach in the development of an operational EO-based service.
As the service is developed for users, close cooperation with them is crucial for developing a successful application. We target users with expertise in forest management, which does not necessarily include ML and EO knowledge. A user has to be informed about the requirements of ML approaches, especially that a model can only be as good as its training data. Forest inventory data are an excellent input for ML models as they have high accuracy. Moreover, as these data are collected by forest owners for various applications, using them in EO-based services does not generate additional data acquisition costs. However, in practice, forest inventory data are rarely shared, for confidentiality, privacy and other reasons (e.g., the economic value of the data). Apart from Finland, to the best of our knowledge, none of the European Union countries provides open access to the national forest inventory. Limited access to high-quality training data is one of the main limiting factors of ML EO-based applications for forestry. It can be mitigated by, e.g., signing an agreement on the usage of data provided by a user. We learnt that close cooperation with a user is also important at the product evaluation stage. Limitations of EO-based services, e.g., regarding spatial resolution, should be clearly stated before product delivery.
Convenient access to a service is another key element of a successful EO-based application. The NextLand Forest Classification service is integrated into Store4EO, the Deimos EO Exploitation Platform solution. This platform hosts service development, integration, deployment, delivery and operation activities. Its design and deployment are driven by the need to come up with services that are easily tailored to real operational conditions, accepted by the users, and become a constituent element of the users' business-as-usual working scheme.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 776280.
References:
Grabska, E., Hostert, P., Pflugmacher, D., & Ostapowicz, K. (2019). Forest stand species mapping using the Sentinel-2 time series. Remote Sensing, 11(10), 1197.
Immitzer, M., Neuwirth, M., Böck, S., Brenner, H., Vuolo, F., & Atzberger, C. (2019). Optimal input features for tree species classification in Central Europe based on multi-temporal Sentinel-2 data. Remote Sensing, 11(22), 2599.
Lim, J., Kim, K. M., Kim, E. H., & Jin, R. (2020). Machine Learning for Tree Species Classification Using Sentinel-2 Spectral Information, Crown Texture, and Environmental Variables. Remote Sensing, 12(12), 2049.
Łoś, H., Mendes, G. S., Cordeiro, D., Grosso, N., Costa, H., Benevides, P., & Caetano, M. (2021). Evaluation of Xgboost and Lgbm Performance in Tree Species Classification with Sentinel-2 Data. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 5803-5806). IEEE.
Persson, M., Lindberg, E., & Reese, H. (2018). Tree species classification with multi-temporal Sentinel-2 data. Remote Sensing, 10(11), 1794.
Thanh Noi, P., & Kappas, M. (2018). Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors, 18(1), 18.
Wessel, M., Brandmeier, M., & Tiede, D. (2018). Evaluation of different machine learning algorithms for scalable classification of tree types and tree species based on Sentinel-2 data. Remote Sensing, 10(9), 1419.
Given the increasing demand for food from a growing population together with a changing Earth, there is a need for more agricultural resources along with up-to-date cropland monitoring. Optical Earth observation is able to generate valuable data to estimate vegetation traits that directly affect agricultural resources and vegetation quality [Verrelst2015].
In the data era an unprecedented inflow of information is acquired from different satellite missions such as the Sentinel constellations, and exponentially more data is expected given the upcoming Sentinels such as the imaging spectroscopy mission CHIME. This valuable data stream can be used to obtain spatiotemporal-explicit quantification of a suite of vegetation traits across the globe.
Despite the plethora of satellite data freely available to the community, when it comes to developing and validating vegetation retrieval models, the most valuable information is the ground truth of the observations. This is a challenging problem, as it requires human-assisted annotation involving field campaigns with high monetary costs.
Due to the impossibility of collecting ground truth for the whole Earth at any time, one feasible alternative is to use prior knowledge about the Earth system in order to generate physically-plausible data.
As an alternative to in situ observations, spectral observations of surfaces can also be approximated with radiative transfer models (RTMs). RTMs are physically based models built to generate pairs of spectra and variables; they are of crucial importance in optical remote sensing due to their capability to model surface-radiation interactions.
In this work we propose the use of RTM simulations and large-scale machine learning (ML) algorithms to develop hybrid models of vegetation traits such as chlorophyll (Chl), at both leaf and canopy levels, and leaf area index (LAI). The kernel ridge regression (KRR) algorithm has proven to be effective for inferring such variables, but it is limited by the amount of data that can be used to build the model, as its training cost scales cubically with the number of samples. With the ambition of alleviating the KRR complexity burden, we compare the use of the large-scale techniques random Fourier features (RFF) [Rahimi2008], orthogonal random features (ORF) [Yu2016] and the Nyström method [Williams2001].
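The sketch below illustrates, with scikit-learn and synthetic data, how the Nyström and random Fourier feature approximations replace the exact kernel of KRR by a low-rank feature map followed by linear ridge regression; the kernel parameters and the rank of 300 are illustrative choices echoing the rank used in the experiments, not the exact experimental configuration.

import numpy as np
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for RTM-simulated training pairs (spectra -> biophysical variable)
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 60))                 # simulated reflectance spectra
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=3000)

# Exact KRR: cost grows cubically with the number of training samples
krr = KernelRidge(kernel="rbf", gamma=0.01, alpha=1e-3).fit(X, y)

# Large-scale approximations: map to a rank-300 feature space, then solve a linear ridge problem
rank = 300  # roughly 10% of the training sample in this toy setup
nystroem_model = make_pipeline(Nystroem(gamma=0.01, n_components=rank, random_state=0),
                               Ridge(alpha=1e-3)).fit(X, y)
rff_model = make_pipeline(RBFSampler(gamma=0.01, n_components=rank, random_state=0),
                          Ridge(alpha=1e-3)).fit(X, y)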
We focus on the retrieval of the above-mentioned biophysical variables by building hybrid models from training data generated with the SCOPE RTM. Several experiments were designed; regarding the large-scale methods, we studied both error and execution time as a function of the rank of these methods. The predictive behaviour of the proposed versions is as good as the original KRR while decreasing the execution time [PerezSuay2017]. In particular, when estimating canopy chlorophyll content, root mean squared error (RMSE) values close to 0.45 have been achieved with the Nyström method, which is close to the 0.4 achieved by KRR. In the case of the LAI parameter, the Nyström method achieves an RMSE of 0.8, which remains close to the 0.77 of KRR (the lowest). Regarding computational execution time, all the proposed methods reduce the execution time by almost one order of magnitude in the current configuration, where the selected rank of 300 represents 10% of the data sample used to build the model. Furthermore, all models were validated against in-situ data, achieving promising results in terms of accuracy. We have also evaluated the validity of the models by making inferences on CHIME-like scenes derived from PRISMA data. The obtained results are promising in terms of error and provide a pathway to building more generic models using a larger amount of available training data, reaching globally applicable models, e.g. in the context of the upcoming CHIME mission.
References
[PerezSuay2017] A. Pérez-Suay, J. Amorós-López, L. Gómez-Chova, V. Laparra, J. Muñoz-Marí, and G. Camps-Valls. "Randomized kernels for large scale earth observation applications". Remote Sensing of Environment, 202:54--63, 2017.
[Rahimi2008] A. Rahimi and B. Recht. "Random features for large-scale kernel machines". Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc., 2008.
[Verrelst2015] J. Verrelst, G. Camps-Valls, J. Muñoz-Marí, J. P. Rivera, F. Veroustraete, J. G. Clevers, and J. Moreno. "Optical remote sensing and the retrieval of terrestrial vegetation bio-geophysical properties – a review". ISPRS Journal of Photogrammetry and Remote Sensing, 108:273--290, 2015.
[Williams2001] C. Williams and M. Seeger. "Using the Nyström method to speed up kernel machines". Advances in Neural Information Processing Systems, volume 13. MIT Press, 2001.
[Yu2016] F. X. X. Yu, A. T. Suresh, K. M. Choromanski, D. N. Holtmann-Rice, and S. Kumar. "Orthogonal random features". Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.
Agricultural field masks or boundaries provide a basis for obtaining object-based agroinformatics such as crop type, crop yield, and crop water usage. Machine learning techniques offer an effective means of masking fields or delineating field boundaries using satellite data. Unfortunately, field boundary information can be difficult to obtain when trying to collect ground truth to train a machine learning model, since such information is not routinely available for many regions around the world. Manually creating field masks is an obvious solution to address this data gap, but this can consume a considerable amount of time, or simply be impractical when confronted with large mapping tasks (e.g. at national scale). Here, we propose a hybrid machine learning framework that combines clustering algorithms and convolutional neural networks to identify and delineate center-pivot agricultural fields. Using a multi-temporal sequence of Landsat-based normalized difference vegetation index collected over one of the major agricultural regions in Saudi Arabia as input, a training dataset was produced by identifying field shape (circle, fan, or neither) and establishing whether it consisted of multiple fields. When evaluated against 4,099 manually identified center-pivot fields, the framework showed high accuracy in identifying the fields, achieving 97.4% producer and 98.0% user accuracies on an object basis. The intersection-over-union accuracy was 96.5%. Based on the framework, field dynamics across the study region from 1988 to 2020 were obtained, including the number and acreage of fields, the spatial and temporal dynamics of field expansion and contraction, and the number of years a field was detected as active. Our work presents the first long-term assessment of such dynamics in Saudi Arabia, and the resulting agroinformatic data correlate well with government-driven policy initiatives to reduce water consumption. Overall, the framework was trained using a dataset that was easy and efficient to produce and relied on limited in-situ records. It demonstrated stable performance when applied to different periods, and has the potential to be applied at the national scale, providing agroinformatic data that may assist in addressing food and water security concerns.
Soil moisture (SM) is a pivotal component of the Earth system, affecting interactions between the land and the atmosphere. Numerous applications, such as water resource management, drought monitoring, rainfall-runoff modelling and landslide forecasting, would benefit from spatially and temporally detailed information on soil moisture. The ESA CCI provides long-term records of SM, globally, and with daily temporal resolution. However, its coarse spatial resolution (0.25°) limits its use in many of the above-mentioned applications.
The aim of this work is to downscale the ESA CCI SM product to 0.05° using machine learning and a set of static and dynamic variables affecting the spatial organization of SM at this scale. In particular, we employ land cover information from the Copernicus Global Land Service (CGLS) together with land surface temperature and reference evapotranspiration from the EUMETSAT Prototype Drought & Vegetation Data Cube (D&V DC). The latter facilitates access to numerous satellite-derived environmental variables and provides them on a regular grid.
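A common pattern for such ML-based downscaling is to learn the predictor-to-SM relationship at the coarse scale and then apply it to the same predictors sampled on the fine grid; the sketch below illustrates this generic idea with a random forest and synthetic arrays, and is not necessarily the exact scheme used in this work.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: predictors (e.g. land cover fraction, LST, reference ET) and coarse SM
rng = np.random.default_rng(0)
coarse_predictors = rng.normal(size=(2000, 3))                  # at 0.25 degree grid cells
coarse_sm = 0.3 + 0.05 * coarse_predictors[:, 1] + rng.normal(scale=0.01, size=2000)

# Learn the relationship at the coarse scale ...
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(coarse_predictors, coarse_sm)

# ... and apply it to the same predictors sampled on the fine (0.05 degree) grid
fine_predictors = rng.normal(size=(50000, 3))
fine_sm = model.predict(fine_predictors)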
Preliminary results against in-situ measurements across Europe obtained from the International Soil Moisture Network (ISMN) show that the downscaled SM preserves the high temporal accuracy of the ESA CCI SM while simultaneously increasing the spatial level of detail. Furthermore, spatial correlations against large in-situ networks (> 20 stations) suggest that the downscaled SM provides a better description of the spatial distribution of SM compared to the original ESA CCI product. We will also highlight the strengths of the proposed approach compared to other downscaled SM products and discuss some limitations and possible improvements.
Terrain-AI (T-AI) is a collaborative research project focussed on improving our knowledge and understanding of land use activity as it relates to climate change. To optimise sustainable land use, it is essential that we develop tools and information services that can inform more effective and sustainable management practices. The objective of this research is to establish a national network of benchmark sites and a digital data platform capable of integrating, analysing and visualising large volumes of Earth observation data streams, including data from satellites, drones and on-site measurements, and to feed these datasets into appropriate modelling approaches to simulate greenhouse gas fluxes, sources and sinks. The overall aim of T-AI is to increase our understanding of how management practices can influence carbon emissions arising from the landscape. As part of T-AI, we are utilising a range of model-based approaches, including empirical and dynamical models, to generate estimates of the energy, water and CO2 fluxes over croplands. While the majority of agricultural land in Ireland is given over to grass-based farming, tillage farming is practiced along the east and south coasts due to the suitability of soils and climate, with winter wheat and spring barley as the dominant crop types, which are the focus of this study.
Building on the SAFY-CO2 model framework proposed by Pique et al., we employ a light-use-efficiency based modelling approach with modules for soil water balance and carbon fluxes. Observations from multi-modal remote sensing data, including multi- and hyperspectral UAV, LIDAR, Sentinel-1 and Sentinel-2, are ingested into the model in a sequential data assimilation framework. ERA5-Land reanalysis data were processed for use as weather inputs. The model is subsequently evaluated at a selection of benchmark sites using eddy covariance flux tower data.
The Ensemble Kalman Filter (EnKF) method has been shown to be particularly suitable for the assimilation of remotely sensed data into crop models and has been extensively assessed for this purpose. The performance of the EnKF is affected by a range of factors, such as the number of observations ingested into the process. Numerous studies have shown that using a higher number of observations can result in improved estimation accuracy. Other factors, such as errors and uncertainties in the remote sensing observations, the variables that are retrieved, as well as crop model formulation errors and parameter uncertainties, also play an important role.
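For reference, the following NumPy sketch implements a generic stochastic (perturbed-observation) EnKF analysis step with a linear observation operator; the state and observation dimensions are illustrative, and this is not the project's implementation of the SAFY-CO2 assimilation.

import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H):
    """One EnKF analysis step.

    ensemble    : (n_ens, n_state) forecast state ensemble (e.g. crop model state variables)
    obs         : (n_obs,) observation vector (e.g. LAI retrieved from Sentinel-2)
    obs_err_var : (n_obs,) observation error variances
    H           : (n_obs, n_state) linear observation operator
    """
    n_ens = ensemble.shape[0]
    R = np.diag(obs_err_var)
    P = np.cov(ensemble, rowvar=False)                 # ensemble-estimated forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    rng = np.random.default_rng(0)
    obs_pert = obs + rng.normal(scale=np.sqrt(obs_err_var), size=(n_ens, obs.size))
    return ensemble + (obs_pert - ensemble @ H.T) @ K.T  # analysis ensemble

# Tiny example: 50 members, 3 state variables, 1 assimilated observation
ens = np.random.default_rng(1).normal(loc=[1.0, 2.0, 0.5], scale=0.2, size=(50, 3))
H = np.array([[0.0, 1.0, 0.0]])          # we observe the second state variable only
analysis = enkf_update(ens, obs=np.array([2.3]), obs_err_var=np.array([0.05]), H=H)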
The Ensemble Kalman Filter was thus applied to extend the SAFY-CO2 model framework using observations from the multispectral and hyperspectral UAV, Sentinel-1 and Sentinel-2 for the benchmark sites. In general, improvements in the simulated data could be observed. Due to cloud cover throughout the year, limited remote sensing data were available, which may have hindered the performance of the assimilation; the results of the data processing need to be further investigated at other sites.
Asia is the world's largest regional aquaculture producer, accounting for 88 percent (75 million tons) of the total global production, and has been the main driver of global aquaculture growth in recent years. The five largest aquaculture producing countries all come from Asia: China, India, Indonesia, Vietnam and Bangladesh. The farming of fish, shrimp, and mollusks in land-based pond aquaculture systems contributed most to Asia's dominant role in the global aquaculture sector, serving as a primary source of protein for millions of people. Aquaculture has expanded rapidly since the 1990s in low-lying areas with flat topography along the coasts of Asia, particularly in Southeast Asia and East Asia. As a result of the rapid global growth of aquaculture in recent years, the mapping and monitoring of aquaculture are a focus of coastal research and play an important role in global food security and the achievement of the UN Sustainable Development Goals.
We present a novel continental-scale mapping approach that uses multi-sensor Earth observation time series data to extract pond aquaculture within the entire Asian coastal zone, defined as the onshore area up to 200 km from the coastline. With free and open access to the rapidly growing volume of high-resolution C-band SAR and multispectral satellite data from the Copernicus Sentinel missions, as well as machine learning algorithms and cloud computing services, we automatically detected and extracted pond aquaculture at the single pond unit level. For this purpose, we processed more than 25,000 Sentinel-1 dual-polarized GRDH images, generated a temporal median image and applied image segmentation using histogram-based thresholding. The derived object-based pond units were enriched with multispectral time series information derived from Sentinel-2 L2A data, topographical terrain information, geometric features and OpenStreetMap data in order to detect coastal pond aquaculture and separate it from other natural or artificial water bodies. In total, we mapped more than 3.4 million aquaculture ponds with a total area of 2 million ha with a mean overall accuracy of 0.91, and carried out spatial and statistical data analyses in order to investigate the spatial distribution and identify production hotspots in various administrative units at regional, national, and sub-national scales.
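The segmentation step can be illustrated with a minimal sketch: histogram-based (Otsu) thresholding of a Sentinel-1 temporal median backscatter image followed by connected-component labelling of candidate water objects. The synthetic array and the assumption that ponds appear as dark VH backscatter are illustrative simplifications of the actual processing chain.

import numpy as np
from skimage.filters import threshold_otsu
from scipy import ndimage

# Stand-in for a Sentinel-1 VH temporal median backscatter image (dB); water appears dark
rng = np.random.default_rng(0)
median_vh = rng.normal(loc=-12, scale=2, size=(500, 500))
median_vh[100:160, 100:220] = rng.normal(loc=-22, scale=1, size=(60, 120))  # synthetic ponds

# Histogram-based thresholding to separate water from land
thresh = threshold_otsu(median_vh)
water_mask = median_vh < thresh

# Segment connected water bodies into candidate pond objects
labels, n_objects = ndimage.label(water_mask)
print(f"threshold = {thresh:.1f} dB, candidate water objects: {n_objects}")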
The application of Earth observation (EO) datasets and artificial intelligence was explored to develop EO-based monitoring of algal blooms. Opportunistic macroalgal blooms have been an essential factor in determining the ecological status of coastal and estuarine areas in Ireland and across the world. A novel approach to map green algal cover using the Normalised Difference Vegetation Index (NDVI) was developed using EO datasets. Scenes from the Sentinel-2A/B, Landsat-5 and Landsat-8 missions were processed for eight different estuarine areas of moderate, poor, and bad ecological status according to the European Union Water Framework Directive classification for transitional water bodies. Images acquired during low-tide conditions from 2010 to 2018 within 18 days of field surveys were considered for the investigation. The estimates of percentage coverage obtained from the different EO data sources and field surveys were significantly correlated (R2 = 0.94) with a Cohen's kappa coefficient of 0.69 ± 0.13. The results demonstrated that the NDVI-based methodology can be successfully applied to map the coverage of the blooms and to monitor estuarine areas in conjunction with other monitoring activities that involve field sampling and surveys. The combination of widespread cloud cover and high-tide conditions posed additional constraints on the selection of images. Considering these limitations, the findings showed that both Sentinel-2 and Landsat scenes can be used to estimate bloom coverage. Moreover, Landsat, because of its legacy programme dating back to the 1970s, can be used to reconstruct past blooms from historical archive data. Considering the importance of biomass for understanding the severity of algal accumulations, an Artificial Neural Network (ANN) model was trained using in situ historical biomass samples and a combination of radar backscatter (Sentinel-1) and optical reflectance in the visible and near-infrared regions (Sentinel-2) to predict biomass quantity. The ANN model based on multispectral imagery was suitable for estimating biomass quantity (R2 = 0.74). The model performance could be improved with the addition of more training samples over time. The developed methodology can be applied in other areas experiencing macroalgal blooms in a simple, cost-effective, and efficient way. Similarly, the technology can be replicated for other species of algae. The study has demonstrated that both the NDVI-based technique to map the spatial coverage of macroalgal blooms and the ANN-based model to compute biomass have the potential to become effective complementary tools for monitoring macroalgal blooms, where existing monitoring efforts can leverage the benefits of Earth observation datasets.
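As a minimal illustration of the NDVI-based coverage mapping, the sketch below computes NDVI from red and near-infrared reflectance and derives a percentage cover from a threshold; the threshold value and the synthetic reflectance arrays are illustrative assumptions, not the calibrated settings of the study.

import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index from red and near-infrared reflectance."""
    return (nir - red) / (nir + red + 1e-9)

# Stand-in for low-tide Sentinel-2 reflectance over an intertidal area
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.15, size=(400, 400))
nir = rng.uniform(0.02, 0.45, size=(400, 400))

ndvi_img = ndvi(red, nir)
algae_mask = ndvi_img > 0.3          # illustrative threshold, not the calibrated one
coverage_percent = 100.0 * algae_mask.mean()
print(f"estimated green algal cover: {coverage_percent:.1f}%")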
The analysis of Sentinel-2 time series has already proven invaluable for mapping and monitoring the land cover of Europe [1, 2] and has great potential to contribute to monitoring forests in the tropics [e.g. 3, 4]. The implementation of an operational processing system for Sentinel-2 based forest monitoring is subject to several challenges including the need for an accurate analytical framework that is both robust against phenological shifts and cloud cover and scalable in terms of computation and I/O enabling continental wide mapping within an adequate time frame.
The usage of deep learning methods for operational EO applications is becoming more and more popular in recent years. This comprises, for example, the extraction of building footprints with semantic segmentation on VHR images [5], delineation of agricultural field boundaries [6] or land cover mapping with convolutional neural networks in the time domain [2].
While sequential deep learning models such as Recurrent Neural Networks (RNN) are in principle very well suited to the analysis of satellite image time series of arbitrary and varying length, they tend to under- or overfit the training data, which often degrades their performance in real-world applications. Despite modifications to RNNs (e.g. Long Short-Term Memory – LSTM, Gated Recurrent Units – GRU) designed to address such issues, the use of RNNs for Sentinel-2 time series classification and land cover mapping at the continental or global scale is yet to be operationalized.
Inspired by recent advances in the design of RNNs for the analysis of satellite time series [7], our study explores how multi-layer RNN architectures can be used to classify raw Sentinel-2 time series at high accuracy, while taking certain measures to keep the approach computationally efficient and suitable for large-scale operational use. We identify three main contributors to overall processing time: loading of images, pre-processing steps (e.g. temporal resampling, which is commonly applied to satellite image time series for land cover classification) and the actual inference of the land cover class. It is worth noting that, when compared to the pixel-wise inference of time series on a continental scale (i.e. billions of pixels), model training and hyperparameter optimization are not necessarily a computational bottleneck, because we consider rather lightweight RNN architectures.
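To make the notion of a lightweight recurrent classifier concrete, the sketch below shows a plain GRU-based per-pixel time series classifier in PyTorch; it is a generic stand-in for the multi-layer RNN architectures discussed above (the study builds on [7]), with illustrative band, class and layer counts.

import torch
import torch.nn as nn

class PixelTimeSeriesGRU(nn.Module):
    """Lightweight recurrent classifier for per-pixel Sentinel-2 time series."""
    def __init__(self, n_bands=10, n_classes=3, hidden=64, n_layers=2):
        super().__init__()
        self.rnn = nn.GRU(n_bands, hidden, num_layers=n_layers, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, bands); time may vary between batches
        _, h_n = self.rnn(x)
        return self.head(h_n[-1])    # final hidden state of the last layer -> class logits

# Example: a batch of 256 pixels with 18 (cloud-filtered) acquisitions and 10 bands
model = PixelTimeSeriesGRU()
logits = model(torch.rand(256, 18, 10))

Because the recurrence simply runs over however many acquisitions are supplied, no temporal resampling of the raw time series is required before inference.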
In our study, we completely skip the pre-processing of the images by making predictions directly on raw Sentinel-2 Level-2A time series. Inference times of RNNs correlate with the length of the time series (i.e. the number of satellite images), so considering fewer satellite images reduces both inference and download times. We therefore employ scene-filtering methods that automatically select suitable images at the level of sub-units (~20 km) of Sentinel-2 granules. The scene filtering method strikes a balance between the desire to achieve good coverage for each sub-unit with a suitable number of less clouded images and the need to keep the overall number of Sentinel-2 scenes at a reasonable level (with implications for download and inference time).
The above-mentioned techniques constitute a lightweight processing chain with drastically reduced I/O (when compared to methods where all or most of the available images are loaded from S3 storage) and computation (when compared to approaches where pre-processing steps are employed). We demonstrate that thematic accuracies achieved are comparable to methods that are much greedier in terms of number of images being used and pre-processing steps being applied. The processing chain used in the CLC+ Backbone project to derive a land cover map over Europe with 11 land cover classes [2] serves as a reference (the CLC+ classification processing chain includes loading of all Sentinel-2 bands up to a cloud cover of 80% and a temporal resampling as a pre-processing step before the prediction of the map).
We demonstrate the above method using reference samples largely based on the LUCAS 2018 survey, extended by additional samples acquired during the CLC+ Backbone project. The classes considered for this study are: coniferous trees, deciduous trees, and the background class (i.e. no trees).
[1] https://land.copernicus.eu/pan-european/high-resolution-layers
[2] Probeck, M., Ruiz, I., Ramminger, G., Fourie, C., Maier, P., Ickerott, M., ... & Dufourmont, H. (2021). CLC+ Backbone: Set the Scene in Copernicus for the Coming Decade. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 2076-2079). IEEE.
[3] Nazarova, T., Martin, P., & Giuliani, G. (2020). Monitoring vegetation change in the presence of high cloud cover with Sentinel-2 in a lowland tropical forest region in Brazil. Remote Sensing, 12(11), 1829.
[4] Chen, N., Tsendbazar, N. E., Hamunyela, E., Verbesselt, J., & Herold, M. (2021). Sub-annual tropical forest disturbance monitoring using harmonized Landsat and Sentinel-2 data. International Journal of Applied Earth Observation and Geoinformation, 102, 102386.
[5] Sirko, W., Kashubin, S., Ritter, M., Annkah, A., Bouchareb, Y. S. E., Dauphin, Y., ... & Quinn, J. (2021). Continental-Scale Building Detection from High Resolution Satellite Imagery. arXiv preprint arXiv:2107.12283.
[6] https://blog.onesoil.ai/en/how-onesoil-uses-data-science
[7] Turkoglu, M. O., D'Aronco, S., Wegner, J., & Schindler, K. (2021). Gating Revisited: Deep Multi-layer RNNs That Can Be Trained. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Crop monitoring at field level depends upon the availability of consistent field boundaries. In Europe, each country has its own Land Parcel Information System (LPIS) used as reference parcels to calculate the maximum area eligible for direct payments under the Common Agricultural Policy. Updating the parcels is time-consuming for the administration and often based on orthophotos that are not always up to date. An automated field delineation would greatly ease this process by detecting new parcels and changes of parcel boundaries from one season to another. On the other hand, this delineation would allow the extraction of statistical features at field level without the need for manual intervention. This objective was successfully achieved by using ResUNet-a, a deep Convolutional Neural Network, on Sentinel-1 metrics based on coherence time series at 10 m spatial resolution. The use of Synthetic Aperture Radar (SAR) allows obtaining cloud-free composites early in the season with high contrast between different fields. ResUNet-a is a fully convolutional UNet that performs multitask semantic segmentation by estimating three metrics for each pixel: the extent probability (i.e., the probability of a pixel belonging to a field), the probability of being a boundary pixel and the distance to the closest boundary. This model is trained here on the LPIS of the year 2019 in Wallonia (Belgium) and applied to the year 2020. A watershed algorithm is then used on the three metrics to extract the predicted field polygons. The validation compares these predictions to the LPIS of 2020 on the one hand and to the LPIS of 2019 on the other hand to validate the detected changes. This assessment, obtained over more than 60,000 parcels, demonstrates that the proposed method has very good accuracy for field delineation, paving the way for in-season field delineation independent of manual inputs. On top of that, the method can detect new parcels, parcels that are no longer exploited and parcels that have changed compared to the last season. While such a delineation is critical for near real-time crop monitoring at field level, the approach is also very promising in the context of LPIS management for the Common Agricultural Policy, to point out which fields need to be updated or added.
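The final watershed step can be sketched as follows, assuming the three per-pixel outputs (extent probability, boundary probability and distance to the closest boundary) are available as arrays; the seed and mask thresholds are illustrative, not those of the study.

import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

# Stand-ins for the three per-pixel outputs of the multitask network described above
rng = np.random.default_rng(0)
extent = rng.uniform(size=(512, 512))      # probability of belonging to a field
boundary = rng.uniform(size=(512, 512))    # probability of being a boundary pixel
distance = rng.uniform(size=(512, 512))    # predicted distance to the closest boundary

# Seeds: confident field interiors, i.e. high extent, high distance, low boundary probability
seeds = (extent > 0.8) & (distance > 0.5) & (boundary < 0.2)
markers, _ = ndimage.label(seeds)

# Grow the seeds within the field mask, letting the boundary probability act as the relief
field_mask = extent > 0.5
segments = watershed(boundary, markers=markers, mask=field_mask)
print("number of candidate parcels:", segments.max())

Each labelled region can then be vectorised into a field polygon and compared against the reference LPIS.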
The human eye can approximately estimate the distance to objects that are relatively closer or further away on landscape photos. Advances in image analysis such as semantic or instance segmentation allow computers to identify objects on photos or videos in near real time. This capacity is also revolutionizing in-situ data collection for Earth Observation – potentially turning already existing geo-tagged photos into sources of in-situ data. The automatic estimation of the distance between the point of observation and the identified objects is the first step toward their localization. Moreover, approximate distance estimation can be used to determine fundamental landscape properties including openness. In this respect, a landscape is open if it is not surrounded by nearby objects which occlude the view.
In this work, we show how variations in the skyline on landscape photos can be used to approximate the distance to trees on the horizon. This is done by detecting the objects forming the skyline and analysing the skyline signal itself. The skyline is defined as the boundary between sky and non-sky (ground objects) of an image. The skyline signal is the height (y coordinate in the image) of the skyline expressed as a function of the image horizontal coordinate (x component).
In this study, we use 150 landscape photos collected during the 2018 Land Use/Cover Area frame Survey (LUCAS) campaign. In a first step, the landscape photos are semantically segmented with DeepLab-V3, trained on the Common Objects in Context (COCO) dataset, to provide a pixel-level classification of the objects forming the image. In a second step, a Conditional Random Fields (CRF) algorithm is applied to increase the detail of the segmentation and to extract the skyline signal. The CRF algorithm improves the skyline resolution, increasing, on average, the skyline length by a factor of two. This is an important result, which provides improved performance when estimating tree distances. For each photo, the skyline is described by the skyline signal, ysky[x], and by the associated object classes, ck[x]. In particular, the objects forming the skyline are identified and associated with different classes. The signal ck[x] returns the class to which pixel (x, ysky[x]) belongs. Different objects, such as trees, houses and buildings, have different geometrical properties and need to be analyzed separately. For this reason, object classification is a crucial step in the methodology developed in this work.
The main idea developed and exploited in this work is that distant objects show lower variations in the corresponding skyline signal. For instance, a close tree is characterized by an irregular profile which is rich in detail. When a tree forms the skyline, the corresponding skyline signal is affected by significant and fast variations. As the distance between the point of observation and the tree increases, details are lost and the skyline signal becomes smoother, with fewer details and variations. This principle has been developed by considering different metrics to quantify signal variations and investigating potential relationships between object distance and variation metrics.
Variation metrics have been computed considering first-order differences of the skyline signals. First-order differences, which correspond to a numerical derivative, remove offsets in the skyline signal and operate as a high-pass filter which enhances high-frequency signal variations. After computing first-order differences, three metrics were evaluated: the normalized segment length, the sample variance, and the absolute deviation. Each metric has been computed considering skyline segments belonging to the same object class, as identified by the signal ck[x]. In addition, the effect of windowing has been considered. Windowing has been used to limit the length of the segment used for the metric computation and has been introduced to mitigate the effect of different objects belonging to the same class. Consider, for instance, the case where a line of trees is present in the skyline. This line of trees can be slanted, and the trees can be at different distances. Since all the trees belong to the same object class, the corresponding skyline segment will be used for the metric computation. With windowing, only a portion of the skyline segment is used, reducing the impact of objects at different distances.
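The sketch below shows one plausible way to compute such windowed variation metrics from a skyline segment with NumPy; the exact definitions used in the study (in particular of the normalized segment length) may differ, so the formulas here are assumptions for illustration only.

import numpy as np

def variation_metrics(y_sky, window=50):
    """Windowed variation metrics of a skyline segment belonging to one object class.

    y_sky : 1-D array of skyline heights (pixels) for consecutive x positions.
    """
    d = np.diff(y_sky)                          # first-order differences act as a high-pass filter
    w = d[:window] if d.size > window else d    # simple windowing of the segment
    return {
        "normalized_segment_length": np.sum(np.sqrt(1.0 + w ** 2)) / w.size,  # assumed definition
        "sample_variance": np.var(w, ddof=1),
        "absolute_deviation": np.mean(np.abs(w - np.mean(w))),
    }

# A jagged (near) tree skyline versus a smoother, distant one
x = np.arange(200)
near_tree = 120 + 15 * np.sin(x / 3.0) + np.random.default_rng(0).normal(scale=4, size=200)
far_tree = 120 + 2 * np.sin(x / 3.0) + np.random.default_rng(0).normal(scale=0.5, size=200)
print(variation_metrics(near_tree))
print(variation_metrics(far_tree))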
The variation metrics have been evaluated against 475 reference distances carefully measured on orthophotos for objects belonging to the 'trees', 'houses', 'other plants' and 'other buildings' classes. As hypothesized, due to their fractal shape, the metrics based on skyline variations scale with distance for the trees and other plants classes, but they do not show a clear relationship for the buildings and houses classes, which are characterized by flat skyline profiles. Linear regression has been performed between the different metrics and the reference distances expressed on a logarithmic scale. For trees, the best performing windowed metric achieved an R2 of 0.47. This implies that 47% of the changes observed in the variation metric is explained through a linear relationship with the log of distance. The metric performs from a couple of meters to over 1000 meters, effectively determining the order of magnitude of the distance. This is an encouraging result, which shows the potential of skyline variation metrics for the estimation of the distance between trees and observation points.
The distance metrics analyzed in this work can be useful to quantify the evolution and perceptions of landscape openness, to guide simultaneous object location on oblique (e.g. street level) and ortho-imagery, and to gather in-situ data for Earth Observation.
Airbus Intelligence UK, in partnership with agrifood data marketplace Agrimetrics, has developed FieldFinder, a computer vision analytics service that uses state-of-the-art artificial intelligence to automatically delineate agricultural fields visible in optical satellite images. Using high-resolution imagery, growers, agribusinesses, retailers and institutions can be quickly and cost-effectively provided with up-to-date field boundaries at any geographic scale. Here we explore how FieldFinder uses deep learning instance segmentation to extract field polygons from images captured by Airbus' SPOT, Vision 1 and Pléiades satellites on demand.
Traditional field boundary capture methods, such as ground surveying or digitisation using aerial photography, can be exceptionally time consuming and therefore expensive to perform. FieldFinder produces agricultural field polygons quickly and remotely using cloud computing resources, removing the inefficiencies associated with manual field boundary data capture.
Furthermore, scaling up some traditional methods over particularly large areas can be a prohibitively expensive and elongated exercise. FieldFinder provides consistent, good quality field boundaries at any spatial scale with the same high level of accuracy throughout. FieldFinder delineates boundaries using high resolution satellite imagery, providing a reliable source of information, depicting even very small agricultural fields.
At the current stage in the development of FieldFinder, several geographically specific algorithms have been trained, including those for Western Europe, Iowa (also applicable to many other parts of the USA) and Kenya (also applicable to other regions with prevalent small holder agriculture). Although the ultimate goal is to develop a single algorithm that can be deployed anywhere in the world, it is important to approach this methodically, training and validating algorithms by territory, as there can be considerable observable differences in agricultural style between territories. The current algorithms have been developed by curating spatially and temporally varied ground truth datasets from a wide selection of high resolution satellite images, ensuring a high level of accuracy and accounting for different geographic regions that demonstrate distinct features.
A number of different sources of variation are represented in the training data, including different stages in the growing season, all possible land cover types and a wide range of observable features (including non-agricultural features, which must be seen by a training algorithm to reduce false detections). Data augmentation was used to further expand the available training data, incorporating possible random variation. Such data curation efforts ensured the production of good quality training data, maximising the performance of any algorithm trained; however, this is also a continuous process that develops as FieldFinder is used, constantly improving the training data and therefore the algorithms.
Not only is FieldFinder always improving in terms of its performance and geographic scope, but its capabilities are also constantly evolving, and these evolutions will also be presented. Recent work has focused on performing automatic agricultural field change detection, highlighting only those fields that have undergone observable boundary changes from one image epoch to the next. This is extremely valuable for organisations tasked with maintaining regularly updated agricultural field databases, as such a tool can significantly reduce the time, and therefore cost, required to update these databases. There is also ongoing research into transitioning to self-supervised learning, a cutting-edge paradigm for training neural networks with small amounts of training data. Data availability is often the primary blocker for the creation of Earth observation analytical algorithms, so this will not only accelerate the rollout of FieldFinder to new territories and use cases, but will also benefit future algorithm development.
The computer vision and deep learning techniques employed to develop FieldFinder are evolving at a sometimes startling pace, constantly giving rise to new technologies and therefore possibilities. These techniques are powerful, can provide solutions to numerous challenges and are applicable to almost every industry that makes use of Earth observation data. Similar algorithms can be developed for the detection, classification and tracking of any kind of object of interest, to provide advanced automatic mapping capabilities, site monitoring and alerting, or even for prediction and forecasting. Airbus continues to develop these technologies, constantly furthering and enhancing the actionable intelligence that can be extracted from high resolution satellite imagery.
Active fire detection for environmental monitoring is an important task that can be significantly supported by satellite image analysis. Active fires need to be detected not only for fire fighting in settled areas, but also for finding fires in the wilderness, which is only possible thanks to the global coverage of satellites.
Classically, active fire detection is based on multispectral signatures of fire on a per-pixel basis, sometimes including statistics of the surroundings. Such classical methods are fast, easy to apply and surprisingly powerful both in detecting and dissecting active fires. Following related work from Pereira [1], our work is based on the fire detection algorithms of Schroeder [2], Kumar-Roy [3], and Murphy [4], combined with methodological inspiration from modern deep learning.
Recent work on fire detection is presented in [5]. The authors use fire perimeter data from the California Fire Perimeter Dataset (CALFIRE) to create a multi-satellite collection of training data for fire segmentation. While the use of all satellites is an extremely interesting aspect of this work, the training data generation process is tailored to known fires in a small region of the world only and cannot safely distinguish active fires from burnt areas.
Pereira et al. follow a completely orthogonal approach on a global scale [1]. They apply three different, simple, explainable, and well-known active fire detection methods to Landsat multispectral images to derive global active fire detection training data and successfully train basic U-Net models on these data. In contrast to the first paper, however, they rely on a single satellite system.
Both papers are excellent contributions to the problem of fire detection from Earth observation data. Combining their methodologies with a more advanced data management and analysis pipeline is, however, promising.
In this project, we work towards closing this gap by using the Landsat data together with the given deterministic fire detection methods and fitting minimalistic deep neural networks to reproduce the same multispectral detections on Sentinel-2 data. Thereby, the traditional active fire detection models designed for Landsat instruments are safely transferred to input data from ESA's Sentinel-2 mission.
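As an illustration of this step, a minimal sketch in Python/PyTorch is given below, assuming per-pixel Sentinel-2 spectra and binary fire labels produced by one of the deterministic detectors; the array names, band count and layer sizes are illustrative assumptions rather than the project's actual configuration.

```python
# Minimal sketch (PyTorch): a small per-pixel network learning to reproduce
# binary fire/no-fire labels produced by a deterministic detector.
# The arrays s2_pixels and fire_labels are dummy stand-ins, not project data.
import torch
import torch.nn as nn

class PixelFireNet(nn.Module):
    """Tiny MLP mapping a multispectral pixel to a fire logit."""
    def __init__(self, n_bands: int = 13):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1),  # logit; apply sigmoid for a probability
        )

    def forward(self, x):
        return self.net(x)

s2_pixels = torch.rand(4096, 13)                      # (n_pixels, n_bands)
fire_labels = (torch.rand(4096, 1) > 0.99).float()    # sparse positives

model = PixelFireNet(n_bands=13)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(s2_pixels), fire_labels)
    loss.backward()
    optimiser.step()
```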
Based on this, we extend the work to integrate SAR data from Sentinel-1 and various methodologies of data preparation and fusion. For example, we apply a data preparation scheme based on a genetic algorithm for finding good representations of the whole multispectral information for this task [6], and we apply an automated model fusion technique that we previously applied with success to building instance classification [7].
The outcome of this project is a methodology for deriving global active fire datasets, together with baseline models from both simple data mining and deep learning regimes. Although these datasets may inherit errors from the underlying deterministic methods and the transformation process, they enable global fire monitoring, which is of high interest in the context of climate and deforestation analysis.
In the poster, we present early results that indicate the baseline performance of all steps, which we will improve during the course of this master's thesis research project.
References
[1] G. H. Almeida Pereira, A. M. Fusioka, B. T. Nassu and R. Minetto, "Active fire detection in Landsat-8 imagery: A large-scale dataset and a deep-learning study," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 178, p. 171–186, 2021.
[2] W. Schroeder, P. Oliva, L. Giglio, B. Quayle, E. Lorenz and F. Morelli, "Active fire detection using Landsat-8/OLI data," Remote Sensing of Environment, vol. 185, p. 210–220, 2016.
[3] S. S. Kumar and D. P. Roy, "Global operational land imager Landsat-8 reflectance-based active fire detection algorithm," International Journal of Digital Earth, vol. 11, no. 2, p. 154–178, 2018.
[4] S. W. Murphy, C. R. Souza Filho, R. Wright, G. Sabatino and R. Correa Pabon, "HOTMAP: Global hot target detection at moderate spatial resolution," Remote Sensing of Environment, vol. 177, p. 78–88, 2016.
[5] D. Rashkovetsky, F. Mauracher, M. Langer and M. Schmitt, "Wildfire Detection From Multisensor Satellite Imagery Using Deep Semantic Segmentation," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, p. 7001–7016, 2021.
[6] G. Dax, M. Laass and M. Werner, "Genetic Algorithm for Improved Transfer Learning Through Bagging Color-Adjusted Models," 2021, p. 2612–2615.
[7] E. J. Hoffmann, Y. Wang, M. Werner, J. Kang and X. X. Zhu, "Model Fusion for Building Type Classification from Aerial and Street View Images," Remote Sensing, vol. 11, no. 11, 2019.
Plant vigor assessment is an important issue in modern precision agriculture. The availability of Unmanned Aerial Vehicles (UAVs) and miniaturised remote sensing sensors has made it possible to obtain precise vigor assessments. To date, only high-resolution images are considered useful in this regard, and such images come at a cost in money and human effort. Naturally, it would be of much practical importance to achieve precise vigor assessment from openly available images, for example satellite images. The challenge here is the low resolution of such images; for instance, images acquired by the Sentinel-2A satellite of ESA's Sentinel-2 mission have a resolution of 10 m. In this research we try to tap the benefit of these freely available images while addressing the accuracy issues of plant vigor assessment.
The current state of the art shows the usefulness of the Normalized Difference Vegetation Index (NDVI) for plant vigor assessment. It is easy to compute and not very time-consuming even for a large area. However, given the low resolution of Sentinel-2 images, the NDVI values need rectification. We work around this problem with the help of high-resolution images and regression techniques. In other words, NDVI computed from high-resolution images is used to guide the vigor assessment algorithm by transfer learning.
As a case study, we used UAV images acquired over vineyards in Spain as part of the AI4Agriculture project. Sentinel-2 images were obtained from Sentinel Hub for the same week of acquisition as the UAV images. As there are soil tracks between the vineyard plants, we removed the soil tracks with an unsupervised classification algorithm. The transfer learning from UAV to Sentinel-2 images was achieved by means of regression techniques. After visualizing and verifying the relation between NDVI computed from Sentinel-2 images and UAV images, for both soil-segmented and unsegmented Sentinel-2 images, we trained several regression algorithms on these two NDVI values. A comparison between the algorithms showed the boosted regression tree to model the relationship best.
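A minimal sketch of this regression step, assuming co-located Sentinel-2 and (aggregated) UAV NDVI values and using scikit-learn's gradient boosted trees, is shown below; the variable names and synthetic data are illustrative only.

```python
# Hedged sketch (scikit-learn): regressing UAV-derived NDVI against
# Sentinel-2 NDVI with a boosted regression tree, as described above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

# Synthetic stand-ins for co-located Sentinel-2 and aggregated UAV observations.
rng = np.random.default_rng(0)
s2_ndvi = rng.uniform(0.1, 0.9, size=(2000, 1))              # predictor (10 m pixels)
uav_ndvi = 0.8 * s2_ndvi[:, 0] + rng.normal(0, 0.05, 2000)   # target (UAV NDVI)

X_train, X_test, y_train, y_test = train_test_split(s2_ndvi, uav_ndvi, test_size=0.3)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```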
This regression model is delivered to users, who can use it to rectify NDVI computation in similar cases. The software is available both as a platform-independent service and as an executable written in the Python programming language.
For almost 5.5 years now, Sentinel-2 has provided systematic global acquisitions of high-resolution optical imagery. Its capacity to observe the Earth with a spatial resolution of 10/20/60 m, combined with the spectral richness of 13 channels from 0.4 µm to 2.1 µm, is key to the success of many land and ocean applications such as vegetation monitoring, land use/land cover and water quality. Used as a constellation, Sentinel-2A and -2B can monitor the rapid evolution of the surface and allow detection of changes as fast as the data are acquired and processed to Level 2A.
Gold mining has occurred in the French Guiana forest for more than a century. Initially legal and later becoming illegal, the activity is now targeted by the French authorities, who put effort into tracking garimpeiros and stopping the mining. The use of Sentinel-2 data is part of this system and accelerates the processes used to detect illegal mines in remote forested regions.
The territory of French Guiana is 98% covered by forest, a large part of which is near-inaccessible primeval rainforest. The Amazonian forest is protected, and its destruction through illegal gold mining has irreversible consequences for the environment: it causes deforestation and pollutes the local water sources with toxic runoff from the mercury used to separate out the gold.
Using freely available S2A and S2B time series and incorporating machine-learning techniques, a software tool that shows suspected areas of illegal mining has been developed. In this presentation we will give an insight into the implemented methodology and show how Sentinel-2 acquisitions are used in an operational context to feed the information system supporting the fight against illegal gold mining.
A novel Artificial Intelligence (AI) method based on Earth Observation (EO) data for the identification of physical changes along the Swedish coast, especially physical constructions such as piers and jetties, is introduced. Using Sentinel-2 data in an Open Data Cube (ODC) environment, we first detect the coastline using convolutional (U-Net) models, then we detect the rate of change (and whether the change is permanent or temporary), and lastly we detect small constructions along the shoreline. Using Bayesian statistical inference, we are able to study time series and discern between temporary changes or noise and permanent changes. The long-term goal is to transform the methodology into a permanent monitoring service that can help municipalities combat environmental crime, for example by identifying illegal dredging and excavation activities affecting the marine environment and ecosystem. In addition, there is an added value of a Copernicus-based tool for municipalities and regions. This will support marine coastal planning regarding the dynamics of the coastal zone and show the robustness of AI-based technology for coastal and marine research.
One of the largest threats to the vast ecosystem of the Brazilian Amazon Forest is deforestation caused by human involvement and activity. The possibility to capture, document, and monitor these degradation events has recently become more feasible through the use of freely available satellite remote sensing data and machine learning algorithms suited for big datasets.
A fundamental challenge of such large-scale monitoring tasks is the automatic generation of reliable and correct land cover and land use (LULC) maps. This can be achieved by developing robust deep learning models that generalize well on new data. These approaches require large amounts of labeled training data. We use the latest results of the MapBiomas project as the ‘ground truth’ for developing new algorithms. In this project, Souza et al. [1] used yearly composites of USGS Landsat imagery to classify LULC for the whole of Brazil. Recently, the latest iteration of their work became available for the years 1985–2020 as Collection 6 (https://mapbiomas.org).
As tropical regions are often covered by clouds, radar data are better suited for continuous mapping than optical imagery, due to their cloud-penetrating capability. In a preliminary study [2], we combined data from ESA's Sentinel-1 (radar) and Sentinel-2 (multispectral) missions to develop algorithms that act on multi-modal and multi-temporal data to obtain accurate LULC maps. The best proposed deep learning network, DeepForestM2, employed a seven-month radar time series together with a single optical scene. This model reached an overall accuracy (OA) of 75.0% on independent test data, compared to an OA of 69.9% for a trained state-of-the-art (SotA) DeepLab model. We are now processing more data from 2020, in addition to further developing the deep learning networks and approaches to deal with weakly supervised learning [3] arising from reference data that are themselves inaccurate. We aim to improve the classification results qualitatively and quantitatively compared to SotA methods, especially with respect to generalizing well on new datasets. The resulting deep learning methods, together with the trained weights, will also be made accessible through a geoprocessing tool in Esri's ArcGIS Pro for users without a coding background.
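For illustration, a minimal PyTorch sketch of a dual-branch radar/optical fusion layout in the spirit described above is given below; it is not the DeepForestM2 architecture, and all channel counts and layer sizes are assumptions.

```python
# Hedged sketch (PyTorch): a multi-temporal Sentinel-1 branch and a
# single-date Sentinel-2 branch whose features are concatenated before a
# per-pixel LULC classifier. Illustrative layout only.
import torch
import torch.nn as nn

class RadarOpticalFusion(nn.Module):
    def __init__(self, sar_steps=7, sar_bands=2, s2_bands=10, n_classes=10):
        super().__init__()
        # Radar time series flattened to (steps * bands) input channels.
        self.sar_branch = nn.Sequential(
            nn.Conv2d(sar_steps * sar_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.opt_branch = nn.Sequential(
            nn.Conv2d(s2_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, n_classes, 1)  # per-pixel class scores

    def forward(self, sar, opt):
        # sar: (B, steps*bands, H, W); opt: (B, s2_bands, H, W)
        fused = torch.cat([self.sar_branch(sar), self.opt_branch(opt)], dim=1)
        return self.head(fused)

model = RadarOpticalFusion()
logits = model(torch.rand(1, 14, 64, 64), torch.rand(1, 10, 64, 64))
print(logits.shape)  # (1, n_classes, 64, 64)
```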
[1] Carlos M. Souza et al. “Reconstructing Three Decades of Land Use and Land Cover Changes in Brazilian Biomes with Landsat Archive and Earth Engine”. In: Remote Sensing 12.17 (2020), p. 2735. DOI: 10.3390/rs12172735.
[2] Melanie Brandmeier and Eya Cherif. “Taking the pulse of the Amazon rainforest by fusing multitemporal Sentinel 1 and 2 data for advanced deep-learning”. In: EGU General Assembly 2021, online, 19–30 Apr 2021. 2021, EGU21–3749. DOI: 10.5194/egusphere-egu21-3749.
[3] Zhi-Hua Zhou. “A brief introduction to weakly supervised learning”. In: National Science Review 5.1 (Jan. 2018), pp. 44–53. ISSN: 2095-5138. DOI: 10.1093/nsr/nwx106.
Time series of satellite images provide opportunities for agricultural resource monitoring and for deploying yield prediction models for particular types of forests and cereal crops. In this context, one of the preliminary steps is to obtain binary land cover maps in which the category of interest is well defined over a given study area, whereas the other category is difficult to describe since it includes all remaining land cover classes. In addition, traditional supervised classification models require labels to learn an appropriate discriminative model, and labeling each land cover type is time-consuming and labor-intensive.
Positive Unlabelled Learning (PUL) is a machine learning paradigm particularly suited to this one-class classification problem, which requires only samples of the class of interest. In such a setting, training data require only one set of positive samples and one set of unlabeled samples, the latter potentially containing both positive and negative samples. There are many classification situations in which PU data settings arise naturally, and this is well adapted to Earth observation applications, where unlabeled samples are plentiful. To the best of our knowledge, only a limited number of approaches have been proposed to cope with the complexity of satellite image time series data and exploit the plethora of unlabelled samples.
Our objective is to propose a new framework named PUL-SITS (Positive Unlabelled Learning of Satellite Image Time Series) that relies on a two-step learning technique. In the first step, a recurrent neural network autoencoder is trained only on positive samples. Subsequently, the same autoencoder model is employed to filter out reliable negative samples from the unlabelled data based on the reconstruction error of each sample. In the second step, both labeled (positive and reliable negative) and unlabelled samples are exploited in a semi-supervised manner to build the final binary classification model.
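A minimal PyTorch sketch of this two-step idea is given below, assuming per-pixel Sentinel-2 time series; the GRU autoencoder layout, the error threshold and all names are illustrative assumptions rather than the PUL-SITS implementation.

```python
# Hedged sketch (PyTorch). Step 1: fit a recurrent autoencoder on positive time
# series only; step 2: keep unlabelled series with a large reconstruction error
# as "reliable negatives" for a subsequent (semi-)supervised classifier.
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, n_bands=10, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_bands, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_bands)

    def forward(self, x):                                  # x: (B, T, n_bands)
        _, h = self.encoder(x)                             # h: (1, B, hidden)
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)      # repeat code along time
        dec, _ = self.decoder(z)
        return self.out(dec)                               # reconstruction

def reliable_negatives(model, unlabelled, quantile=0.9):
    """Flag unlabelled series whose reconstruction error exceeds a quantile."""
    with torch.no_grad():
        err = ((model(unlabelled) - unlabelled) ** 2).mean(dim=(1, 2))
    return unlabelled[err > torch.quantile(err, quantile)]

# Step 1: train on positives only (dummy data shown).
positives = torch.rand(256, 24, 10)
ae = SeqAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(20):
    opt.zero_grad()
    loss = ((ae(positives) - positives) ** 2).mean()
    loss.backward()
    opt.step()

# Step 2: filter reliable negatives from the unlabelled pool.
unlabelled = torch.rand(1024, 24, 10)
negatives = reliable_negatives(ae, unlabelled)
```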
We choose a study area located in the southwest of France, in the Haute-Garonne department, strongly characterized by the Cereals/Oilseeds and Forest land cover classes. The entire study site is enclosed in the Sentinel-2 tile T31TCJ, which covers an area of 4,146.2 km². The ground truth label data are obtained from various public land cover maps published in 2019, with a total of 846,838 pixels extracted from 7,358 randomly sampled objects. Since we are addressing a positive and unlabelled learning setting, we consider two different scenarios, each involving one particular land cover class as the positive class and all other land cover classes as negative, taking first Cereals/Oilseeds (resp. second Forest) as the positive input class, with a sample of 898 (resp. 846) labelled objects in Haute-Garonne. The attached figure illustrates (a) the study area location, (b) the ground truth spatial distribution and (c) the Sentinel-2 RGB composite.
To assess the quality of the proposed methodology, we design a fair evaluation protocol in which, for each experiment, we divide the data (both positive and negative classes) into two sets: training and test. The training set is then split again into two parts: the positive set and the unlabelled set. While the former contains only positive samples, the latter consists of samples from both positive and negative classes. Since the amount of positive samples may influence the model behaviour, we vary the number of positive objects over the set {20, 40, 60, 80, 100}.
Moreover, we provide a quantitative and qualitative analysis of our method with respect to recent state-of-the-art work in Positive Unlabeled Learning for satellite images. We consider first the One-Class SVM classifier and then a PU method that weights unlabelled samples to bias the learning stage, with the latter evaluated separately with a Random Forest and an ensemble of supervised algorithms. In addition, to disentangle the contributions of each component of our proposed semi-supervised approach, we provide two ablation studies. While One-Class SVM achieves the best performance among the state-of-the-art competitors, with weighted F-measure values ranging from 63.9 to 65.2 (resp. 82.7 to 87.2) for the class Cereals/Oilseeds (resp. Forest), PUL-SITS outperforms all other approaches with values ranging from 78.9 to 88.6 (resp. 91.4 to 92.9).
The shoreline is an important feature for several fields such as erosion rate estimation and coastal hazard assessment. However, its detection and delineation are tedious tasks when using traditional techniques or ground surveys, which are very costly and time-consuming. The availability of remotely sensed data that provide synoptic coverage of the coastal zone, together with recent advances in image processing methods, overcomes the limits of these traditional techniques. Recent advances in artificial intelligence have led to the development of Deep Learning (DL) algorithms, which have recently emerged in image processing and the earth sciences. Several studies have used these approaches for feature extraction via image classification, but no study has explored the potential of a DL method for automatic extraction of a sandy shoreline.
The present study implements a methodology for automatic detection and mapping of the position of the sandy shoreline. The performance of a supervised classification of multispectral images based on a convolutional neural network (CNN) model is explored. A comparative study against several robust machine learning (ML) models, namely SVM and RF, was carried out on the basis of predictive accuracy on a micro-tidal coast such as the Mediterranean coast.
The CNN model was developed for land cover classification (4 classes), designed, trained and applied in the eCognition software using Pléiades images. Its architecture was designed to meet our objective, the detection of a specific target class (wet sand) with relatively narrow dimensions. Several experiments with different sample patch sizes [(4 x 4), (8 x 8), (16 x 16) and (32 x 32)] were performed to define the number of convolutional layers. An architecture with an input layer of 8 x 8 pixels and 4 spectral bands, three convolution layers and max-pooling after the first layer was preferred. The hyper-parameters of the model were tuned empirically by cross-validation. The results were validated by calculating the distance between the extracted shoreline and a reference line acquired in situ on the same day as the Pléiades image acquisition.
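A minimal PyTorch sketch of a patch classifier with the dimensions quoted above (8 x 8 input, 4 bands, three convolution layers, max-pooling after the first, 4 classes) is shown below; filter counts are illustrative, and the study's own model was built in eCognition.

```python
# Hedged sketch (PyTorch) of a small patch classifier matching the quoted input
# dimensions. Filter counts are assumptions for illustration.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_bands=4, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 8x8 -> 4x4
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                           # x: (B, 4, 8, 8)
        return self.classifier(self.features(x).flatten(1))

patches = torch.rand(16, 4, 8, 8)
print(PatchCNN()(patches).shape)                    # (16, 4)
```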
Overall, all the models performed quite well, with an Overall Accuracy (OA) above 85%. The SVM algorithm achieved the lowest OA, around 85.8%, while RF and CNN achieved 90% and 91.4%, respectively. The performance of the CNN model is thus superior to that of the ML algorithms. Notably, 76% of the shoreline extracted by the CNN model lies within 0.5 m of the reference (in situ) shoreline, against 53% and 42% for the shorelines extracted by the RF and SVM algorithms, respectively.
Forests hold an essential role in the planet's balance in several respects, such as water supply, biomass production and climate regulation. However, the alarming rate of change of forest diversity threatens its sustainability and makes tree species mapping and monitoring one of the major worldwide challenges. Despite all the efforts deployed for tree species detection, forest inventory databases still rely on field surveys, which give inconsistent data at a highly restrictive cost and are unsuitable for large-scale monitoring. Earth observation satellite sensors such as LiDAR (Light Detection and Ranging) altimeters and hyperspectral sensors can take the lead in improving the detection of forest tree occupation by coupling spectrally resolved surface data with 3D canopy information. Although some previous research carried out tree species classification using these two technologies, those studies were mainly based on high-resolution Unmanned Aerial Vehicle (UAV) imagery rather than satellite remote sensing data.
This paper explores the potential of GEDI (Global Ecosystem Dynamics Investigation), PRISMA (Hyperspectral Precursor of the Application Mission) and the Sentinel-2 MultiSpectral Instrument (MSI) for tree species identification. The work also reduces data processing limitations through hyperspectral dimensionality reduction techniques and data augmentation approaches. Furthermore, the paper reviews machine learning algorithms and deep learning models for tree mapping. Along with these studies, we propose a supervised deep learning framework based on the Hyper3DNet CNN model to locate the major tree species within an image pixel.
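As a simple illustration of the dimensionality reduction step, the following scikit-learn sketch applies PCA to hyperspectral pixel vectors; the band count and the choice of PCA are assumptions for illustration only.

```python
# Hedged sketch (scikit-learn): reducing the dimensionality of hyperspectral
# pixels (e.g. ~230 PRISMA-like bands) before classification.
import numpy as np
from sklearn.decomposition import PCA

pixels = np.random.rand(5000, 230)          # (n_pixels, n_bands) stand-in
pca = PCA(n_components=30)                  # keep the leading 30 components
reduced = pca.fit_transform(pixels)
print(reduced.shape, pca.explained_variance_ratio_.sum())
```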
Different experiments are conducted, first to provide a performance comparison between the proposed framework and other machine learning models, and second to report a performance comparison between different satellite imagery products. The established work plan is applied to four regional datasets (England, Spain, France and Scotland) for accuracy assessment.
Results showed that hyperspectral data are critical for tree species detection, scoring a 95% average classification accuracy; the hyperspectral profile is thus a robust, discriminative source of information for tree species classification. Moreover, we conclude that LiDAR and multispectral data do not fit the established automated training approach, and that deep learning performs better than random forest and SVM classifiers, which reach only a 70% average classification accuracy. While the study endorses the robustness of hyperspectral satellite data for tree species mapping and finds that CNN models are inadequate for LiDAR data, further tests with multilayer perceptrons on the laser altimeter data could be considered towards a globally applicable automatic tree species discrimination solution.
Forests are a vital foundation for biodiversity, climate, and the environment worldwide. As existential habitats for numerous plant and animal species, forests are a driving factor for clean air, water, and soil. As accelerated climate change and its impacts, such as extreme weather events, threaten forests in these functions, continuous monitoring of forest areas becomes more and more important. The relevance of managing forests sustainably is also emphasized in the 2030 Agenda for Sustainable Development, in which forests are directly linked to multiple SDGs such as “Life on Land” and “Climate Action”. At present, however, maps of forests are often not up to date, and detailed information about forests is often not available.
In this work, we demonstrate how Artificial Intelligence (AI), particularly methods from Deep Learning, can be used to facilitate the next generation of Earth Observation (EO) services for forest monitoring. Relying on EO imagery from the Sentinel-2 satellites, we first discuss the importance of incorporating the multi-spectral and multi-temporal properties of this data source into Machine Learning models. Focusing on the challenge of segmenting forest types from EO imagery, we adapt and evaluate several state-of-the-art Deep Learning architectures for this task. We investigate different architectures and network modules to integrate the high-cadence imagery (the constellation of the two Sentinel-2 satellites allows a revisit time of 5 days on average) into the Machine Learning model. In this context, we propose an approach based on Long Short-Term Memory (LSTM) networks that allows learning temporal relationships from multi-temporal observations. The comparison of our approach against mono-temporal approaches revealed a clear improvement in the evaluation metrics when multi-temporal information is integrated.
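A minimal PyTorch sketch of the core idea, an LSTM consuming a per-pixel sequence of Sentinel-2 spectra and predicting a forest type, is given below; layer sizes and class count are illustrative and do not reflect the exact architecture used in this work.

```python
# Hedged sketch (PyTorch): an LSTM summarising a multi-temporal sequence of
# Sentinel-2 spectra per pixel and predicting a forest type.
import torch
import torch.nn as nn

class TemporalForestClassifier(nn.Module):
    def __init__(self, n_bands=10, hidden=128, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (B, T, n_bands)
        _, (h, _) = self.lstm(x)       # final hidden state summarises the season
        return self.head(h[-1])

series = torch.rand(32, 20, 10)        # 20 acquisitions, 10 bands per pixel
print(TemporalForestClassifier()(series).shape)   # (32, 3)
```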
We show how the proposed Deep Learning models can be used to obtain a more continuous forest mapping and thus provide accurate insights into the current status of forests. This mapping can complement and supplement existing forest mappings (e.g., from the Copernicus Land Monitoring Service). To that end, we provide a Deep Learning-based segmentation map of forests on a Pan-European scale at 10-meter pixel resolution for the year 2020. This novel map is evaluated on high-quality datasets from national forest inventories and the in-situ annotations from the Land Use/Cover Area frame Survey (LUCAS) dataset. We finally outline how our approaches allow additional near-real-time monitoring applications of large forest areas outside of Europe. This work is funded by the European Space Agency through the QueryPlanet 4000124792/18/I-BG grant.
This abstract aims to highlight how a novel approach based on a deep learning segmentation model was developed and implemented to generate land cover maps by fusing multiple data sources. The solution was tailored to put greater emphasis on improving its robustness, simplifying its architecture, and limiting its dependencies.
To deal with regional environmental, climatic, and territorial management challenges, authorities need a precise and frequently updated representation of the fast-changing urban-rural landscape. In 2018, the WALOUS project was launched by the Public Service of Wallonia, Belgium, to develop reproducible methodologies for mapping Land Cover (LC) and Land Use (LU) (Beaumont et al. 2021) over the Walloon region. The first edition of this project was led by a consortium of universities and research centres and lasted 3 years. In 2020, the resulting LC and LU maps for 2018, based on an object-based classification approach (Bassine et al. 2020), updated the outdated 2007 map (Baltus et al. 2007) and allowed the regional authorities to meet the requirements of the European INSPIRE Directive. However, although end-users suggested that regional authorities should be able to update these maps on a yearly basis according to the aerial imagery acquisition strategy (Beaumont et al. 2019), the Walloon administration quickly realized that it did not have the resources to understand and reproduce the method because of its complexity and the relatively concise handover. A new edition of the WALOUS project started in 2021 to bridge those gaps. AEROSPACELAB, a private Belgian company, was selected for WALOUS's 2nd edition on the strength of its proposal to simplify and automate the LC map generation process using a supervised deep learning segmentation model.
An LC map assigns to each pixel of a georeferenced raster a class describing its artificial or natural cover. Hence, the task for the model is to predict the class associated with each pixel, resulting in a semantically segmented map. Several approaches have been suggested in the literature to solve this task. These can be grouped into three main categories, each with its own strengths and weaknesses:
• Pixel-based classification
These models classify each pixel independently of its neighbors. This lack of cohesion between the classifications of neighboring pixels can result in a speckle or “salt and pepper” effect (Belgiu et al. 2018). Another drawback of this approach is its inference time.
• Object-based classification
The classification is done for a group of pixels simultaneously, hence reducing the speckle effect and the inference time. However, the question of how to group the pixels into homogeneous objects must then be addressed: a spatial, temporal, and spectral clustering algorithm has to be defined to avoid over-segmentation and under-segmentation.
• Deep Learning segmentation
Deep Learning segmentation models do not require as much feature engineering. The segmentation and classification of the pixels are done simultaneously, ensuring a strong cohesion in the resulting predictions. However, these models are prone to producing smooth object boundaries instead of the sharper ones obtained with the other approaches. This can be seen as a drawback when segmenting artificial objects, which often have clear boundaries, but is less of a concern when segmenting natural classes, whose transitions are less clearly defined.
The solution implemented for WALOUS's 2nd edition revolves around a Deep Learning segmentation model based on the DEEPLAB V3+ architecture (Chen et al. 2017) (Chen et al. 2018). This architecture was selected to facilitate the segmentation of objects at different scales. Lakes, forests, and buildings are all examples of objects that can indeed be observed at different scales in aerial imagery. Segmenting objects that exist at multiple scales can be challenging for a model whose fields-of-view are not dimensioned appropriately. DEEPLAB V3+'s main distinguishing features, atrous convolutions and atrous spatial pyramid pooling, alleviate this problem without much impact on inference time, since atrous convolutions widen the fields-of-view without increasing the kernel's dimensions. Slight technical adjustments were made to tailor the architecture to the task: on the one hand, the segmentation head was adjusted to comply with the 11 classes representing the different ground covers; on the other hand, the input layer was altered to cope with the 5 data sources. Figure 2 offers a high-level overview of the overall architecture of the solution.
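The two adjustments mentioned above can be illustrated with the following hedged PyTorch/torchvision sketch; torchvision ships DeepLabV3 (without the v3+ decoder), so the snippet only shows the adaptation pattern, and the six-channel input is an assumption.

```python
# Hedged sketch (PyTorch/torchvision): an 11-class segmentation head and an
# input layer widened for fused data sources. Channel count (6) is assumed.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=11)

# Widen the first convolution of the ResNet backbone from 3 to 6 channels.
old = model.backbone.conv1
model.backbone.conv1 = nn.Conv2d(6, old.out_channels,
                                 kernel_size=old.kernel_size,
                                 stride=old.stride,
                                 padding=old.padding,
                                 bias=False)

model.eval()
x = torch.rand(1, 6, 256, 256)        # stacked imagery + DTM + DSM tile
with torch.no_grad():
    out = model(x)["out"]             # (1, 11, 256, 256) per-pixel class scores
print(out.shape)
```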
Data fusion was a key aspect of this solution as the model was trained on various sources with different spatial resolutions:
• high-resolution aerial imagery with 4 spectral bands (Red, Blue, Green, and Near-Infrared) and a ground sample distance (GSD) of 0.25m;
• digital terrain model obtained via LiDAR technology; and
• digital surface model derived from the aforementioned high-resolution aerial imagery by photogrammetry.
The pre-trained model was initially trained using WALOUS’s previous edition LC map (artificially augmented), and then a fine-tuning phase was performed on a set of highly detailed and accurate LC tiles that were manually labelled.
As many model architectures and data sources were considered, the model was implemented with the open-source DETECTRON2 framework (Wu et al. 2019), which allows for rapid prototyping. Among the initial prototypes, a POINTREND extension (Kirillov et al. 2020) was studied to improve the segmentation at objects' boundaries, and a ConvLSTM was implemented to segment satellite imagery with high temporal and spectral resolution such as Sentinel-2 (Rußwurm et al. 2018), facilitating the discrimination of classes that have similar spectral signatures in a single high (spatial) resolution image but clearly distinguishable spectral signatures when sampled over a year (e.g. softwood versus hardwood, or grass cover versus agricultural parcel).
The final model segments Wallonia into 11 classes ranging from natural covers – grass cover, agricultural parcel, softwood, hardwood, and water – to artificial covers – artificial cover, artificial construction, and railway. It achieves an overall accuracy of 92.29% on a test set consisting of 1,710 photo-interpreted points. Figure 2 gives an overview of the various predictions (GSD: 0.25 m) made by the model. Moreover, besides updating the LC map, the solution also compares the new predictions with the previous LC map and derives a change map highlighting, for each pixel, the LC transitions that may have occurred between the two studied years.
In conclusion, the newly implemented algorithm generated the new 2019 and 2020 LC maps, resampled at 1 m/pixel, which were published in early 2022. Although relying on fewer data sources and requiring less feature engineering than the object-based classification model implemented for the first edition of WALOUS, this new approach shows similar performance. Its reduced complexity played a favorable role in its appropriation by the local authorities. Finally, the public administration will be trained to use the AI algorithm with each new annual aerial image acquisition.
References:
Baltus, C.; Lejeune, P.; and Feltz, C., Mise en œuvre du projet de cartographie numérique de l’Occupation du Sol en Wallonie (PCNOSW), Faculté Universitaire des Sciences Agronomiques de Gembloux, 2007, unpublished
Beaumont, B.; Stephenne, N.; Wyard, C.; and Hallot, E.; Users’ Consultation Process in Building a Land Cover and Land Use Database for the Official Walloon Georeferential. 2019 Joint Urban Remote Sensing Event (JURSE), Vannes, France, 1–4. doi:10.1109/JURSE.2019.8808943
Beaumont, B.; Grippa, T.; Lennert, M.; Radoux, J.; Bassine, C.; Defourny, P.; Wolff, E., An Open Source Mapping Scheme For Developing Wallonia's INSPIRE Compliant Land Cover And Land Use Datasets. 2021.
Bassine, C.; Radoux, J.; Beaumont, B.; Grippa, T.; Lennert, M.; Champagne, C.; De Vroey, M.; Martinet, A.; Bouchez, O.; Deffense, N.; Hallot, E.; Wolff, E.; Defourny, P. First 1-M Resolution Land Cover Map Labeling the Overlap in the 3rd Dimension: The 2018 Map for Wallonia. Data 2020, 5, 117. https://doi.org/10.3390/data5040117
Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H., Rethinking Atrous Convolution for Semantic Image Segmentation. Cornell University / Computer Vision and Pattern Recognition. December 5, 2017.
Chen, L.-C., Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H., Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. ECCV. 2018
Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.-Y.; Girshick, R., Detectron2. https://github.com/facebookresearch/detectron2. 2019.
Kirillov, A.; Wu, Y.; He, K.; Girshick, R., PointRend: Image Segmentation as Rendering. February 16, 2020.
Rußwurm, M.; Korner, M., Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders. International Journal of Geo-Information. March 21, 2018.
Belgiu, M.; Csillik, O., Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sensing of Environment. 2018, pp. 509-523.
While more and more people are pulled to cities, uncontrolled urban growth poses pressing threats such as poverty and environmental degradation. In response to these threats, sustainable urban planning will be essential. However, the lack of timely information on the sprawl of settlements is hampering urban sustainability efforts. Earth observation offers great potential to provide the missing information by detecting changes in multi-temporal satellite imagery.
In recent years, the remote sensing community has brought forward several supervised deep learning methods using fully Convolutional Neural Networks (CNNs) to detect changes in multi-temporal satellite imagery. In particular, the vast amount of high resolution (10–30 m) imagery collected by the Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 MultiSpectral Instrument (MSI) missions have been used extensively for this purpose. For example, Daudt et al. (2018) proposed a Siamese network architecture to detect urban change in bi-temporal Sentinel-2 MSI image pairs. Papadomanolaki et al. (2021) incorporated fully convolutional Long Short-Term Memory (LSTM) blocks into a CNN architecture to effectively leverage time series of Sentinel-2 MSI images. Hafner et al. (2021b) demonstrated the potential of data fusion with a dual stream network for urban change detection from Sentinel-1 SAR and Sentinel-2 MSI data.
Although these urban change detection methods achieved promising results on small datasets, label scarcity hampers their usefulness for urban change detection at a global scale considerably. In contrast to change labels, building footprint data and urban maps are readily available for many cities. Several recent efforts leveraged open urban data to train CNNs on Sentinel-2 MSI data (Qiu et al., 2020; Corbane et al., 2020) and the fusion of Sentinel-1 SAR and Sentinel-2 MSI data (Hafner et al., 2021a). In our previous work, we developed an unsupervised domain adaptation approach that leverages the fusion of Sentinel-1 SAR and Sentinel-2 MSI data to train a globally applicable CNN for built-up area mapping.
In this study, we propose a post-processing method to detect changes in time series of CNN segmentation outputs to take advantage of the outlined recent advances in CNN-based urban mapping. Specifically, a step function is employed at a 3x3 pixel neighborhood for break point detection in time series of CNN segmentation outputs. The magnitude of output probability change between the segmented time series parts is used to determine whether change occurred for a given pixel. We also replaced the monthly Planet mosaics of the SpaceNet7 dataset with Sentinel-1 SAR and Sentinel-2 MSI images (Van Etten et al., 2021), and used this new dataset to demonstrate the effectiveness of our urban change detection method. Preliminary results on the rapidly urbanizing SpaceNet7 sites indicate good urban change detection performance by our method (F1 score 0.490). Particularly compared to post-classification comparison using bi-temporal data, the proposed method achieved improved performance. Moreover, the timestamps of detected changes were extracted for change dating. Qualitative results show good agreement with the SpaceNet7 ground truth for change dating. Our future research will focus on developing end-to-end solutions using semi-supervised deep learning.
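The break-point idea can be sketched as follows in NumPy for a single pixel; the 3x3 neighbourhood aggregation described above is omitted for brevity, and the change-magnitude threshold is illustrative.

```python
# Hedged sketch (NumPy): fit a step function to a pixel's time series of CNN
# output probabilities, pick the break that minimises the residual, and flag
# change when the magnitude of the step exceeds a threshold.
import numpy as np

def detect_step_change(probs, min_magnitude=0.5):
    """Return (change_flag, break_index, magnitude) for a 1-D probability series."""
    best_idx, best_sse = None, np.inf
    for k in range(1, len(probs)):                      # candidate break points
        left, right = probs[:k], probs[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_idx = sse, k
    magnitude = probs[best_idx:].mean() - probs[:best_idx].mean()
    return abs(magnitude) >= min_magnitude, best_idx, magnitude

series = np.array([0.05, 0.1, 0.08, 0.12, 0.7, 0.8, 0.75, 0.85])
print(detect_step_change(series))                       # change detected at index 4
```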
ACKNOWLEDGEMENTS
The research is part of the project ’Sentinel4Urban: Multitemporal Sentinel-1 SAR and Sentinel-2 MSI Data for Global Urban Services’ funded by the Swedish National Space Agency, and the project ’EO4SmartCities’ within the ESA and Chinese Ministry of Science and Technology’s Dragon 4 Program.
References
Corbane, C., Syrris, V., Sabo, F., Politis, P., Melchiorri, M., Pesaresi, M., Soille, P., Kemper, T., 2020. Convolutional Neural Networks for Global Human Settlements Mapping from Sentinel-2 Satellite Imagery.
Daudt, R. C., Le Saux, B., Boulch, A., 2018. Fully convolutional siamese networks for change detection. 2018 25th IEEE International Conference on Image Processing (ICIP), IEEE, 4063–4067.
Hafner, S., Ban, Y., Nascetti, A., 2021a. Exploring the fusion of sentinel-1 sar and sentinel-2 msi data for built-up area mapping using deep learning. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, IEEE, 4720–4723.
Hafner, S., Nascetti, A., Azizpour, H., Ban, Y., 2021b. Sentinel-1 and Sentinel-2 Data Fusion for Urban Change Detection using a Dual Stream U-Net. IEEE Geoscience and Remote Sensing Letters.
Papadomanolaki, M., Vakalopoulou, M., Karantzalos, K., 2021. A Deep Multitask Learning Framework Coupling Semantic Segmentation and Fully Convolutional LSTM Networks for Urban Change Detection. IEEE Transactions on Geoscience and Remote Sensing.
Qiu, C., Schmitt, M., Geiß, C., Chen, T.-H. K., Zhu, X. X., 2020. A framework for large-scale mapping of human settlement extent from Sentinel-2 images via fully convolutional neural networks. ISPRS Journal of Photogrammetry and Remote Sensing, 163, 152–170.
Van Etten, A., Hogan, D., Manso, J. M., Shermeyer, J., Weir, N., Lewis, R., 2021. The multi-temporal urban development spacenet dataset. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6398–6407.
Large scale mapping of linear disturbances in forest areas using deep learning and Sentinel-2 data across boreal caribou herd ranges in Alberta, Canada
Ignacio San-Miguel1, Olivier Tsui1, Jason Duffe2, Andy Dean1
1 Hatfield Consultants Partnership, 200 – 850 Harbourside Drive, North Vancouver, BC, V7P 0A3, Canada
2Landscape Science and Technology Division, Environment and Climate Change Canada - 1125 Colonel By Drive, Ottawa, ON, K1A 0H3, Canada
ABSTRACT
In the Canadian boreal forest region, habitat fragmentation due to linear disturbances (roads, seismic exploration lines, pipelines, and energy transmission corridors) is a leading cause of the decline of the boreal population of woodland caribou (Rangifer tarandus); as a result, a deep understanding of linear disturbances (amount, spatial distribution, dynamics) has become a research and forest management priority in Canada.
Canada imposed regulatory restrictions on the density of forest habitat disturbance in woodland caribou ranges, given the species' protection under the Species at Risk Act (SARA). To support current regulations, government agencies currently rely on manual digitization of linear disturbances using satellite imagery across very large areas. Examples of these datasets include the Anthropogenic Disturbance Footprint Canada (ADFC) dataset (Pasher et al., 2013), derived by visual interpretation of Landsat data to map linear disturbances across more than 51 priority herds covering millions of hectares, for 2008-2010 at 30 m and for 2015 at both 30 and 15 m (using the panchromatic band); and the Human Footprint (HF) dataset (ABMI, 2017), a vector polygon layer that captures linear disturbances across a grid of 1,656 sample sites of 3 by 7 km (~3.5 Mha) distributed across the province of Alberta and collected from 1999 to 2017. Such efforts are laudable, yet time-consuming and expensive across large areas, resulting in incomplete and infrequent coverage. The need for cost-effective methods to map linear disturbances in forest settings is ubiquitous.
Automated methods using machine learning are a desired alternative to enable frequent and consistent mapping of linear disturbances across large areas at a reduced cost. Recent advancements in deep learning (DL) algorithms and cloud computing represent an opportunity to bridge the gap in accuracy between methods using visual interpretation and automated methods relying on machine learning. DL algorithms explicitly account for spatial context (in the case of 2D and 3D convolutional neural networks) and can assemble complex patterns from local, simpler patterns, which makes them particularly suitable for geometric challenges where contextual information is relevant, as in linear disturbance detection.
Automatic extraction of roads from satellite imagery using DL is gaining increasing attention; however, to date, most existing methods for detecting linear features with remote sensing data and DL focus on urban paved roads, and none focus on linear disturbances in forest areas (e.g., seismic lines, logging roads, pipeline corridors). Linear disturbance extraction in forest areas poses unique challenges compared to the mapping of urban paved roads, which preclude the application of current methods without adaptation. First, the current technology was developed using very high-resolution (VHR) imagery and not high-resolution (HR) imagery like Sentinel-2. Second, linear disturbances in forest areas come in very diverse types, each with its particularities, and the features are generally narrower and more irregular than paved roads. Third, linear disturbances in forested areas have varying road surface conditions and surrounding vegetation cover, while those in urban settings are more homogeneous.
The objective of this research is to develop and evaluate the accuracy of an automated algorithm to extract linear disturbances in forest areas across boreal caribou herd ranges in Alberta, Canada, using DL and 10m spatial resolution Sentinel-2 data. Specifically, this study explores the capacity of various Unet-inspired architectures (Unet, Resnet, Inception, Xception) coupled with transfer learning to perform pixel-level binary classification of linear disturbances.
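One possible realisation of such a transfer-learning set-up, sketched here with the segmentation_models_pytorch library, is shown below; the encoder choice, band count and loss are assumptions for illustration, not the exact configuration evaluated in this study.

```python
# Hedged sketch: a U-Net with an ImageNet pre-trained encoder performing
# pixel-level binary segmentation of linear disturbances from Sentinel-2 tiles.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="inceptionresnetv2",   # transfer learning via the encoder
    encoder_weights="imagenet",
    in_channels=10,                     # e.g. 10 Sentinel-2 bands (assumed)
    classes=1,                          # binary: disturbance vs. background
)

tile = torch.rand(1, 10, 256, 256)
logits = model(tile)                    # (1, 1, 256, 256); sigmoid -> probability
loss = torch.nn.functional.binary_cross_entropy_with_logits(
    logits, torch.zeros_like(logits))   # dummy target for illustration
```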
The HF vector dataset was used as training data, covering 3.5 Mha across Alberta for the year 2017. HF was derived by visual interpretation of SPOT-7 and ortho-imagery, thus capturing some details that are not discernible in the 10 m Sentinel-2 data, which introduces some error into the training data.
DL model results are promising, with Intersection over Union (IoU) accuracies ranging from moderate-low to fair (0.3-0.5) for various types of unpaved roads and pipelines, with the finer-scale seismic lines largely undetected (IoU of 0.1). The best-performing model used transfer learning with an InceptionResNetV2 encoder pre-trained on the ImageNet dataset. The main challenges identified in the accurate prediction of linear disturbances include variability in land cover conditions, occlusion and shadows cast by forest vegetation on adjacent roads, and the width of the target linear disturbances, where features less than 10 m wide go largely undetected with Sentinel-2. We discuss the trade-offs, challenges and options related to evaluating model accuracy using multiple metrics and DL architectures.
This research demonstrates the potential of a cost-effective method using DL architectures coupled with Sentinel-2 data to maintain current and accurate maps of linear disturbances in highly dynamic forest areas to support caribou conservation efforts. Building upon the standardized methods proposed here, very large areas could be mapped frequently to, potentially, create a comprehensive national linear disturbance database to support decision-making for caribou habitat conservation.
Keywords— deep learning, linear disturbances, Sentinel-2, Unet, Caribou
REFERENCES
ABMI Human Footprint Inventory: Wall-to-Wall Human Footprint Inventory. 2017. Edmonton, AB: Alberta Biodiversity Monitoring Institute and Alberta Human Footprint Monitoring Program, May 2019.
Ministry of Forests Lands and Natural Resource, 2020. Digital Road Atlas - Province of British Columbia [WWW Document]. URL https://www2.gov.bc.ca/gov/content/data/geographic-data-services/topographic-data/roads (accessed 3.10.20).
Pasher, J., Seed, E., Duffe, J., 2013. Development of boreal ecosystem anthropogenic disturbance layers for Canada based on 2008 to 2010 Landsat imagery. Canadian Journal of Remote Sensing 39, 42–58.
Ronneberger, O., Fischer, P., Brox, T., 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597 [cs].
Zhang, Z., Liu, Q., Wang, Y., 2018. Road Extraction by Deep Residual U-Net. IEEE Geosci. Remote Sensing Lett. 15, 749–753. https://doi.org/10.1109/LGRS.2018.2802944
The emergence of cloud computing services capable of storing and processing big EO data sets allows researchers to develop innovative methods for extracting information. One of the relevant trends is to work with satellite image time series, which are calibrated and comparable measures of the same location on Earth at different times. When associated with frequent revisits, image time series can capture significant land use and land cover changes. For this reason, developing methods to analyse image time series has become a relevant research area in remote sensing.
Given this motivation, the authors have developed *sits*, an open-source R package for satellite image time series analysis using machine learning. The package incorporates new developments in image catalogues for cloud computing services. It also includes deep learning algorithms for image time series analysis published in recent papers. It has innovative methods for quality control of training data. Parallel processing methods specific to data cubes ensure efficient performance. The package provides functionalities beyond existing software for working with big EO data.
The design of the *sits* package considers the typical workflow for land classification using satellite image time series. Users define a data cube by selecting a subset of an analysis-ready data image collection. They obtain the training data from a set of points in the data cube whose labels are known. After performing quality control on the training samples, users build a machine learning model and use it to classify the entire data cube. The results go through a spatial smoothing phase that removes outliers. Thus, *sits* supports the entire cycle of land use and land cover classification.
Using the STAC standard, *sits* supports the creation of data cubes from collections available in the following cloud services: (a) Sentinel-2 and Landsat-8 from Microsoft Planetary Computer; (b) Sentinel-2 images from Amazon Web Services; (c) Sentinel-2, Landsat-8, and CBERS-4 images from the Brazil Data Cube (BDC); (d) Landsat-8 and Sentinel-2 collections from Digital Earth Africa; (e) Landsat-5/7/8 collections from USGS.
The package provides support for the classification of time series, preserving the full temporal resolution of the input data. It supports two kinds of machine learning methods. The first group of methods does not explicitly consider spatial or temporal dimensions; these models treat time series as vectors in a high-dimensional feature space. From this class of models, *sits* includes random forests, support vector machines, extreme gradient boosting [1], and multi-layer perceptrons.
The second group of models comprises deep learning methods designed to work with image time series. Temporal relations between observed values in a time series are taken into account. The *sits* package supports a set of 1D-CNN algorithms: TempCNN [2], ResNet [3], and InceptionTime [4]. Models based on 1D-CNNs treat each band of an image time series separately. The order of the samples in the time series is relevant for the classifier. Each layer of the network applies a convolution filter to the output of the previous layer. This cascade of convolutions captures time series features at different time scales [2]. The authors have used these methods successfully for classifying large areas [5, 6, 7].
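For illustration, a TempCNN-style classifier can be sketched as follows (in Python/PyTorch rather than R); layer sizes follow the spirit of Pelletier et al. [2] but are illustrative, and this is not the *sits* implementation.

```python
# Hedged sketch (PyTorch): a cascade of temporal convolutions over a
# multi-band time series followed by a dense classifier, TempCNN-style.
import torch
import torch.nn as nn

class TempCNN(nn.Module):
    def __init__(self, n_bands=7, n_times=24, n_classes=14, filters=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_bands, filters, 5, padding=2), nn.ReLU(),
            nn.Conv1d(filters, filters, 5, padding=2), nn.ReLU(),
            nn.Conv1d(filters, filters, 5, padding=2), nn.ReLU(),
        )
        self.dense = nn.Sequential(
            nn.Flatten(), nn.Linear(filters * n_times, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):              # x: (B, n_bands, n_times)
        return self.dense(self.conv(x))

batch = torch.rand(8, 7, 24)           # e.g. 7 bands, 24 dates per year
print(TempCNN()(batch).shape)          # (8, 14)
```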
As an example of our claim that *sits* can be used for land use and land cover change mapping, the paper by Simoes et al. [7] describes an application of *sits* to produce a one-year land use and cover classification of the Cerrado biome in Brazil using Landsat-8 images. The Cerrado is the second largest biome in Brazil, with 1.9 million km². It is a tropical savanna ecoregion with a rich ecosystem ranging from grasslands to woodlands. The Brazilian Cerrado is covered by 51 Landsat-8 tiles available in the Brazil Data Cube (BDC) [8]. The one-year classification period ranges from September 2017 to August 2018, following the agricultural calendar. The temporal interval is 16 days, resulting in 24 images per tile. The total input data size is about 8 TB. Training data consisted of 48,850 samples divided into 14 classes. The data set was used to train a TempCNN model [2]. After the classification, we applied Bayesian smoothing to the probability maps and then generated a labelled map by selecting the most likely class for each pixel. The classification was executed on an Ubuntu server with 24 cores and 128 GB memory. Each Landsat-8 tile was classified in an average of 30 min, and the total classification took about 24 h. The overall accuracy of the classification was 0.86.
The *sits* API provides a simple and powerful environment for land classification. Processing and handling large image collections does not require knowledge of parallel programming tools. The package provides support for deep learning models that have been tested and validated in the scientific literature and are not available in environments such as Google Earth Engine. The package is therefore an innovative contribution to big Earth observation data analysis.
The package is available on Github at https://github.com/e-sensing/sits. The software is licensed under the GNU General Public License v2.0. Full documentation of the package is available at https://e-sensing.github.io/sitsbook/.
References
[1] T. Chen and C. Guestrin, “XGBoost: A Scalable Tree Boosting System,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, (New York, NY, USA), pp. 785–794, Association for Computing Machinery, 2016.
[2] C. Pelletier, G. I. Webb, and F. Petitjean, “Temporal Convolutional Neural Network for the Classification of Satellite Image Time Series,” Remote Sensing, vol. 11, no. 5, 2019.
[3] H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller, “Deep learning for time series classification: A review,” Data Mining and Knowledge Discovery, vol. 33, no. 4, pp. 917–963, 2019.
[4] H. Fawaz, B. Lucas, G. Forestier, C. Pelletier, D. F. Schmidt, J. Weber, G. I. Webb, L. Idoumghar, P.-A. Muller, and F. Petitjean, “InceptionTime: Finding AlexNet for time series classification,” Data Mining and Knowledge Discovery, vol. 34, no. 6, pp. 1936–1962, 2020.
[5] M. Picoli, G. Camara, I. Sanches, R. Simoes, A. Carvalho, A. Maciel, A. Coutinho, J. Esquerdo, J. Antunes, R. A. Begotti, D. Arvor, and C. Almeida, “Big earth observation time series analysis for monitoring Brazilian agriculture,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 145, pp. 328–339, 2018.
[6] M. C. A. Picoli, R. Simoes, M. Chaves, L. A. Santos, A. Sanchez, A. Soares, I. D. Sanches, K. R. Ferreira, and G. R. Queiroz, “CBERS data cube: A powerful technology for mapping and monitoring Brazilian biomes.,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. V-3-2020, pp. 533–539, Copernicus GmbH, 2020.
[7] R. Simoes, G. Camara, G. Queiroz, F. Souza, P. R. Andrade, L. Santos, A. Carvalho, and K. Ferreira, “Satellite Image Time Series Analysis for Big Earth Observation Data,” Remote Sensing, vol. 13, no. 13, p. 2428, 2021.
[8] K. Ferreira, G. Queiroz, G. Camara, R. Souza, L. Vinhas, R. Marujo, R. Simoes, C. Noronha, R. Costa, J. Arcanjo, V. Gomes, and M. Zaglia, “Using Remote Sensing Images and Cloud Services on AWS to Improve Land Use and Cover Monitoring,” in LAGIRS 2020: 2020 Latin American GRSS & ISPRS Remote Sensing Conference, (Santiago, Chile), 2020.
One prominent application of remote sensing (RS) imagery is land use / land cover (LULC) classification. Machine learning, and deep learning (DL) in particular, have been widely adopted by the community to address LULC classification problems. A particular problem class is multi-label LULC scene categorization, set up as an RS image scene classification problem, for which DL shows excellent performance.
In this work we use BigEarthNet, a large labeled dataset based on single-date Sentinel-2 patches, for multi-label, multi-class LULC classification, and rigorously benchmark DL models, analysing their overall performance in terms of both speed (training time and inference rate) and model simplicity with respect to LULC image classification accuracy. We put to the test state-of-the-art models, including Convolutional Neural Networks (CNNs), Multi-Layer Perceptrons, Vision Transformers, EfficientNets and Wide Residual Networks (WRNs).
In addition, we design and scale a new family of lightweight architectures with very few parameters compared to typical CNNs, based on Wide Residual Networks following the EfficientNet scaling paradigm. We propose a WideResNet model enhanced with an efficient channel attention mechanism, which achieves the highest f-score in our benchmark. With respect to the ResNet50 state-of-the-art model that we use as a baseline, our model achieves a 4.5% higher averaged f-score across all 19 LULC classes and trains two times faster.
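An efficient channel attention block of this kind can be sketched as follows in PyTorch (in the spirit of ECA-style attention); the kernel size and placement are illustrative and not necessarily those used in our model.

```python
# Hedged sketch (PyTorch): global average pooling followed by a light 1-D
# convolution across channels and a sigmoid gate re-weighting feature maps.
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):                                  # x: (B, C, H, W)
        y = self.pool(x).squeeze(-1).transpose(1, 2)       # (B, 1, C)
        y = self.conv(y).transpose(1, 2).unsqueeze(-1)     # (B, C, 1, 1)
        return x * self.gate(y)                            # channel re-weighting

features = torch.rand(2, 64, 32, 32)
print(EfficientChannelAttention()(features).shape)        # (2, 64, 32, 32)
```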
Our findings imply that efficient lightweight deep learning models that are fast to train can, when appropriately scaled for depth, width and input data resolution, provide comparable and even higher image classification accuracies. This is especially important in remote sensing, where the volume of data coming from the Sentinel family and other satellite platforms is very large and constantly increasing.
Papoutsis, I., Bountos, N.I., Zavras, A., Michail, D. and Tryfonopoulos, C., 2021. Efficient deep learning models for land cover image classification. arXiv preprint arXiv:2111.09451.
Illegal, unreported, and unregulated fishing vessels pose a huge risk to the sustainability of fishing stocks and marine ecosystems, and also play a part in heightening political tensions around the globe (Long et al., 2020), both in national and international waters. Annual global losses are estimated at between US$10 billion and US$23.5 billion, and this figure is even higher when impacts across the value chain and the ecosystems are taken into account. Illegal fishing is often organized internationally across multiple jurisdictions, and as a consequence the economic value from these catches leaves the local communities where it would otherwise belong.
The identification of illegal fishing vessels is a hard problem that in the past required either data from an Automatic Identification System (AIS) (Longépé et al., 2018) or short-range methods such as acoustic telemetry (Tickler et al., 2019). For vessel presence detection, SAR imagery has proven to be a reliable method when combined with traditional computer vision algorithms (Touzi et al., 2004; Tello et al., 2005) and, more recently, neural networks (Chang et al., 2019; Li et al., 2017). Its big advantage over other methods is that it is applicable in all weather conditions and does not require cooperation from the ships. The biggest hurdle in developing effective identification of illegal vessels has been the lack of high-resolution, reliably labeled data, as modern neural network based methods rely on an abundance of data for dependable predictions.
The newly released xView3 dataset (xView3 Dark Vessel Detection Challenge, 2021) and the complementary challenge provide an excellent testing ground for adapting neural network based object detection methods to SAR-based dark vessel detection. The open-source dataset contains over 1000 scenes of maritime regions of interest, with VV and VH SAR data from the European Space Agency's Sentinel-1 satellites, bathymetry, wind speed, wind direction, wind quality, land/ice masks, and accompanying hand-corrected vessel labels.
Our goal is to find an accurate and practical detection method for dark vessel identification. To achieve this we adapt two popular object detection architectures, Faster R-CNN (Ren et al., 2015) and YOLOv3 (Redmon et al., 2018), to the xView3 data, together with pre- and post-processing steps. The specific architectures are chosen so that the robustly high performance of Faster R-CNN can serve as a baseline, while YOLOv3 is considered a good compromise between computational complexity and performance, and is thus expected to improve practical usability in near real-time use cases.
Domain-specific adaptations to the architectures (such as adapting augmentation methods to SAR data, adjusting anchor sizes, and resizing parts of the network to accommodate the smaller number of input channels and the smaller output predictions) are expected to show a significant increase in performance, based on preliminary results and past experiments; a sketch of such an adaptation is given below. We perform both quantitative and qualitative evaluation of the outputs, and an ablation study to quantify the effectiveness of different parts of the processing pipelines.
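As an illustration of such an adaptation, the following sketch (PyTorch/torchvision, recent versions assumed) builds a Faster R-CNN with a two-channel input for the VV/VH polarizations and small anchor sizes suited to vessel-sized objects; the concrete backbone, anchor sizes and normalisation constants are placeholders, not the values used in our experiments.

    import torch
    import torchvision
    from torchvision.models.detection import FasterRCNN
    from torchvision.models.detection.rpn import AnchorGenerator
    from torchvision.ops import MultiScaleRoIAlign

    # Backbone with the first convolution changed from 3 to 2 input channels (VV, VH).
    resnet = torchvision.models.resnet18(weights=None)
    resnet.conv1 = torch.nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
    backbone = torch.nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool and fc
    backbone.out_channels = 512                                    # output channels of ResNet-18

    # Small anchors, since vessels cover only a few pixels in Sentinel-1 GRD scenes.
    anchor_generator = AnchorGenerator(sizes=((8, 16, 32, 64),),
                                       aspect_ratios=((0.5, 1.0, 2.0),))
    roi_pooler = MultiScaleRoIAlign(featmap_names=['0'], output_size=7, sampling_ratio=2)

    model = FasterRCNN(backbone,
                       num_classes=2,                   # vessel / background
                       rpn_anchor_generator=anchor_generator,
                       box_roi_pool=roi_pooler,
                       image_mean=[0.0, 0.0],           # two-channel normalisation placeholders
                       image_std=[1.0, 1.0])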
References:
Chang, Y.-L., Anagaw, A., Chang, L., Wang, Y. C., Hsiao, C.-Y., Lee, W.-H., 2019. Ship detection based on YOLOv2 for SAR imagery. Remote Sensing, 11(7), 786.
Li, J., Qu, C., Shao, J., 2017. Ship detection in SAR images based on an improved Faster R-CNN. 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), IEEE, 1–6.
Long, T., Widjaja, S., Wirajuda, H., Juwana, S., 2020. Approaches to combatting illegal, unreported and unregulated fishing. Nature Food, 1(7), 389–391.
Longépé, N., Hajduch, G., Ardianto, R., de Joux, R., Nhunfat, B., Marzuki, M. I., Fablet, R., Hermawan, I., Germain, O., Subki, B. A. et al., 2018. Completing fishing monitoring with spaceborne Vessel Detection System (VDS) and Automatic Identification System (AIS) to assess illegal fishing in Indonesia. Marine pollution bulletin, 131, 33–39.
Redmon, J., Farhadi, A., 2018. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Ren, S., He, K., Girshick, R., Sun, J., 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 91–99.
Tello, M., López-Martínez, C., Mallorqui, J. J., 2005. A novel algorithm for ship detection in SAR imagery based on the wavelet transform. IEEE Geoscience and remote sensing letters, 2(2), 201–205.
Tickler, D. M., Carlisle, A. B., Chapple, T. K., Curnick, D. J., Dale, J. J., Schallert, R. J., Block, B. A., 2019. Potential detection of illegal fishing by passive acoustic telemetry. Animal Biotelemetry, 7(1), 1–11.
Touzi, R., Charbonneau, F., Hawkins, R., Vachon, P., 2004. Ship detection and characterization using polarimetric SAR. Canadian Journal of Remote Sensing, 30(3), 552–559.
xView3 Dark Vessel Detection Challenge, 2021. https://iuu.xview.us/. Accessed: 2021-11-26.
EO-AI4GlobalChange: Earth Observation Big Data and AI for Global Environmental Change Monitoring
Our planet is facing unprecedented environmental challenges including rapid urbanization, deforestation, pollution, loss of biodiversity, rising sea-level, melting glacier and climate change. During recent years, the world also witnessed numerous natural disasters, from droughts, heat waves and wildfires to flooding, hurricanes and earthquakes, killing thousands and causing billions of dollars in property and infrastructural damages. In this research, we will focus on two of the major global environmental challenges: urbanization and wildfires.
The pace of urbanization has been unprecedented. Rapid urbanization poses significant social and environmental challenges, including sprawling informal settlements, increased pollution, urban heat island, loss of biodiversity and ecosystem services, and making cities more vulnerable to disasters. Therefore, timely and accurate information on urban changing patterns is of crucial importance to support sustainable and resilient urban planning and monitoring of the UN 2030 Urban Sustainable Development Goal (SDG).
Due to human-induced climate change, the world has witnessed many devastating wildfires in recent years. Hotter summers and drought across northern Europe and North America have resulted in increased wildfire activity in cooler and wetter regions such as Sweden and Siberia, even north of the Arctic Circle. Wildfires kill and displace people, damage property and infrastructure, burn vegetation, threaten biodiversity, increase CO2 emissions and pollution, and cost billions to fight. Therefore, early detection of active fires and near real-time monitoring of wildfire progression are critical for effective emergency management and decision support.
With its synoptic view and large-area coverage at regular revisits, satellite remote sensing has been playing a crucial role in monitoring our changing planet. Earth observation (EO) satellites are now acquiring massive amounts of imagery with higher spatial resolution and frequent temporal revisits. These EO big data offer a great opportunity to develop innovative methodologies for urban mapping, continuous urban change detection and near real-time wildfire monitoring.
The overall objective of this project is to develop novel and globally applicable methods, based on EO big data and AI, for global environmental change monitoring focusing on urbanization and wildfires. Open and free Sentinel-1 SAR and Sentinel-2 time series will be used to demonstrate the new deep learning-based methods in selected cities around the world, and in various wildfire sites across the globe. As the fastest-growing trend in big data analytics, deep learning has been increasingly used in EO applications. Deep learning solutions for semantic segmentation work very well when there is labelled training data covering the diversity and changes that will be encountered at test time. Performance deteriorates, however, when test data are dissimilar to the labelled training data. Therefore, it is necessary to develop and build on state-of-the-art training procedures and network architectures that generalize better to conditions not represented in the labelled training data. In this research, both semi-supervised learning with Domain Adaptation (DA) and self-supervised learning with contrastive learning have been investigated. In addition, Transformer networks are being investigated for their ability to enable long-range attention, which makes the Transformer encoder well suited to processing sequence data.
For urban mapping, the results show that the Domain Adaptation (DA) approach with fusion of Sentinel-1 SAR and Sentinel-2 MSI data can produce highly detailed built-up extraction with improved accuracy over sixty sites around the world. For continuous change detection, a Transformer network is being investigated using the SpaceNet-7 dataset, and the SpaceNet-7 winner's solution will be compared with our Transformer-based solution. For wildfire monitoring, both on-the-fly training and semi-supervised transfer learning trained on burned areas in Canada and the U.S. have been implemented. Validations are being conducted on major 2021 wildfires in Greece, British Columbia (Canada) and California (U.S.). The results will be presented at the Living Planet Symposium.
This research aims to contribute to 1) advancing EO science, technology and applications beyond the state of the art, 2) providing timely and reliable urban information to support sustainable and resilient planning, 3) supporting effective emergency management and decision-making during wildfires, and 4) measuring and monitoring several indicators for UN SDG 11: Sustainable Cities and Communities, SDG 13: Climate Action and SDG 15: Life on Land.
Our understanding of the Earth's functional biodiversity and its imprint on ecosystem functioning is still incomplete. Large-scale information on functional ecosystem properties ('plant traits') is thus urgently needed to assess functional diversity and better understand biosphere-environment interactions. Optical remote sensing, and particularly hyperspectral data, offers a powerful tool to map these biophysical properties. Such data enable repeatable and non-destructive measurements at different spatial and temporal scales over continuous narrow bands and using numerous platforms and sensors. The advent of the upcoming space-borne imaging spectrometers will provide an enormous amount of data that opens the door to exploring data-driven methods for processing and analysis. However, we still lack efficient and accurate methods to translate hyperspectral reflectance into information on biophysical properties across plant types, environmental gradients and sensor types. In this regard, Deep Learning (DL) techniques are revolutionizing our capabilities to exploit large data sets given their flexibility and efficiency in detecting features and their complex and hierarchical relationships. Accordingly, Convolutional Neural Networks (CNNs) are expected to have the potential to provide transferable predictive models of biophysical properties at the canopy scale from spectroscopy data. On the other hand, the absence of globally representative data sets and the gap between the available reflectance data and the corresponding in-situ measurements have hampered such analyses until now. In recent years, several initiatives from the scientific community (e.g. EcoSIS) have contributed a constantly growing source of hyperspectral reflectance and plant trait data encompassing different plant types and sensors. However, these data are too sparse to fit any model directly because of missing values. In the present study, we demonstrate a weakly supervised approach to enrich these data sets using gap-filling strategies. Based on these data, we investigate different multi-output Deep Learning (DL) architectures in the form of an end-to-end workflow that predicts multiple biophysical properties at once. Based on a 1D-CNN, the model exploits the internal correlation between multiple traits and hence improves predictions. In the study, we target a varied set of plant properties including pigments, structural traits (e.g. LAI), water content, nutrients (e.g. nitrogen) and leaf mass per area (LMA). The preliminary results of the mapping model across a broad range of vegetation types (crops, forest, tundra, grassland) are promising and outperform shallow machine learning approaches (e.g. Partial Least Squares Regression (PLSR), Random Forest regression) that can only predict individual traits. The model learned distinguishable and generalized features despite the high variability in the data sets used. The key contribution of this study is to highlight the potential of weakly supervised approaches together with Deep Learning to overcome the scarcity of in-situ measurements and take a step forward in creating efficient predictive models of multiple biophysical properties of the Earth.
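As a minimal sketch of the multi-output idea (Keras, with assumed band and trait counts), a 1D-CNN can map a single reflectance spectrum to several traits at once, so that the shared convolutional features exploit the correlation between traits:

    from tensorflow.keras import layers, models

    n_bands, n_traits = 1720, 8      # assumed numbers of spectral bands and target traits

    # Multi-output 1D-CNN: one reflectance spectrum in, several plant traits out at once.
    model = models.Sequential([
        layers.Input(shape=(n_bands, 1)),
        layers.Conv1D(32, 7, activation='relu'),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 7, activation='relu'),
        layers.MaxPooling1D(2),
        layers.GlobalAveragePooling1D(),
        layers.Dense(128, activation='relu'),
        layers.Dense(n_traits),      # e.g. chlorophyll, LAI, water content, nitrogen, LMA, ...
    ])
    model.compile(optimizer='adam', loss='mse')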
Global-scale maps provide a variety of ecologically relevant environmental variables to researchers and decision makers. Usually, these maps are created by training a machine learning algorithm on field-sampled reference samples and applying the resulting model to associated remote sensing based information from satellite imagery or globally available environmental predictors. This approach is based on the assumption that the predictors are a representation of the environment and that the machine learning model can learn the statistical relationships between the environment and the target variable from the reference data.
Since field samples are often sparse and clustered in geographic space, machine learning based mapping requires that models are transferred to regions where no training samples are available. Further, machine learning models are prone to overfitting to the specific environments they are trained on, which can further contribute to poor model generalization. Consequently, model validations have to include an analysis of the model's transferability to regions where no training samples are available, e.g. by computing the Area of Applicability (AOA, Meyer and Pebesma 2021).
Here we present a workflow to optimize the transferability of machine learning based global spatial prediction models. The workflow utilizes spatial variable selection in order to train generalized models which include only predictors that are most suitable for predictions in regions without training samples.
To evaluate the proposed workflow we reproduced three recently published global environmental maps (global soil nematode abundances, potential tree cover and specific leaf area) and compared the outcomes to the original studies in terms of prediction performance. We additionally assessed the transferability of our models based on the AOA and concluded that by reducing the predictors to those relevant for spatial prediction, we could greatly increase the AOA of the models with negligible decrease of the prediction quality.
Literature:
Meyer, H. & Pebesma, E. Predicting into unknown space? Estimating the area of applicability of spatial prediction models. Methods in Ecology and Evolution (2021). doi:10.1111/2041-210X.13650.
Machine learning algorithms have become very popular for spatial mapping of the environment, even on a global scale. Model training is usually based on limited field observations and the trained model is applied to make predictions far beyond the geographic location of these data, assuming that the learned relationships still hold. However, while the algorithms allow fitting complex relationships, this comes with the disadvantage that trained models can only be applied to new data if these resemble the training data. Since new geographic space often goes along with new environmental properties, this often cannot be ensured, and predictions for unsampled environments have to be considered highly uncertain.
We suggest a methodology that delineates the ‘area of applicability’ (AOA) that we define as the area where we enabled the model to learn about relationships based on the training data, and where the estimated cross-validation performance holds. We first propose a ‘dissimilarity index’ (DI) that is based on the minimum distance to the training data in the multidimensional predictor space, with predictors being weighted by their respective importance in the model. The AOA is derived by applying a threshold which is the maximum DI of the training data derived via cross-validation. We further use the relationship between the DI and the cross-validation performance to map the estimated performance of predictions. To illustrate the approach, we present a simulated case study of biodiversity mapping and compare prediction performance inside and outside the AOA.
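A minimal sketch of the DI and AOA computation (NumPy/SciPy), assuming standardized predictors weighted by their model importance; the published method derives the threshold from cross-validation folds, which is simplified here to a leave-one-out distance within the training data:

    import numpy as np
    from scipy.spatial.distance import cdist

    def dissimilarity_index(X_train, X_new, importance):
        # Standardize with training statistics and weight by variable importance.
        mean, std = X_train.mean(axis=0), X_train.std(axis=0)
        Xt = (X_train - mean) / std * importance
        Xn = (X_new - mean) / std * importance
        # Average pairwise distance within the training data (normalisation term).
        d_train = cdist(Xt, Xt)
        d_bar = d_train[np.triu_indices_from(d_train, k=1)].mean()
        # DI of new points: minimum distance to the training data, normalised.
        di_new = cdist(Xn, Xt).min(axis=1) / d_bar
        # DI of the training points themselves (leave-one-out simplification).
        np.fill_diagonal(d_train, np.inf)
        di_train = d_train.min(axis=1) / d_bar
        # AOA: new points whose DI does not exceed the maximum training DI.
        return di_new, di_new <= di_train.max()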
We suggest adding the AOA computation to the modeller's standard toolkit and limiting predictions to this area. The (global) maps that we create using remote sensing, field data and machine learning are not just nice colorful figures; they are also distributed digitally, often as open data, and are used for decision-making or planning, e.g. in the context of nature conservation, with high requirements on quality. To avoid large error propagation or misplanning, it should be the obligation of the map developer to clearly communicate the limitations, towards more reliable EO products.
Within the past decade, modern statistical and machine learning methods significantly advanced the field of computer vision. For a significant portion, success stories trace back to training deep artificial neural networks on massive amounts of labeled data. However, generating labor-intensive human annotations for the ever-growing volume of earth observation data at scale becomes a Sisyphean task.
In the realm of weakly-supervised learning, methods operating on sparse labels attempt to exploit a small set of annotated data in order to train models for inference on the full domain of input. Our work presents a methodology to utilize high resolution geospatial data for semantic segmentation of aerial imagery. Specifically, we exploit high-quality LiDAR measurements to automatically generate a set of labels for urban areas based on rules defined by domain experts. The top of the attached figure provides a visual sample of such automated classifications in suburbs: vegetation (dark madder purple), roads (lime green), buildings (dark green), and bare land (yellow).
A challenge to the approach of auto-generated labels is the introduction of noise due to inaccurate label information. Through benchmarks and improved architecture design of the deep artificial neural networks, we provide insights into the successes and limitations of our approach. Remarkably, we demonstrate that models trained on inaccurate labels have the ability to surpass annotation quality when referenced to ground truth information (cf. bottom of the attached figure).
Moreover, we investigate boosting of results when weak labels get auto-corrected by domain expert-based noise reduction algorithms. We propose technology interacting with deep neural network architectures that allows human expertise to re-enter weakly supervised learning at scale for semantic segmentation in earth observation. Beyond the presentation of results, our contribution @LPS22 intends to start a vital scientific discussion on how the approach substantiated for LiDAR-based automatic annotation might get extended to other modalities such as hyper-spectral overhead imagery.
The estimation of Root-Zone Soil Moisture (RZSM) is important for meteorological, hydrological and, above all, agricultural applications. For instance, RZSM constitutes the main water reservoir for crops. Moreover, knowledge of this soil moisture component is crucial for the study of geophysical processes such as water infiltration and evaporation. Remote sensing techniques, namely active and passive microwave, can retrieve surface soil moisture (SSM). However, no current spaceborne sensor can directly measure RZSM because of their shallow penetration depth. Proxy observations like water storage change or vegetation stress can help retrieve spatial maps of RZSM. Land surface models (LSM) and data assimilation techniques can also be used to estimate RZSM. In addition to these methods, data-driven methods have been widely used in hydrology and specifically in RZSM prediction. In a previous study (Souissi et al. 2020), we demonstrated that Artificial Neural Networks (ANN) can be used to derive RZSM from SSM alone. But we also found limitations in very dry regions where there is a disconnection between the surface and the root zone because of high evaporation rates.
In this study, we investigated the use of surface soil moisture and process-based features in the context of ANN to predict RZSM. The infiltration process was taken into account as a feature through the use of the recursive exponential filter and its soil water index (SWI). The recursive exponential filter formulation has been widely used to derive root zone soil moisture from surface soil moisture as an approximation of a land surface model. Here, we use it only to derive an input feature to the ANN.
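A minimal sketch of this SWI feature (NumPy), using the standard recursive formulation; the characteristic time length T is a free parameter and the value here is only an assumption:

    import numpy as np

    def soil_water_index(ssm, t, T=20.0):
        """Recursive exponential filter: a surface soil moisture series `ssm` observed
        at times `t` (in days) is smoothed into a Soil Water Index with time constant T."""
        swi = np.empty(len(ssm))
        swi[0] = ssm[0]
        K = 1.0                                        # filter gain
        for n in range(1, len(ssm)):
            K = K / (K + np.exp(-(t[n] - t[n - 1]) / T))
            swi[n] = swi[n - 1] + K * (ssm[n] - swi[n - 1])
        return swi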
As for the evaporation process, we integrated a remote sensing-based evaporative efficiency variable in the ANN model. A very popular formulation of this variable, defined as the ratio of actual to potential soil evaporation, was introduced in (Noilhan and Planton, 1989) and (Lee and Pielke, 1992). We based our work on a new analytical expression, suggested for instance in (Merlin et al., 2010), and replaced potential evaporation by potential evapotranspiration that we extracted from the Moderate Resolution Imaging Spectroradiometer (MODIS) Evapotranspiration/Latent Heat Flux product.
The vegetation dynamics were considered through the use of remotely sensed Normalized Difference Vegetation Index (NDVI) from MODIS.
In-situ surface soil temperature, provided by the International Soil Moisture Network (ISMN), was also used. Different ANN models were developed, each assessing the impact of adding a certain process-based feature to the SSM information. The training soil moisture data are provided by the ISMN and are distributed over several areas of the globe with different soil and climate characteristics. An additional test was conducted using soil moisture sensors not integrated into the ISMN database, over the Kairouan Plain, a semi-arid region in central Tunisia covering an area of more than 3000 km2 and part of the Merguellil watershed.
The results show that the RZSM prediction accuracy increases under specific climate conditions depending on the process-based features used. For instance, in arid areas where the 'BWh' climate class (hot desert) prevails, such as the eastern and western sides of the USA and bare areas of Africa, the most informative feature is evaporative efficiency. In areas of continental Europe and around the Mediterranean Basin where there are agricultural fields, NDVI is the most relevant indicator for RZSM estimation.
The best predictive capacity is given by the ANN model in which surface soil moisture, NDVI, the recursive exponential filter and evaporative efficiency are combined. 61.68% of the ISMN test stations show an increase in correlation values with this model compared to the model using only SSM as input. The performance improvement can also be highlighted through the example of the Tunisian sites (five stations), where the mean correlation of the predicted RZSM based on SSM only increases strongly from 0.44 to 0.8 when the process-based features are integrated into the ANN model in addition to SSM.
The ability of the developed model to predict RZSM over larger areas will be assessed in the future.
To monitor forests and estimate above-ground biomass at national to global scales, remote sensing data have been widely used. However, due to their coarse resolution (hundreds of trees present within one pixel), it is costly to collect ground reference data. Thus, an automatic biomass estimation method at the individual tree level using high-resolution remote sensing data (such as LiDAR) is of great importance. In this paper, we explore estimating tree biomass from a single parameter, tree height, using a Gaussian process regressor. We collected a dataset of 8342 records, in which each individual tree's height (in m), diameter (in cm), and biomass (in kg) are measured. In addition, the Jucker data with crown diameter measurements are also used. The datasets cover eight dominant biomes. Using these data, we compared five candidate biomass estimation models, including three single-parameter biomass-height models (the proposed Gaussian process regressor, random forest, and a linear model in log-log scale) and two two-parameter models (a biomass-height-crown diameter model and a biomass-height-diameter model). Results showed a high correlation between biomass and height as well as diameter, and the biomass-height-diameter model has low biases of 0.08 and 0.11 and high R-squared scores of 0.95 and 0.78 on the two datasets, respectively. The biomass-height-crown diameter model has a median performance, with an R-squared score of 0.66, a bias of 0.26, and a root mean square error of 1.11 Mg. Although the biomass-height models are less accurate, the proposed Gaussian process regressor performs better than the linear log-log model and the random forest (R-squared: 0.66, RMSE: 4.95 Mg; bias: 0.34). The results also suggest that non-linear models have an advantage over the linear model in reducing uncertainty when tree biomass is either large (> 1 Mg) or small (< 10 kg).
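A minimal sketch of the single-parameter biomass-height model (scikit-learn), fitted in log-log space; the few data points shown are hypothetical and only illustrate the interface:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical measurements: tree height (m) and above-ground biomass (kg).
    height = np.array([5.2, 8.1, 12.4, 20.3, 27.5])
    biomass = np.array([14.0, 55.0, 210.0, 980.0, 2600.0])

    # Fit in log-log space, since allometric relations are close to power laws.
    X = np.log(height).reshape(-1, 1)
    y = np.log(biomass)

    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # Predicted log-biomass and its uncertainty for a 15 m tall tree.
    mean, std = gpr.predict(np.log([[15.0]]), return_std=True)
    print(np.exp(mean), std)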
Satellite radar altimetry is a powerful technique for measuring sea surface height variations, with a wide range of applications in, e.g., operational oceanography or climate research. However, estimating coastal sea-level change from satellite altimetry is challenging due to land influence on the estimated sea surface height (SSH), significant wave height (SWH), and backscatter. Various algorithms exist that allow retrieving meaningful estimates up to the coast. The Spatio-Temporal Altimetry Retracker (STAR) algorithm partitions the total return signal into individual sub-signals, which are then processed, leading to a point cloud of potential estimates for each of the three parameters; these tend to cluster around the true values, e.g., the real sea surface. The STAR algorithm interprets each point cloud as a weighted directed acyclic graph (DAG). The spatio-temporal ordering of the potential estimates induces a sequence of connected vertex layers, where each layer is fully connected to the next with weighted edges. The edge weights are based on a chosen distance measure between the vertices, i.e., estimates. Finally, the STAR algorithm selects the estimates by searching the shortest path through the DAG using forward traversal in topological order. This approach includes the inherent assumption that neighboring SSH, SWH, and backscatter estimates should be similar.

A significant drawback of the original STAR approach is that the point clouds for the three parameters, SSH, SWH, and backscatter, can only be treated individually, since the applied standard shortest path approach cannot handle multiple edge weights. Hence, the output of the STAR algorithm for each parameter does not necessarily correspond to the same sub-signal, which prevents the algorithm from providing physically mutually consistent estimates of SSH, SWH, and backscatter.

With mSTAR, we find coherent estimates that take the weightings of two or three point clouds into account by employing multicriteria shortest path computation. An essential difference between the single and multicriteria shortest path problems is that there are, in general, a multitude of Pareto-optimal solutions in the latter. A path is Pareto-optimal if there is no other path that is strictly shorter for all criteria. The number of Pareto-optimal paths can be exponential in the input size, even if the considered graph is a DAG. There are different common ways to tackle this complexity issue. A simple approach is the weighted sum scalarization method: the objective functions are weighted and combined into a single objective function, such that a single-criterion shortest path algorithm can find a Pareto-optimal path. However, even though different Pareto-optimal solutions can be obtained by varying the weights, it is usually impossible to find all Pareto-optimal solutions this way. In order to find all Pareto-optimal paths, label-correcting or label-setting algorithms can be used, which can also be sped up using various approximation techniques. The mSTAR framework supports scalarization and labeling techniques as well as exact and approximate algorithms for computing Pareto-optimal paths. This way, mSTAR can find multicriteria-consistent estimates of SSH, SWH, and backscatter.
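To make the scalarization idea concrete, the following sketch (plain Python, hypothetical interface) traverses a layered DAG in topological order after combining two edge criteria into a single weighted cost; label-setting algorithms, which enumerate all Pareto-optimal paths, are not shown:

    def dag_shortest_path(layers, cost, weights=(0.5, 0.5)):
        """Shortest path through a layered DAG after weighted-sum scalarization.
        `layers` is a list of vertex lists (the connected vertex layers), and
        `cost(u, v)` returns a tuple with the two edge criteria (hypothetical API)."""
        def combined(u, v):
            c1, c2 = cost(u, v)
            return weights[0] * c1 + weights[1] * c2

        dist = {v: (0.0, None) for v in layers[0]}       # vertex -> (distance, predecessor)
        for cur, nxt in zip(layers, layers[1:]):         # forward traversal, layer by layer
            for v in nxt:
                u_best = min(cur, key=lambda u: dist[u][0] + combined(u, v))
                dist[v] = (dist[u_best][0] + combined(u_best, v), u_best)

        # Backtrack from the best vertex of the last layer.
        end = min(layers[-1], key=lambda v: dist[v][0])
        path = [end]
        while dist[path[-1]][1] is not None:
            path.append(dist[path[-1]][1])
        return path[::-1]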
Full spatial coverage of albedo data is necessary for climate studies and modeling, but clouds and high solar zenith angles cause missing values in optical satellite products, especially around the polar areas. Therefore, we developed monthly gradient boosting (GB) based gap-filling models. We aim to apply them to the Arctic sea ice area of the 34-year-long albedo time series CLARA-A2 SAL (Surface ALbedo from the CLoud, Albedo and surface RAdiation data set) of the Satellite Application Facility on Climate Monitoring (CM SAF) project. GB models are used to fill missing data in albedo 5-day (pentad) means, using albedo monthly means, brightness temperature, and sea ice concentration data as model inputs. Monthly GB models produce the most unbiased, precise, and robust estimates when compared to alternative estimates (monthly mean albedo values used directly, or estimates from linear regression). The mean relative differences between GB-based estimates and original non-gapped pentad values vary from -20% to 20% (RMSE of 0.048), compared to relative differences varying from -20% to over 60% (RMSE varying from 0.054 to 0.074) between the other estimates and original non-gapped pentad values. Also, when comparing estimates from GB models to estimates from linear regression models over three smaller Arctic sea ice areas with varying annual surface albedo cycles (Hudson Bay, Canadian Archipelago and Lincoln Sea), the albedo of melting sea ice is predicted better by the GB models (with negligible mean differences). Gradient boosting is therefore a useful method to fill gaps in the Arctic sea ice area, and the brightness temperature and sea ice concentration data provide useful additional information to the monthly models.
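A minimal sketch of one such monthly gap-filling model (scikit-learn), with hypothetical random arrays standing in for the monthly mean albedo, brightness temperature and sea ice concentration predictors:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical training table from non-gapped pentads of one calendar month:
    # columns = [monthly mean albedo, brightness temperature, sea ice concentration].
    X = np.random.rand(10000, 3)
    y = np.random.rand(10000)                 # observed pentad-mean albedo

    gb = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=4)
    gb.fit(X, y)

    # Fill gaps: predict the pentad albedo where the optical retrieval is missing.
    X_gap = np.random.rand(500, 3)
    albedo_filled = gb.predict(X_gap)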
The occurrence of hazard events, such as floods, has recognized ecological and socioeconomic consequences for affected communities. Geospatial resources, including satellite-based synthetic aperture radar (SAR) and optical data, have been instrumental in providing time-sensitive information about the extent and impact of these events to support emergency response and hazard management efforts. In effect, finite resources can be better optimized to support the needs of often extensively affected areas. However, the derivation of SAR-based flood information is not without its challenges and inaccurate flood detection can result in non-trivial consequences. Consequently, in addition to segmentation maps, the inclusion of quantified uncertainties as easily interpretable probabilities can further support risk-based decision-making.
This pilot study presents the first results of two probabilistic convolutional neural networks (CNNs) adapted for SAR-based water segmentation with freely available Sentinel-1 Interferometric Wide (IW) swath Ground Range Detected (GRD) data. In particular, the performance of a variational inference-based Bayesian convolutional neural network (BCNN) is evaluated against that of a Monte Carlo Dropout Network (MCDN). MCDN has been more commonly applied as an approximation of Bayesian deep learning. Here we highlight the differences in the uncertainties identified in both models, based on the evaluation of an extended set of performance metrics to diagnose data and model behaviours and to evaluate ensemble outputs at tile- and scene-levels.
Since the understanding of uncertainty and subsequent derivation of uncertainty information can vary across applications, we demonstrate how uncertainties derived from ensemble outputs can be integrated into maps as a form of actionable information. Furthermore, map products are designed to reflect survey responses shared by end users from regional and international organizations, especially those working in emergency services and as operations coordinators. The findings of this study highlight how the consideration of both segmentation accuracy and probabilistic performance can build confidence in products used to make informed decisions to support emergency response within flood situations.
Understanding how regions of ice sheet damage are changing, and how their presence alters the physics of glaciers and ice shelves, is important in determining the future evolution of the Antarctic ice sheet. Ice dynamic processes are responsible for almost all (98%) of present-day ice mass loss in Antarctica (Slater et al., 2021), with ice fracturing and damage now known to play an important role in this process (Lhermitte et al., 2020). Though progress has been made, damage processes are not well integrated into realistic (as opposed to highly idealized) ice sheet models, and quantitative observations of damage are sparse.
In this study we use a UNet (similar to Lai et al., 2020) to automatically map crevasse-type features over the whole Antarctic coastline, using the full archive of synthetic aperture radar (SAR) imagery acquired by Sentinel-1. SAR data are well suited to the task of damage detection, as acquisitions are light- and weather-independent, and C-band radar can penetrate 1-10 m into the snowpack, depending on its composition, revealing the presence of snow-bridged crevasses. Our small version of UNet, trained on a sparse dataset of linear features, provides a pixel-level damage score for each Sentinel-1 acquisition. From this we produce an Antarctic-wide map of damage every 6 days, at 50 m resolution. This dataset is used to measure the changing structural properties of both the grounded ice sheet and the floating ice shelves of some of the largest glaciers in the world.
Due to the slow rate of change of the Antarctic ice sheet, simulations of its evolution over century timescales can be sensitive to errors in the prescribed initial conditions. We use our observations of damage to provide a more robust estimate of the initial state of the Antarctic ice sheet using the BISICLES ice sheet model. This type of model requires both an initial ice geometry, which can be observed directly, and model parameters, basal slipperiness C(x,y) and effective viscosity μ(x,y), which cannot. Both C(x,y) and μ(x,y) are typically found by solving an inverse problem, which is underdetermined. We use the damage observations to regularize the inverse problem by providing constraints on μ(x,y). This represents a step change in reducing the underdetermination of the inverse problem, giving us higher confidence in the initial conditions provided for simulations of the ice sheet as a whole.
[1] Lai, C.-Y., Kingslake, J., Wearing, M. G., Chen, P.-H. C., Gentine, P., Li, H., Spergel, J. J., and van Wessem, J. M.: Vulnerability of Antarctica's ice shelves to meltwater-driven fracture, Nature, 584, 574–578, 2020.
[2] Lhermitte, S., Sun, S., Shuman, C., Wouters, B., Pattyn, F., Wuite, J., Berthier, E., and Nagler, T.: Damage accelerates ice shelf instability and mass loss in Amundsen Sea Embayment, Proceedings of the National Academy of Sciences, 117, 24735–24741, https://doi.org/10.1073/pnas.1912890117, 2020.
[3] Slater, T., Lawrence, I. R., Otosaka, I. N., Shepherd, A., Gourmelen, N., Jakob, L., Tepes, P., Gilbert, L., and Nienow, P.: Earth’s ice imbalance, The Cryosphere, 15, 233–246, 2021.
In high mountain regions such as the Swiss Alps, the expansion of forest towards high altitudes is limited by extreme climatic conditions, particularly related to low temperatures, thunderstorms, or snow deposition and melting [1]. All these factors, together with human land use planning, shape the upper forest limit, which we refer to as the alpine treeline. The complex topography of such regions and the interplay of a large number of drivers make this boundary highly fragmented. Remote sensing-based land cover products tend to oversimplify these patterns due to insufficient resolution or to a need for excessively labor-intensive labeling. When higher resolution imagery is available, the accuracy of automated forest mapping methods tends to drop close to the treeline due to fuzzy forest boundaries and lower image quality caused by complex topography [3]. High-resolution maps of forest that are specifically tailored for the treeline ecotone are thus needed to accurately account for this complexity.
Mapping forest implies formulating a clear definition of forest. A large number of such definitions exist, most of them based on tree height and tree canopy density thresholds, but also spatial criteria (area, width/length), as well as structural form (e.g. shrubs) and land use. The position of the treeline can vary greatly depending on the chosen definition. While traditional machine learning methods are able to reach high accuracy with respect to the training labels, they do not provide additional information about underlying relevant variables and how they relate to the final map. For this reason, they are often referred to as ‘black boxes’. The results of such models are implicitly linked to a forest definition through the training labels, if those are accurate enough and based on a fixed definition, but spatially-explicit and disentangled concepts are missing to explain the model’s decisions in terms of forest definition.
To tackle the high-altitude forest mapping task, we propose a deep learning-based semantic segmentation method which uses optical aerial imagery at 25 cm resolution over the 1500-2500 m a.s.l. altitude range of the Swiss Alps and forest masks from the SwissTLM3D landscape model, which provides a spatially explicit, detailed characterization of different types of forest [2]. After proper training, the model yields a fine-grained binary forest/non-forest map, and is also able to classify the forest into three types (open forest, closed forest, shrub forest), despite noisy labels and heavy class imbalance. We obtain an overall F1 score above 90% with respect to the SwissTLM3D labels, both for the binary task and when the forest type classification is included in the task.
From this baseline model, we then developed an interpretable model which estimates intermediate forest definition variables for each pixel, explicitly applies a target forest definition and highlights systematic discrepancies between the target forest definition and the noisy training labels. These pixel-level explanations complement the resulting forest map, making the model’s decision process more transparent and closely related to relevant and widely-used variables characterizing Swiss forests.
References
[1] George P. Malanson, Lynn M. Resler, Maaike Y. Bader, Friedrich-Karl Holtmeier, David R. Butler, Daniel J. Weiss, Lori D. Daniels, and Daniel B. Fagre. Mountain Treelines: A Roadmap for Research Orientation. Arctic, Antarctic, and Alpine Research, 43(2):167–177, 5 2011.
[2] Swisstopo. SwissTLM3D. https://www.swisstopo.admin.ch/en/geodata/landscape/tlm3d.html, 2021. [Online; accessed 04.11.2021].
[3] Lars Waser, Christoph Fischer, Zuyuan Wang, and Christian Ginzler. Wall-to-Wall Forest Mapping Based on Digital Surface Models from Image-Based Point Clouds and a NFI Forest Definition. Forests, 6(12):4510–4528, 12 2015.
This work explores cloud detection on time series of Earth observation satellite images through deep learning methods. In the past years, machine learning based techniques have demonstrated excellent performance in classification tasks compared with threshold-based methods using the spectral characteristics of satellite images [1]. In this study, we use MSG/SEVIRI data acquired during one year with a 15-min temporal resolution over 13 landmarks distributed across different geographic locations with diverse properties and scenarios. In particular, we implement an end-to-end deep learning network which consists of a U-Net segmentation CNN [3] coupled to a long short-term memory (LSTM) layer [4], called ConvLSTM [2]. The network design aims to exploit simultaneously the spatial information contained in the images and the temporal dynamics of the time series in order to provide state-of-the-art classification results. Regarding the experimental results, we address several related problems. On the one hand, we provide a comparison of the proposed network with other standard baselines such as an ensemble of SVMs [5] and other recurrent models such as convRNN [6]. On the other hand, we want to validate the robustness of the proposed method by training with data from all the available landmarks except the landmark used for the evaluation. The network is then fine-tuned to measure its generalization and global fitness through the impact on the performance metrics. Other secondary objectives of the work consist of evaluating different training strategies for the implemented model through architecture modifications, e.g. measuring the impact of removing the batch normalization layers. Moreover, we have evaluated two different strategies for training the ConvLSTM. The standard way consists in training the full network from scratch at once. However, we achieve better performance with a two-phase training, i.e. training first the CNN part and then training the full network end-to-end starting from the CNN weights. The results provide interesting insights into the nature of the image time series and its relation to the network architecture and training.
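A minimal sketch of this kind of architecture (Keras), with a per-frame convolutional encoder followed by a ConvLSTM layer; the patch size, channel count and layer widths are assumed values, not the configuration used in the study:

    from tensorflow.keras import layers, models

    # Input: a time series of multispectral patches (time, height, width, channels).
    inputs = layers.Input(shape=(None, 64, 64, 11))        # patch size and channels assumed

    # Per-frame convolutional encoder applied independently to each time step.
    x = layers.TimeDistributed(layers.Conv2D(32, 3, padding='same', activation='relu'))(inputs)
    x = layers.TimeDistributed(layers.Conv2D(32, 3, padding='same', activation='relu'))(x)

    # ConvLSTM exploits the temporal dynamics of the landmark time series.
    x = layers.ConvLSTM2D(32, 3, padding='same', return_sequences=False)(x)

    # Per-pixel cloud probability for the last time step.
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy')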
Keywords: convolutional neural networks, CNN, LSTM, landmarks, MSG/SEVIRI, cloud detection.
Acknowledgements: This work was supported by the Spanish Ministry of Science and Innovation under the project PID2019-109026RB-I00.
References
[1] L. Gomez-Chova, G. Camps-Valls, J. Calpe, L. Guanter, and J. Moreno, “Cloud-screening algorithm for ENVISAT/MERIS multispectral images,” IEEE Trans. on Geoscience and Remote Sensing, vol. 45, no. 12, Part 2, pp. 4105–4118, Dec. 2007.
[2] Mateo-García, G., Adsuara, J. E., Pérez-Suay, A., & Gómez-Chova, L. (2019, July). Convolutional Long Short-Term Memory Network for Multitemporal Cloud Detection Over Landmarks. In IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium (pp. 210-213). IEEE.
[3] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in MICCAI. Oct. 2015, Lecture Notes in Computer Science, pp. 234–241, Springer, Cham.
[4] Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
[5] Pérez-Suay, A., Amorós-López, J., Gómez-Chova, L., Muñoz-Marí, J., Just, D., & Camps-Valls, G. (2018). Pattern recognition scheme for large-scale cloud detection over landmarks. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(11), 3977-3987.
[6] Turkoglu, M. O., D'Aronco, S., Perich, G., Liebisch, F., Streit, C., Schindler, K., & Wegner, J. D. (2021). Crop mapping from image time series: deep learning with multi-scale label hierarchies. arXiv preprint arXiv:2102.08820.
Recently, several groups have put significant effort into releasing consistent time-series data sets that represent our environmental history. Examples include the HILDAplus GLOBv-v1.0 land cover time series dataset (https://doi.org/10.1594/PANGAEA.921846), MODIS-AVHRR NDVI time-series 1982–2020 monthly values, TMF long-term (1990–2020) deforestation and degradation in tropical moist forests (https://forobs.jrc.ec.europa.eu/TMF/), TerraClimate (monthly historic climate) precipitation, mean, minimum and maximum temperature and snow cover (http://www.climatologylab.org/terraclimate.html), DMSP NTL time-series data (1992–2018) at 1-km spatial resolution (https://doi.org/10.6084/m9.figshare.9828827.v2), HYDE v3.2 land use annual time series 1982–2016 (occurrence fractions) at 10 km resolution (https://doi.org/10.17026/dans-25g-gez3), the Vegetation Continuous Fields (VCF5KYR) Version 1 dataset (https://lpdaac.usgs.gov/products/vcf5kyrv001/), daily global Snow Cover Fraction - viewable (SCFV) from AVHRR (1982–2019), version 1.0 (https://climate.esa.int/en/odp/#/project/snow), and the WAD2M global dataset of wetland area. We have combined, harmonized, gap-filled, and where necessary downscaled these datasets to produce a Spatiotemporal Earth-Science data Cube at 1-km resolution for 1982–2020, hosted as Cloud-Optimized GeoTIFFs via our www.OpenLandMap.org data portal. The data set covers all land on the planet and could be useful for any researcher modelling parts of the Earth system in the time frame 1982–2020.
We discuss the process of generating this data cube. We show examples of using geospatial packages like GDAL and the Python package rasterio to generate harmonized datasets. We discuss the feature engineering that was done to enhance the final product and demonstrate uses of these data for spatiotemporal machine learning, i.e. for fitting models to predict dynamic changes in target variables. For feature engineering we make use of the Python package eumap and optimize the process of computing features for large datasets. Eumap implements a parallelization approach by dividing large geospatial datasets into tiles and distributing the calculation per tile. In this way we are able to quickly generate new features from large datasets, ultimately helping machine learning models to find patterns in the data. The focus here will be on generating features that support modelling of systems influenced by processes that take multiple decades to develop, such as accumulated values for land use classes.
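The tiling idea can be illustrated with a small sketch (rasterio plus a thread pool; this only mimics the approach and is not the eumap API), computing one accumulated feature per tile of a hypothetical annual land use stack:

    import concurrent.futures
    import numpy as np
    import rasterio

    PATH = 'landuse_cropland_1982_2020.tif'    # hypothetical GeoTIFF with one band per year

    def tile_windows(width, height, size=1024):
        """Split a raster into rectangular read windows of roughly `size` pixels."""
        for row in range(0, height, size):
            for col in range(0, width, size):
                yield ((row, min(row + size, height)), (col, min(col + size, width)))

    def accumulated_feature(window):
        """Example feature: per-pixel sum over all bands, e.g. accumulated years of a land use class."""
        with rasterio.open(PATH) as src:
            stack = src.read(window=window).astype('float32')   # shape: (bands, rows, cols)
        return np.nansum(stack, axis=0)

    with rasterio.open(PATH) as src:
        windows = list(tile_windows(src.width, src.height))

    with concurrent.futures.ThreadPoolExecutor() as pool:
        tiles = list(pool.map(accumulated_feature, windows))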
To exemplify the usefulness of this data for processes that are subject to time frames of decades we present a case study where we model soil organic carbon globally. Especially the benefit of using features generated from long term land cover datasets such as HYDE and HILDA and combining them with reflectance for machine learning approaches will be discussed.
Finally, we hope this example of a harmonized and open source dataset can inspire more researchers to present data in a systematic and open source manner in the future.
In the scope of remote sensing retrieval techniques, methods based on deep learning algorithms have gained an important place in the scientific community. Multi-Layer Perceptron (MLP) Neural Networks (NN) have proven to provide good estimates of atmospheric parameters and to be more performant than classical retrieval methods, e.g. the Optimal Estimation Method (OEM), in terms of computational cost and the handling of non-linear models.
However, the most important drawback of current classical MLP techniques is that they do not provide uncertainty information on the retrieved parameters. In the atmospheric retrieval problem, not only the quantitative value of the computed parameter is important, but also the uncertainty associated with this estimate. The latter is essential for the exploitation of scientific products, for example their use in analysis and forecasting systems for atmospheric composition or dynamics. To address the uncertainty estimation issue, new MLP NNs have recently been developed, e.g. Bayesian Neural Networks (BNN) and Quantile Regression Neural Networks (QRNN).
The French National Centre for Space Studies (CNES) is therefore interested in developing NN methods and proving their feasibility for modelling the uncertainty associated with atmospheric variables, and more specifically in the retrieval of greenhouse gases, e.g. CO2 content, from infrared hyperspectral sounding instruments such as IASI, IASI-NG or OCO-2.
To this end, a QRNN has been implemented to estimate mid-tropospheric CO2 probability distributions for a synthetic set of brightness temperatures corresponding to selected channels of IASI and AMSU. These sets are representative of a wide range of atmospheric situations in the tropical zones of the globe, including extreme events.
The QRNN is then able to retrieve predicted probability intervals of the tropical mid-tropospheric CO2 column, in this case 11 quantile positions ranging from 0.05 to 0.95. Validations show a robust and well-calibrated neural network with accurate retrieval of the CO2 content and coherent associated uncertainty estimates for a wide set of brightness temperatures corresponding to a CO2 range between 396 and 404 ppmv. Indeed, the implemented QRNN associates a greater uncertainty with the most biased CO2 estimates. This performance criterion is of great importance for downstream applications that build on retrieval/inversion products, as it allows filtering out doubtful, i.e. uncertain, estimates and thus obtaining more accurate results, e.g. better assimilation products.
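A minimal sketch of the QRNN idea (Keras), training a small MLP with the pinball (quantile) loss over the 11 quantile positions mentioned above; the number of input channels and the layer sizes are assumptions:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    quantiles = np.linspace(0.05, 0.95, 11)           # the 11 quantile positions

    def pinball_loss(y_true, y_pred):
        """Quantile (pinball) loss averaged over all predicted quantiles."""
        q = tf.constant(quantiles, dtype=tf.float32)
        y_true = tf.reshape(y_true, (-1, 1))          # broadcast the target to all quantiles
        e = y_true - y_pred
        return tf.reduce_mean(tf.maximum(q * e, (q - 1.0) * e))

    # MLP mapping brightness temperatures (assumed 45 channels) to CO2 column quantiles.
    model = models.Sequential([
        layers.Input(shape=(45,)),
        layers.Dense(128, activation='relu'),
        layers.Dense(128, activation='relu'),
        layers.Dense(len(quantiles)),
    ])
    model.compile(optimizer='adam', loss=pinball_loss)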
Emulation of synthetic hyperspectral Sentinel-2-like images using Neural Networks
Miguel Morata, Bastian Siegmann, Adrian Perez, Juan Pablo Rivera Caicedo, Jochem Verrelst
Imaging spectroscopy provides unprecedented information for the evaluation of the environmental conditions in soil, vegetation, agricultural and forestry areas. The use of imaging spectroscopy sensors and data is growing to maturity with research activities focused on proximal, UAV, airborne and spaceborne hyperspectral observations. However, presently there are only a few hyperspectral satellites in operation. An alternative approach to approximate hyperspectral images acquired from space is to emulate synthetic hyperspectral data from multi-spectral satellites such as Sentinel-2 (S2). The principle of emulation is approximating the input-output relationships by means of a statistical learning model, also referred to as emulator (O’Hagan 2006, Verrelst et al., 2016). Emulation recently emerged as an appealing acceleration technique in processing tedious imaging spectroscopy applications such as synthetic scene generation (Verrelst et al., 2019) and in atmospheric correction routines. The core idea is that once the emulator is trained, it allows generating synthetic hyperspectral images consistent with an input multispectral signal, and this at a tremendous gain in processing speed. Emulating a synthetic hyperspectral image from multi-spectral data is challenging because of its one-to-many input-output spectral correspondence. Nevertheless, thanks to dimensionality reduction techniques that take advantage of the spectral redundancy, the emulator is capable of relating the output hyperspectral patterns that can be consistent with the input spectra. As such, emulators allow finding statistically the non-linear relationships between the low resolution and high spectral resolution data, and thus can learn the most common patterns in the dataset.
In this work, we trained an emulator using two coincident reflectance subsets, consisting of an S2 multispectral spaceborne image as input and a HyPlant airborne hyperspectral image as output. The images were recorded on the 26th and 27th of June 2018, respectively, and were acquired around the city of Jülich in the western part of Germany. The S2 image provides multispectral information in 13 bands in the range of 430 to 2280 nm. The image used was acquired by the MSI sensor of S2A and provided bottom-of-atmosphere (BOA) reflectance data (L2A). The influence on performance of spatial resampling to 10 or 20 m resolution and of excluding the aerosol and water vapour bands has been assessed. The HyPlant DUAL image provides contiguous spectral information from 402 to 2356 nm with a spectral resolution of 3-10 nm in the VIS/NIR and 10 nm in the SWIR spectral range. We used the BOA reflectance product of 9 HyPlant flight lines mosaicked into one image and compared it with the S2 scene.
Regarding the choice of machine learning (ML) algorithm to serve as an emulator, kernel-based ML methods have proven to perform accurately and fast when trained with few samples. However, when many samples are used for training, kernel-based ML methods become computationally costly, while neural networks (NN) keep performing fast and accurately with increasing sample size. For this reason, given a dense random sampling over the S2 image and the corresponding HyPlant data as output, evaluating multiple ML algorithms showed that superior accuracies were achieved by NNs in emulating hyperspectral data. Using the NN model, a final emulator has been developed that converts an S2 image into a hyperspectral S2-like image. As such, the texture of S2 is preserved while the hyperspectral data cube has the spectral characteristics and quality of HyPlant data. Subsequently, the S2-like synthetic hyperspectral image has been successfully validated against a reference dataset obtained by HyPlant, with an R2 of 0.85 and an NRMSE of 3.45%. We observed that the emulator is able to generate S2-like hyperspectral images with high accuracy, including spectral ranges not covered by S2. Finally, it must be remarked that emulated images do not replace hyperspectral image data recorded by spaceborne sensors. However, they can serve as synthetic test data in the preparation of future imaging spectroscopy missions such as FLEX or CHIME. Furthermore, the emulation technique opens the door to fusing high spatial resolution multispectral images with high spectral resolution hyperspectral images.
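The core of such an emulator can be sketched in a few lines (scikit-learn), with random arrays standing in for the co-located S2 and HyPlant spectra: the hyperspectral output is compressed with PCA and a neural network maps the 13 S2 bands to the PCA coefficients.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical training pairs: S2 reflectance (13 bands) and HyPlant spectra (e.g. 400 bands).
    X_s2 = np.random.rand(5000, 13)
    Y_hyp = np.random.rand(5000, 400)

    # Dimensionality reduction of the hyperspectral output exploits spectral redundancy.
    pca = PCA(n_components=20).fit(Y_hyp)
    Y_low = pca.transform(Y_hyp)

    # NN emulator: multispectral input -> PCA coefficients of the hyperspectral output.
    emulator = make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500))
    emulator.fit(X_s2, Y_low)

    # Emulate a hyperspectral spectrum for a new S2 pixel.
    s2_pixel = np.random.rand(1, 13)
    hyperspectral_pixel = pca.inverse_transform(emulator.predict(s2_pixel))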
O’Hagan, A. Bayesian analysis of computer code outputs: A tutorial. Reliab. Eng. Syst. Saf. 2006, 91, 1290–1300.
Verrelst, J.; Sabater, N.; Rivera, J.P.; Muñoz Marí, J.; Vicent, J.; Camps-Valls, G.; Moreno, J. Emulation of Leaf, Canopy and Atmosphere Radiative Transfer Models for Fast Global Sensitivity Analysis. Remote Sens. 2016, 8, 673.
Verrelst, J.; Rivera Caicedo, J.P.; Vicent, J.; Morcillo Pallarés, P.; Moreno, J. Approximating Empirical Surface Reflectance Data through Emulation: Opportunities for Synthetic Scene Generation. Remote Sens. 2019, 11, 157.
Earth's atmosphere and surface are undergoing rapid changes due to urbanization, industrialization and globalization. Environmental problems such as desertification, soil depletion, water shortages and greenhouse gas (GHG) emissions warming the atmosphere are increasingly significant and troubling consequences of human activities. UNEP forecasts that, under current policies, GHG emissions will reach 60 gigatons of CO2 per year by 2030. At COP26, António Guterres said that "we must accelerate climate action to keep alive the goal of limiting global temperature rise to 1.5 degrees" and that it is time to go "into emergency mode".
To date, a total of 33 relevant satellite missions carrying spectrometers such as SAM, SAGE, GRILL, ATMOS, HALOE, POAM, GOMOS and MAESTRO have provided GHG monitoring capabilities from space, underpinning dynamic analysis and forecasting through the solution of ill-posed inverse problems based on atmospheric GHG measurements.
Most practical science problems involving the atmospheric measurement of emitted gases formally reduce to Fredholm integral equations of the first kind.
When the Fredholm integral equation of the first kind, with which these ill-posed inverse problems are associated, is solved numerically, most problems of forecasting the dynamics of greenhouse gas emissions, as well as other problems of forecasting the dynamics of atmospheric gases, reduce to solving a system of algebraic equations.
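For reference, the forward problem and its discretization can be written as (assuming a quadrature rule with weights $w_j$):

    g(s) = \int_a^b K(s,t)\, f(t)\, \mathrm{d}t
    \qquad\Longrightarrow\qquad
    g(s_i) \approx \sum_{j=1}^{N} w_j\, K(s_i, t_j)\, f(t_j),
    \qquad \text{i.e. } \mathbf{g} = \mathbf{K}\,\mathbf{f},

where the matrix K is typically severely ill-conditioned, which is what makes the resulting algebraic system ill-posed.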
In most cases, direct calculation of the kernel function of the Fredholm integral equation is impossible due to the lack of information on the parameters of the interaction of the spectrometer with the atmospheric measurement environment.
As a consequence, the algorithm for solving the inverse ill-posed problem associated with forecasting the dynamics of greenhouse gas emission can be based on the use of machine learning and artificial intelligence methods.
Moreover, taking into account the stochastic nature of both the atmospheric parameters and the measurement errors of the spectrometers, in inverse ill-posed problems it is necessary to search not for a single solution but for a probability distribution over solutions.
Machine learning (ML) regression is a frequently used approach for the retrieval of biophysical vegetation properties from spectral data. ML regression is often preferred in this context over conventional multiple linear regression models because ML approaches are able to cope with one or more of the following challenges that impair conventional regression models:
(1) Spectral data are highly inter-correlated. This strong correlation between bands or wavelengths violates the assumption in linear regression that the predictor variables are statistically independent and impairs the interpretation of regression coefficients.
(2) The relation between spectral data and the response variable is non-linear and not well described by linear models.
(3) The relation between individual spectral bands and the response variable is rather weak and many bands are necessary to build an adequate prediction model.
In addition, some ML approaches promise to require only a comparatively small sample size for achieving robust model results. This makes ML-based approaches suitable for data sets that are asymmetric in the sense of containing fewer samples than spectral bands. In practice, the sample size for training data in remote sensing studies targeting biophysical variables is most often determined by availability and is frequently limited to n < 100. The practice of using rather small sample sizes and the promise of ML to require only a few observations for sufficient model training is countered by reports that these techniques are prone to over-fitting. So far, no systematic analysis of the effects of sample size on ML regression performance in biophysical property retrieval is available. The advent of spectral data archives such as the EcoSIS repository (https://ecosis.org/) enables such an analysis. This study hence addresses the question ‘How does the training sample size affect the model performance in machine-learning based biophysical trait retrieval?’
For a comprehensive analysis, two parameters were selected that are physically linked to the spectral signal of vegetation and are frequently addressed at the leaf and at the canopy level: leaf chlorophyll (LC, two data sets at the leaf and two at the canopy level) and leaf mass per area (LMA, seven and two data sets, respectively). LC has a very distinct influence on the spectral signal due to its pronounced absorption in the visible region and shows a strong statistical relation to a few spectral bands. LMA has a rather broad and unspecific absorption in the NIR and SWIR range and shows a weaker relation to the spectral signal in individual bands. Due to the differences in their spectral absorption features, these two parameters were expected to behave differently in regression analysis.
With these data, three different ML regression techniques were tested for effects of training sample size on their performance: Partial Least Squares regression (PLSR), Random Forest regression (RFR) and Support Vector Machine regression (SVMR). For each data set and regression technique, the target variable was repeatedly modeled with a successively growing training sample size. Trends in the model performances were identified and analyzed.
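A minimal sketch of such a sample-size experiment, with scikit-learn stand-ins for PLSR, RFR and SVMR (the data arrays, sample-size grid and hyperparameters are placeholders, not the study's actual datasets or settings):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X = np.load("spectra.npy")   # (n_samples, n_bands) reflectance spectra
y = np.load("trait.npy")     # (n_samples,) e.g. leaf chlorophyll or LMA

models = {
    "PLSR": PLSRegression(n_components=10),
    "RFR": RandomForestRegressor(n_estimators=500, random_state=0),
    "SVMR": SVR(kernel="rbf", C=10.0),
}

# Repeatedly fit each model with a growing calibration sample size and track validation R2.
for n_cal in [25, 50, 75, 100, 150, 200]:
    for name, model in models.items():
        scores = []
        for seed in range(20):  # repeated random splits to capture variability
            X_cal, X_val, y_cal, y_val = train_test_split(
                X, y, train_size=n_cal, random_state=seed)
            model.fit(X_cal, y_cal)
            scores.append(r2_score(y_val, np.ravel(model.predict(X_val))))
        print(f"n_cal={n_cal:4d}  {name}: mean R2={np.mean(scores):.2f}")
```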
The results show that the performance of ML regression techniques clearly depends on the sample size of the training data. On both leaf and canopy level, for both LC and LMA, as well as for all three regression techniques, an increase in model performance with a growing sample size was observed. This increase is, however, non-linear and tends to saturate. The saturation in the validation fits emerges for training sample sizes larger than ncal = 100 to ncal = 150. While it may be possible to build a model with an adequate fit and robustness even with a rather small training data set, the risks of weak performance, an over-fitted and thus non-transferable model, and erratic band importance metrics increase considerably.
Object detection, classification and semantic segmentation are ubiquitous and fundamental tasks in extracting, interpreting and understanding the information acquired by satellite imagery. The suitable spatial resolution of the imagery mainly depends on the application of interest, e.g. agricultural activity monitoring, land cover mapping, building detection. Applications for locating and classifying man-made objects, such as buildings, roads, aeroplanes, ships, and cars typically require Very High Resolution (VHR) imagery, with spatial resolution ranging approximately from 0.3 to 5m. However, such VHR imagery is generally proprietary and commercially available only at a high cost. This prevents its uptake by the wider community, in particular when analysis at large scale is desired. HIECTOR (HIErarchical deteCTOR) tackles the problem of efficiently scaling object detection in satellite imagery to large areas by leveraging the sparsity of such objects over the considered area-of-interest (AOI). In particular, this work proposes a hierarchical method for detection of man-made objects, using multiple satellite image sources at different spatial resolutions. The detection is carried out in a hierarchical fashion, starting at the lowest resolution and proceeding to the highest. Detections at each stage of the pyramid are used to request imagery and apply the detection at the next higher resolution, therefore reducing the amount of data required and processed. In an ideal scenario, where objects of interest cover only a very small fraction of the whole AOI, the hierarchical method would use a significantly smaller amount of VHR imagery. We investigate how the accuracy and cost efficiency of the proposed method compare to a method that uses VHR imagery only, and report on the influence that detections at each pyramidal stage have on the final result. We evaluate HIECTOR on the task of building detection at the country level, and frame it as object detection, meaning that a bounding box is estimated around each object of interest. The same approach could, however, be applied to different objects or land covers, and a different task such as semantic segmentation could replace the detection task.
For the detection of buildings, HIECTOR is demonstrated using the following data sources: a Global Mosaic [1] of Sentinel-2 imagery at 120m spatial resolution, Sentinel-2 imagery at 10m spatial resolution, Airbus SPOT imagery pan-sharpened to 1.5m resolution and Airbus Pleiades imagery pan-sharpened to 0.5m resolution. Sentinel-2 imagery and the derived mosaic are openly available, making their use very cost efficient. Given that single buildings are not discernible at 120m and 10m resolutions, we re-formulate the task differently for these levels of the pyramid. Using the Sentinel-2 mosaic at 120m resolution, we regress the fraction of buildings at the pixel level, and threshold the estimated fraction at a given value to get predictions of built-up areas. This threshold is optimised to minimise the amount of detected area and of missed detections, while maximising the true detections. Once the built-up area is detected on the 120m mosaic, Sentinel-2 imagery at 10m resolution is requested, and an object detection algorithm is applied to the imagery to refine the estimation of built-up areas. In this case, a bounding box does not describe a single building but rather a collection of buildings. The estimated bounding boxes at 10m are joined and the resulting polygon is used to further request SPOT imagery at the pan-sharpened spatial resolution of 1.5m. In the case of SPOT imagery, given the higher spatial resolution, one bounding box is estimated for each building. As a final step, predictions are improved in areas with low confidence by requesting Airbus Pleiades imagery at the pan-sharpened 0.5m resolution. Within this framework, the VHR imagery at 0.5m resolution is requested only for a small percentage of the entire AOI, greatly reducing costs.
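A conceptual sketch of this request-and-detect cascade (all helper functions and thresholds below are hypothetical placeholders, not the actual HIECTOR code or the Sentinel Hub API):

```python
# Conceptual sketch of the hierarchical request-and-detect loop.

def hierarchical_detection(aoi):
    # Stage 1: 120 m mosaic -> per-pixel built-up fraction, thresholded to candidate areas.
    mosaic = fetch_imagery(aoi, source="S2_MOSAIC_120M")
    builtup_120m = regress_builtup_fraction(mosaic) > 0.05   # threshold tuned on validation data

    # Stage 2: request 10 m Sentinel-2 only where stage 1 fired; detect groups of buildings.
    s2_polygons = pixels_to_polygons(builtup_120m)
    boxes_10m = detect_objects(fetch_imagery(s2_polygons, source="S2_10M"))

    # Stage 3: request 1.5 m SPOT inside the merged 10 m detections; detect single buildings.
    spot_polygons = merge_boxes(boxes_10m)
    buildings = detect_objects(fetch_imagery(spot_polygons, source="SPOT_1_5M"))

    # Stage 4: refine only low-confidence detections with 0.5 m Pleiades imagery.
    uncertain = [b for b in buildings if b.score < 0.5]
    refined = detect_objects(fetch_imagery(merge_boxes(uncertain), source="PLEIADES_0_5M"))
    return merge_detections(buildings, refined)
```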
The Single-Stage Rotation-Decoupled Detector (SSRDD) algorithm proposed in [2] has been adapted and used for building detection in Sentinel-2 10m images, and in Airbus SPOT and Pleiades imagery. The Sentinel Hub service [3] is used by HIECTOR to request the imagery sources on the specified polygons determined at each level of the pyramid, making it possible to request, access and process specific sub-parts of the AOI. Within this talk we will present an in-depth analysis of the experiments carried out to train, evaluate and deploy HIECTOR over a country-level AOI. In particular, an analysis of the trade-off between detection accuracy and cost savings will be presented and discussed.
References:
[1] Sentinel-2 L2A 120m Mosaic, https://collections.sentinel-hub.com/sentinel-s2-l2a-mosaic-120/
[2] Zhong B., and Ao K. Single-Stage Rotation-Decoupled Detector for Oriented Object, Remote Sens. 2020, 12(19), 3262; https://doi.org/10.3390/rs12193262
[3] Sentinel Hub, https://www.sentinel-hub.com
The last few years have seen an ever growing interest in weather predictions on sub-seasonal time scales ranging from 2 weeks to about 2 months. By forecasting aggregated weather statistics, such as weekly precipitation, it has indeed become possible to overcome the theoretical predictability limit of 2 weeks (Lorenz 1963; F. Zhang et al. 2019), bringing life to time scales which historically have been known as the “predictability desert”. The growing success at these time scales is largely due to the identification of weather and climate processes providing sub-seasonal predictability, such as the Madden-Julian Oscillation (MJO) (C. Zhang 2013) and anomaly patterns of global sea surface temperature (SST) (Woolnough 2007; Saravanan & Chang 2019), sea surface salinity (Li et al. 2016; Chen et al. 2019; Rathore et al. 2021), soil moisture (Koster et al. 2010) and snow cover (Lin and Wu 2011). Although much has been gained by these studies, a comprehensive analysis of all potential predictors and their relative relevance to forecast sub-seasonal rainfall is still missing.
At the same time, data-driven machine learning (ML) models have proved to be excellent candidates to tackle two common challenges in weather forecasting: (i) resolving the non-linear relationships inherent to the chaotic climate system and (ii) handling the steadily growing amounts of Earth observational data. Not surprisingly, a variety of studies have already displayed the potential of ML models to improve the state-of-the-art dynamical weather prediction models currently in use for sub-seasonal predictions, in particular for temperatures (Peng et al. 2020; Buchmann and DelSole 2021), precipitation (Scheuerer et al. 2020) and the MJO (Kim et al. 2021; Silini, Barreiro, and Masoller 2021). It seems therefore inevitable that the future of sub-seasonal prediction lies in the combination of both the dynamical, process-based and the statistical, data-driven approach (Cohen et al. 2019).
In the advent of this new age of combined Neural Earth System Modeling (Irrgang et al. 2021), we want to provide insight and guidance for future studies (i) to what extent large-scale teleconnections on the sub-seasonal scale can be resolved by purely data-driven models and (ii) what the relative contributions of the individual large-scale predictors are to make a skillful forecast. To this end, we build neural networks to predict sub-seasonal precipitation based on a variety of large-scale predictors derived from oceanic, atmospheric and terrestrial sources. As a second step, we apply layer-wise relevance propagation (Bach et al. 2015) to examine the relative importance of different climate modes and processes in skillful forecasts.
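A minimal numpy illustration of the layer-wise relevance propagation ε-rule for one dense layer, following the generic textbook formulation rather than the study's own implementation; applying it layer by layer from the precipitation output back to the inputs yields a relevance value per large-scale predictor:

```python
import numpy as np

def lrp_dense_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute the relevance R_out of a dense layer's outputs onto its inputs.

    a     : (n_in,)        layer input activations
    W     : (n_in, n_out)  layer weights
    b     : (n_out,)       layer biases
    R_out : (n_out,)       relevance assigned to the layer outputs
    """
    z = a @ W + b                # forward pre-activations
    z = z + eps * np.sign(z)     # epsilon stabiliser avoids division by values near zero
    s = R_out / z                # relevance share per output unit
    return a * (W @ s)           # relevance of each input: R_j = a_j * sum_k w_jk * s_k
```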
Preliminary results show that the skill of our data-driven ML approach is comparable to state-of-the-art dynamical models, suggesting that current operational models are able to correctly represent large-scale teleconnections within the climate system. The ML model achieves the highest skill over the tropical Pacific, the Maritime Continent and the Caribbean Sea (Fig. 1), in agreement with dynamical models. By investigating the relative importance of those large-scale predictors for skillful predictions, we find that the MJO and processes associated with SST anomalies, such as the El Niño-Southern Oscillation, the Pacific decadal oscillation and the Atlantic meridional mode, all play an important role for individual regions along the tropics.
Additional material
Figure 1 | Forecast skill of the ML model as represented by the Brier skill score (BSS) calculated with respect to climatology. Red color shadings show regions where the ML model performs better, while blue color shadings indicate worse skill than climatology. The BSS was calculated as an average over the period from 2015 to 2020 using weekly forecasts totalling 310 individual samples, which were set aside before the training process as a test set.
References
Bach, Sebastian, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. “On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.” PloS One 10 (7): e0130140.
Buchmann, Paul, and Timothy DelSole. 2021. “Week 3-4 Prediction of Wintertime CONUS Temperature Using Machine Learning Techniques.” Frontiers in Climate 3: 81.
Chen, B, H Qin, G Chen, and H Xue. 2019. “Ocean Salinity as a Precursor of Summer Rainfall over the East Asian Monsoon Region.” Journal of Climate 32 (17): 5659–76. https://doi.org/10.1175/JCLI-D-18-0756.1.
Cohen, Judah, Dim Coumou, Jessica Hwang, Lester Mackey, Paulo Orenstein, Sonja Totz, and Eli Tziperman. 2019. “S2S Reboot: An Argument for Greater Inclusion of Machine Learning in Subseasonal to Seasonal Forecasts.” WIREs Climate Change 10 (2): e00567. https://doi.org/10.1002/wcc.567.
Irrgang, Christopher, Niklas Boers, Maike Sonnewald, Elizabeth A. Barnes, Christopher Kadow, Joanna Staneva, and Jan Saynisch-Wagner. 2021. “Will Artificial Intelligence Supersede Earth System and Climate Models?,” January. https://arxiv.org/abs/2101.09126v1.
Kim, H., Y. G. Ham, Y. S. Joo, and S. W. Son. 2021. “Deep Learning for Bias Correction of MJO Prediction.” Nature Communications 12 (1): 3087. https://doi.org/10.1038/s41467-021-23406-3.
Koster, R. D., S. P. P. Mahanama, T. J. Yamada, Gianpaolo Balsamo, A. A. Berg, M. Boisserie, P. A. Dirmeyer, et al. 2010. “Contribution of Land Surface Initialization to Subseasonal Forecast Skill: First Results from a Multi-Model Experiment.” Geophysical Research Letters 37 (2). https://doi.org/10.1029/2009GL041677.
Li, L, R Schmitt, CC Ummenhofer, and KB Karnauskas. 2016. “North Atlantic Salinity as a Predictor of Sahel Rainfall.” Science Advances 2 (5): e1501588. https://doi.org/10.1126/sciadv.1501588.
Lin, Hai, and Zhiwei Wu. 2011. “Contribution of the Autumn Tibetan Plateau Snow Cover to Seasonal Prediction of North American Winter Temperature.” Journal of Climate 24 (11): 2801–13.
Lorenz, Edward N. 1963. “Deterministic Nonperiodic Flow.” Journal of Atmospheric Sciences 20 (2): 130–41.
Peng, Ting, Xiefei Zhi, Yan Ji, Luying Ji, and Ye Tian. 2020. “Prediction Skill of Extended Range 2-m Maximum Air Temperature Probabilistic Forecasts Using Machine Learning Post-Processing Methods.” Atmosphere 11 (8): 823.
Rathore, Saurabh, Nathaniel L. Bindoff, Caroline C. Ummenhofer, Helen E. Phillips, Ming Feng, and Mayank Mishra. 2021. “Improving Australian Rainfall Prediction Using Sea Surface Salinity.” Journal of Climate 1 (aop): 1–56. https://doi.org/10.1175/JCLI-D-20-0625.1.
Scheuerer, Michael, Matthew B. Switanek, Rochelle P. Worsnop, and Thomas M. Hamill. 2020. “Using Artificial Neural Networks for Generating Probabilistic Subseasonal Precipitation Forecasts over California.” Monthly Weather Review 148 (8): 3489–3506. https://doi.org/10.1175/MWR-D-20-0096.1.
Silini, Riccardo, Marcelo Barreiro, and Cristina Masoller. 2021. “Machine Learning Prediction of the Madden-Julian Oscillation.” Earth and Space Science Open Archive ESSOAr.
Zhang, Chidong. 2013. “Madden–Julian Oscillation: Bridging Weather and Climate.” Bulletin of the American Meteorological Society 94 (12): 1849–70.
Zhang, Fuqing, Y. Qiang Sun, Linus Magnusson, Roberto Buizza, Shian-Jiann Lin, Jan-Huey Chen, and Kerry Emanuel. 2019. “What Is the Predictability Limit of Midlatitude Weather?” Journal of the Atmospheric Sciences 76 (4): 1077–91. https://doi.org/10.1175/JAS-D-18-0269.1.
Most volcano observatories are nowadays heavily reliant on satellite data to provide time-critical hazard information. Volcanic hazards refer to any potentially dangerous volcanic process that can threaten people and infrastructure, such as lava flows and pyroclastic flows. During an explosive eruption, a major hazard to the population can be the ejection of gases and ash into the atmosphere, with the consequent creation of a volcanic plume, which can compromise aviation safety. Satellite remote sensing of volcanoes is very useful because it can provide data for large areas in a variety of modalities ranging from visible to infrared and radar. Satellite data suitable for monitoring the activity of a volcano in near-real time are those acquired by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) on board the Meteosat Second Generation (MSG) geostationary satellite. SEVIRI has high temporal resolution (one image every 15 minutes) and good spectral resolution (12 spectral bands, including visible, near-infrared and infrared channels), providing a consistent amount of data exploitable for monitoring the eruptive activity of volcanoes. For example, Middle-Infrared (MIR) channels can be used to detect and quantify thermal anomalies, whereas Thermal Infrared (TIR) bands can be adopted to observe and study volcanic clouds. Here, we propose a platform that exploits SEVIRI images to monitor volcanic activity in near real time. In particular, we implemented an algorithm that detects the presence of volcanic thermal anomalies and, if they occur, measures the radiant heat flux to quantify these anomalies, checks whether a volcanic plume appears and, consequently, uses machine learning algorithms to track the advancement of the plume and to retrieve its components (Figure 1).
SEVIRI data are downloaded automatically from the EUMETSAT DataStore using dedicated Python APIs; users can use the graphical interface of the platform to choose the time period of the images to download and to define the coordinates of the region of interest. Once the SEVIRI images are downloaded, they are processed to detect the possible presence of volcanic thermal anomalies and, if so, the algorithm for the quantification of these anomalies and for the detection of a volcanic plume is started. Volcanic thermal anomalies are quantified using a parameter called Fire Radiative Power (FRP) and, for each fire pixel detected, the FRP is calculated using Wooster’s MIR radiance approach. The detection of a volcanic plume is performed by exploiting the TIR bands of the SEVIRI images: the brightness temperature difference (BTD) between the bands at 10.8 µm and 12.0 µm highlights the presence of thin volcanic ash, whereas the difference between the bands at 10.8 µm and 8.7 µm emphasizes the presence of SO2. Starting from this consideration, a machine learning (ML) algorithm was developed to detect volcanic plumes and to retrieve their content of ash and SO2. This algorithm exploits manually labeled image regions to train a classifier that is able to recognize the plume and the plume patches corresponding to ash, SO2 and a mixture of ash and SO2. The learned classifier is able to generalize this approach and to classify automatically new images and all newly emitted volcanic plumes. This near-real-time approach for monitoring volcanic eruptions is applied daily to assess the status of Mt. Etna (Italy), but it can also be applied successfully to any other volcano covered by SEVIRI, simply by setting the corresponding coordinates in the graphical interface of the platform.
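A minimal sketch of the two BTD tests described above, assuming the SEVIRI TIR channels have already been converted to brightness temperatures in kelvin (file names and thresholds are illustrative):

```python
import numpy as np

# Brightness temperature arrays (K) for the SEVIRI TIR channels, on the same grid.
bt_087 = np.load("bt_8.7um.npy")
bt_108 = np.load("bt_10.8um.npy")
bt_120 = np.load("bt_12.0um.npy")

# A negative 10.8-12.0 um BTD highlights thin volcanic ash.
btd_ash = bt_108 - bt_120
ash_mask = btd_ash < 0.0

# The 10.8-8.7 um difference emphasises SO2 absorption around 8.7 um.
btd_so2 = bt_108 - bt_087
so2_mask = btd_so2 > 2.0          # illustrative threshold, to be tuned per scene

# Such masks, together with the manually labelled regions, form the feature/label
# pairs used to train the plume classifier.
```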
Forests have a wide range of social-ecological functions, such as storing carbon, preventing natural hazards, and providing food and shelter. Monitoring the status of forests not only deepens our understanding of climate change and ecosystems, but also helps guide the formulation of ecological protection policies. Remote sensing based analyses of forests are typically limited to forest cover, and most of our knowledge of forests mainly comes from forest inventories, where tree density, canopy cover, species, height, carbon stock and other indicators are recorded. The inventories are conventionally established by manually collecting in-situ measurements, which can be time-consuming, labor-intensive and difficult to scale up. Here we present an automatic and scalable tree inventory pipeline based on publicly available aerial images from Denmark and deep neural networks, enabling individual-tree-level canopy segmentation, counting, and height estimation within different kinds of forests. The canopy segmentation and counting tasks are solved in a multitasking manner, where a convolutional neural network is trained to jointly predict a segmentation mask and a density map which sums up to the total tree count for a given image. Another network trained with LiDAR-derived height maps estimates per-pixel canopy height from aerial photos, which, when combined subsequently with the canopy segmentation masks, allows for per-tree height mapping. The multitasking network achieves a segmentation dice coefficient of 0.755 on the testing set with 3904 manually annotated trees and a predicted total count of 3869 (r2 = 0.84). Compared with independent LiDAR reference heights, the height estimation model achieves a per-pixel mean absolute error (MAE) of 2.6 m on the testing set and a per-tree MAE of 3.0 m when assigning tree height with the maximum height estimate within each predicted canopy. The models perform robustly over diverse landscapes including dense forests (coniferous and broad-leaved), open fields, and urban areas. We further verify the scalability of the framework by detecting 312 million individual trees across Denmark.
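A minimal PyTorch sketch of one way to formulate such a joint segmentation/density-map objective (the loss weighting and the count term are illustrative choices, not the authors' exact training setup):

```python
import torch
import torch.nn.functional as F

def multitask_loss(seg_logits, density_pred, seg_target, density_target, w_count=0.1):
    """Joint loss: canopy segmentation plus a density map whose integral is the tree count.

    seg_logits     : (B, 1, H, W) raw segmentation scores
    density_pred   : (B, 1, H, W) predicted per-pixel tree density
    seg_target     : (B, 1, H, W) binary canopy mask
    density_target : (B, 1, H, W) Gaussian-smoothed tree-centre density map
    """
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    density_loss = F.mse_loss(density_pred, density_target)
    # The per-image tree count is the sum of the density map.
    count_loss = F.l1_loss(density_pred.sum(dim=(1, 2, 3)),
                           density_target.sum(dim=(1, 2, 3)))
    return seg_loss + density_loss + w_count * count_loss
```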
Complex numerical weather prediction (NWP) models are deployed operationally to predict the future state of the atmosphere. While these models numerically solve a system of partial differential equations based on physical laws, they are computationally very expensive. Recently, the potential of deep neural networks to generate bespoke weather forecasts has been explored in a couple of scientific studies, inspired by the success of video frame prediction models in computer vision. In our study, we explore deep learning networks following the video prediction approach for weather forecasting and provide two case studies as proof of concept.
In the first study, we focus on forecasting the diurnal cycle of 2 m temperature. A ConvLSTM and an advanced generative network, the Stochastic Adversarial Video Prediction (SAVP) model, are applied to forecast the 2 m temperature for the next 12 hours over Europe. Results show that SAVP is significantly superior to the ConvLSTM model in terms of several evaluation metrics. Our study also investigates the sensitivity to the input data in terms of selected predictors, domain sizes and amounts of training samples. The results demonstrate that the candidate predictors, i.e. the total cloud cover and the 850 hPa temperature, enhance the forecast quality, and that the model can also benefit from a larger spatial domain. By contrast, the effect of varying the training dataset between eight and 11 years is rather small. Furthermore, we reveal a small trade-off between the MSE and the spatial variability of the forecasts when tuning the weight of the L1-loss component in the SAVP model.
In the second study, we explore a bespoke GAN-based architecture for precipitation nowcasting. The prediction of precipitation patterns at high spatio-temporal resolution up to two hours ahead, also known as precipitation nowcasting, is of great relevance in weather-dependent decision-making and early warning systems. Here, we develop a novel method, named Convolutional Long Short-Term Memory Generative Adversarial Network (CLGAN), to improve the nowcasting skill for heavy rain events with deep neural networks. The model constitutes a GAN architecture whose generator is built upon a u-shaped encoder-decoder network (U-Net) equipped with recurrent LSTM cells to capture spatio-temporal features. A comprehensive comparison between CLGAN and baseline models, namely the optical flow model DenseRotation and the advanced video prediction model PredRNN-v2, is performed. We show that CLGAN outperforms the baselines in terms of point-by-point metrics as well as scores for dichotomous events and object-based diagnostics. The results encourage future work based on the proposed CLGAN architecture to further improve the accuracy of precipitation nowcasting systems.
In the AI-Cube project datacube fusion and AI-based analytics will be integrated, demonstrated in several real-life application scenarios, and evaluated on a federation of DIASs and further high-volume EO / geo data offerings.
Starting point is the observation that both Machine Learning (ML) and datacube query languages share the same basis, Tensor Algebra or – more generally – Linear Algebra. This seems to provide a good basis for combining both methods in a way that datacubes can be leveraged by ML better than scene-based methods. The expected benefits include simplification of ML code, enhanced scalability, and novel ways of evaluating spatio-temporal data.
AI-Cube approaches this from both sides: adjusting ML to datacubes and enhancing datacubes with specific operational support for ML model training and application. As to the first part, the project will develop multi-cross-modal AI methods that:
• effectively learn the common representations for the heterogeneous EO data by preserving the semantic discrimination and modality invariance simultaneously in an end-to-end manner.
• consist of intermodality similarity-preserving learning and semantic label-preserving learning modules based on different types of loss functions simultaneously.
• include an inter-modal invariance triplet loss and inter-modal pairwise loss functions in the framework of the cross-modal retrieval problems.
The Big Data aspect is underlined by tapping into the BigEarth.Net collection of 590,000 labelled Sentinel-1 / Sentinel-2 patch pairs for versatile model training. These models will then be used on the 30+ PB of Sentinel datacubes offered by rasdaman on Mundi, Creodias, and further members of the EarthServer datacube federation.
From the database perspective, novel operators will be added to the query language to embed AI into datacube query languages like SQL/MDA and OGC WCPS. Also the models themselves will be stored and handled as datacubes.
The goal is to support scenarios like the following: a user selects a topic (such as specific crop types, specific forest types, or burnt forest areas); the system determines, through a combined analysis of various large-scale data sources, a list of regions matching the selected criterion; the user gets this visualized directly or continues analysing, possibly combining it with further data sources. Real-life application scenarios will be exercised in the DIASs of the EarthServer federation, performing both single-datacube analytics and distributed datacube fusion.
The consortium consists of Jacobs University as coordinator, TU Berlin, and rasdaman GmbH. AI-Cube commenced in Fall 2021, and first results will be presented at the symposium.
Acknowledgement
This work is supported by the German Ministry of Economics and Energy.
Forests play a major role in the global carbon cycle and the mitigation of climate change effects. Gross Primary Production (GPP), the gross uptake of CO₂, is a key variable that needs to be accurately monitored to understand terrestrial carbon dynamics. Even though GPP can be derived from Eddy Covariance (EC) measurements at ecosystem scale (e.g., the FLUXNET network), the corresponding monitoring sites are sparse and unevenly distributed throughout the world. Data-driven techniques are among the most used methods to estimate GPP and its spatio-temporal fluctuations for locations where local measurements are unavailable. These methods entail developing an empirical model based on ground-truth GPP measurements and have been a primary tool for upscaling the GPP derived from EC measurements, using traditional Machine Learning methods with satellite imagery and meteorological data as inputs. Current data-driven carbon flux models utilize traditional models like Linear Regression, Random Forests, Support Vector Machines or Gaussian Processes, while Deep Learning approaches that leverage the temporal patterns of predictor variables are underutilised. Short- and long-term dependencies on previous ecosystem states are complex and should be addressed when modeling GPP. These temporally lagged dependencies of vegetation states, hereinafter memory effects, can be considered in traditional Machine Learning approaches, but must be encoded in hand-designed variables that lose their sequential structure. Here we show that the estimation of GPP in forests can be improved by considering memory effects using Sentinel-2 imagery and Long Short Term Memory (LSTM) architectures. We found that the accuracy of the model increased by considering the long-range correlations in time series of Sentinel-2 satellite imagery, outperforming single-state models. Furthermore, the additional information contributed by Sentinel-2, such as its high spatial resolution (10-60 m) and the vegetation reflectance in the Red Edge bands (703-783 nm), boosted the accuracy of the model. Our results demonstrate that long-term correlations are a key factor for GPP estimation in forests. Moreover, the Red Edge reflectance enhances the sensitivity of the model to photosynthetic activity, and the high spatial resolution of the imagery makes it possible to account for local spatial patterns. These results imply that novel data-driven models should account for long-term correlations in remote sensing data. Additionally, the information provided by Sentinel-2 imagery was shown to increase the accuracy of the model, and further investigation should be carried out. For example, local spatial patterns (e.g., tree mortality or deforestation in certain spots in the image) can be further exploited by Deep Learning methods such as Convolutional Neural Networks (CNN).
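A minimal PyTorch sketch of an LSTM regressor mapping a Sentinel-2 reflectance time series to GPP, illustrating how memory effects are kept in their sequential form (layer sizes, sequence length and band count are illustrative, not the study's configuration):

```python
import torch
import torch.nn as nn

class GPPLSTMRegressor(nn.Module):
    """Maps a sequence of Sentinel-2 reflectances (plus optional drivers) to a GPP estimate."""

    def __init__(self, n_features=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time, n_features), e.g. the last N cloud-free S2 acquisitions per site.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)   # GPP estimate for the latest time step

model = GPPLSTMRegressor()
x = torch.randn(8, 20, 10)        # 8 sites, 20 acquisitions, 10 bands/indices (dummy data)
gpp_hat = model(x)                # shape (8,)
```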
Relevance extraction is an important and essential step in various image processing applications such as image classification, active learning, sample labeling, and content-based image retrieval (CBIR). The core of CBIR, querying image contents, includes two steps: the first is feature extraction, which creates a set of features for describing and characterizing images, and the second is relevance retrieval, which looks for and retrieves images similar to the query image. It is worth noting that relevance extraction has a significant impact on image retrieval performance.
Support vector data description (SVDD) is a well-known, traditional approach for one-class classification or anomaly detection. The main idea of SVDD is to map samples of the class of interest into a hypersphere so that samples of the class of interest fall inside this hypersphere and samples of other classes fall outside of it. Integrating state-of-the-art deep learning (DL) algorithms with conventional modeling is essential for solving complex science and engineering problems. In the last decade, DL has received a lot of attention in various applications. Using a deep neural network (DNN) provides high-level feature extraction. LeNet, a well-known DNN in computer vision, was used in this study to map samples from the input space into the latent feature space. The objective of the DNN is to minimize the Euclidean distance between the center of the hypersphere and the network output for the given training samples.
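A minimal PyTorch sketch of a deep SVDD-style objective of this kind: the network is trained on class-of-interest samples only, pulling their embeddings towards a fixed hypersphere centre (the LeNet-like encoder, the centre initialisation and the tensors `train_images`/`test_images` are illustrative placeholders):

```python
import torch
import torch.nn as nn

# Assumed: train_images (N, 13, 64, 64) tensor of class-of-interest samples,
#          test_images a tensor of mixed samples; both are placeholders.
encoder = nn.Sequential(                     # small LeNet-like feature extractor (illustrative)
    nn.Conv2d(13, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.LazyLinear(64),
)

# The centre c is commonly fixed to the mean initial embedding of the training samples.
with torch.no_grad():
    c = encoder(train_images).mean(dim=0)

optim = torch.optim.Adam(encoder.parameters(), lr=1e-4)
for epoch in range(50):
    z = encoder(train_images)                    # only class-of-interest samples are seen
    loss = ((z - c) ** 2).sum(dim=1).mean()      # mean squared distance to the centre
    optim.zero_grad(); loss.backward(); optim.step()

# At test time the score is the distance to c: small = relevant, large = ambiguous.
score = ((encoder(test_images) - c) ** 2).sum(dim=1)
```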
In order to compare the method with the state of the art, we employed the benchmark EuroSAT dataset captured by the Sentinel-2 satellite. The dataset includes 27000 samples of multispectral Sentinel-2 images in 10 different classes. Therefore, there are 10 setups, in each of which a different class serves as the class of interest. For the training stage of the DNN, we use only one class as the class of interest; in other words, the DNN does not see samples of the other classes. For testing the network, all samples of the dataset are used, covering both the class of interest and the other classes. The trained DNN predicts a score for each sample of the test set, which measures the distance of the network output from the center of the hypersphere. A lower distance indicates samples relevant to the class of interest, while the highest distances correspond to the most ambiguous samples of the dataset.
Forest managers are increasingly interested in monitoring forest species in the context of conservation and land use planning. Field monitoring of dense tropical forests is an arduous task, so remote sensing of tree species in these regions offers a great advantage. Hyperspectral imaging (HSI) offers a rich source of information, comprising reflectance measurements in hundreds of contiguous bands, making it valuable for image classification. Many pixel-based algorithms have been used in image classification, such as support vector machines (Melgani and Bruzzone, 2004), neural networks (Ratle et al., 2010) and active learning (Li et al., 2011), to name a few. However, these approaches are strongly dependent on the dimensionality of the data and require many more labelled samples than are typically available from field surveys. The latter are usually challenging to obtain as they are based on data collected manually on the ground.
To circumvent the problem of having few labels, in this study we show how a semi-supervised spectral graph learning (SGL) algorithm (developed by Kotzagiannidis and Schönlieb in 2021 on standard HSI datasets), in conjunction with superpixel clustering, can be used for forest species classification. This approach is based on three main steps: 1) the SLIC segmentation algorithm creates superpixels, considering both the size and resolution of the HSI image; 2) using label propagation on nearest neighbouring superpixels, an initial smooth graph is learnt based on the features extracted from the image; and 3) the learnt graph is updated using penalty functions for samples not belonging to the class, followed by label propagation and the final class assignment. We used this new approach to classify tropical forest species from airborne hyperspectral imagery collected by NASA’s AVIRIS sensor in the Shivamogga forested region of southern India. In the surveyed area we labelled tree crowns of 31 tree species, of which three species - Terminalia tomentosa, Terminalia bellirica and Anogeissus latifolia - were labelled more than ten times. It should be noted that only 5% of the data under consideration had labels; still, based on the Kappa coefficients, the SGL method improved performance by 2% compared to linear graph learning (Sellars et al., 2020) and performed substantially better than the Support Vector Machine algorithm (11%) and Local and Global Consistency (9%).
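A minimal sketch of the superpixel and label-propagation building blocks, using SLIC from scikit-image (>= 0.19) and LabelSpreading from scikit-learn as generic stand-ins; the actual SGL graph updates of Kotzagiannidis and Schönlieb are not reproduced here, and the file names and parameters are illustrative:

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.semi_supervised import LabelSpreading

hsi = np.load("aviris_cube.npy")         # (H, W, B) hyperspectral cube
labels_px = np.load("crown_labels.npy")  # (H, W) integer species id, -1 where unlabelled

# 1) Superpixel segmentation on the hyperspectral cube.
segments = slic(hsi, n_segments=5000, compactness=0.1, channel_axis=-1)

# 2) One mean-spectrum feature vector and one (possibly missing) label per superpixel.
ids = np.unique(segments)
feats = np.array([hsi[segments == i].mean(axis=0) for i in ids])
labs = np.array([np.max(labels_px[segments == i]) for i in ids])   # -1 = unlabelled

# 3) Graph-based label propagation over the superpixel features.
model = LabelSpreading(kernel="knn", n_neighbors=10)
model.fit(feats, labs)
species_per_superpixel = model.transduction_
```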
The main reason for the better performance of SGL over the other approaches is the incorporation of multiple features into the updatable graph. This approach refines the graph to the extent that it can capture the complex dependencies in the HSI data and ultimately provide improved classification performance. With the method now tested in complex mixed tropical forests using AVIRIS hyperspectral images, this state-of-the-art algorithm looks promising for application to forests in other regions of the world.
References:
Kotzagiannidis MS, Schonlieb CB. Semi-Supervised Superpixel-Based Multi-Feature Graph Learning for Hyperspectral Image Data. IEEE Trans Geosci Remote Sens 2021. https://doi.org/10.1109/TGRS.2021.3112298.
Li J, Bioucas-Dias JM, Plaza A. Hyperspectral image segmentation using a new Bayesian approach with active learning. IEEE Trans Geosci Remote Sens 2011;49:3947–60.
Melgani F, Bruzzone L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans Geosci Remote Sens 2004;42:1778–90.
Ratle F, Camps-Valls G, Weston J. Semisupervised neural networks for efficient hyperspectral image classification. IEEE Trans Geosci Remote Sens 2010;48:2271–82.
Sellars P, Aviles-Rivero AI, Schonlieb CB. Superpixel Contracted Graph-Based Learning for Hyperspectral Image Classification. IEEE Trans Geosci Remote Sens 2020;58:4180–93. https://doi.org/10.1109/TGRS.2019.2961599.
In recent years, Artificial Intelligence (AI), and in particular Machine Learning (ML) algorithms, have proven to be a valuable instrument for Earth Observation (EO) applications designed to retrieve information from Remote Sensing (RS) data. ML-based techniques have made such notable advances in Earth Observation applications that the acronym AI4EO (Artificial Intelligence for Earth Observation) has caught on in recent studies, publications and initiatives. The vast amount of available data has led to a change from traditional geospatial data analysis approaches. Indeed, ML techniques are often used to transform data into valuable information representing real-world phenomena. Nevertheless, the lack or shortage of labelled data and ground truth is one of the most critical obstacles to applying supervised ML algorithms. Indeed, the feasibility of labelled data generation varies depending on the EO application type. Specifically, for object detection and land cover applications, data labelling can be performed directly by the EO data users through manual or automatic mapping, while labelling geophysical parameters is challenging and in-situ measurements, in most cases, are limited and hard to retrieve.
Moreover, the risk that occurs when data-driven approaches such as ML models are adopted is that it becomes difficult to understand the intrinsic relations between the input variables and the physical meaning behind the mapping criteria taking place inside the Artificial Neural Networks (ANN). To avoid such a “black-box” approach, the proposed work offers the chance to synergically adopt electromagnetic data modelling and ML models design and development.
In this regard, during the last 30-40 years, scientists and researchers have proposed and developed several electromagnetic models based on the radiative transfer theory, suitable for large dataset generation for AI applications. In particular, electromagnetic models allow a dataset collection, simulating radar acquisitions (for different sensor configurations, e.g., signal frequency, polarization, and incidence angle), which would be more laborious and time-consuming to obtain with real data (i.e., satellite measurements).
Particularly, the Tor Vergata model, developed by Ferrazzoli et al. [1], has been employed for simulating the radar backscatter coefficients for different signal frequencies and polarizations. It is based on the radiative transfer theory applied to discrete dielectric scatterers of simple shapes: cylinders (able to model trunks, branches and stalks) and disks (to model leaves). It applies the “Matrix doubling” algorithm [2], which models scattering interactions (including attenuation and propagation mechanisms) of any order between the soil and the vegetation cover.
Having been validated against several experimental datasets, the Tor Vergata model has, in this work, provided the possibility of simulating a vast amount of reference data with different values of vegetation- and soil-related variables (crop biomass, plant structure and soil moisture/roughness) and sensor configuration variables such as frequency, polarization and incidence angle. The result of these simulations is an extensive dataset (comprising the various soil-vegetation-sensor combinations) which has been used to train different ML models. Indeed, the scope of this work is to perform a direct analysis of the information content of the radar measurements through an extended saliency analysis of the topological links composing the artificial neural networks, in order to extract the most significant input features (i.e., the backscatter simulations at different frequencies) for soil moisture retrieval. In addition, a quality assessment for diverse ML model architectures and hyper-parameter selections is provided to evaluate model performance and the dataset generation procedure.
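A minimal sketch of a generic gradient-based input saliency analysis for a trained network (this is a simple stand-in for illustration, not the saliency analysis of topological links used in this work; `model`, `X` and `feature_names` are assumed placeholders):

```python
import torch

# model: trained network mapping backscatter features -> soil moisture (placeholder)
# X: (n_samples, n_features) tensor of simulated backscatter coefficients (placeholder)
X = X.detach().clone().requires_grad_(True)
model(X).sum().backward()

# The mean absolute gradient per input feature approximates its influence on the output.
saliency = X.grad.abs().mean(dim=0)
for name, s in zip(feature_names, saliency):
    print(f"{name}: {s:.4f}")
```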
Eventually, it will also be shown how the information obtained from the feature importance extraction procedure can be used with actual satellite measurements, by assessing the sensitivity of the different wavelengths of the radar signal for each plant height. At the same time, this work intends to demonstrate that ML models can reproduce the expected physical relations in the different study cases by avoiding a “black-box” strategy and, on the contrary, adopting a physics-based approach.
References
[1] Ferrazzoli, P., Guerriero, L., & Solimini, D. (1991). Numerical model of microwave backscattering and emission from terrain covered with vegetation. Appl. Comput. Electromagn. Soc. J, 6, 175-191.
[2] Bracaglia, M., Ferrazzoli, P., & Guerriero, L. (1995). A fully polarimetric multiple scattering model for crops. Remote Sensing of Environment, 54(3), 170-179.
Climate change amplifies extreme weather events. Their frequency and intensity are increasing, and the impact location is becoming more and more uncertain. Anticipation is key, and for this accurate forecasting models are urgently needed. Many downstream applications can benefit from them, from vegetation and forest management and assessment to crop yield prediction and biodiversity monitoring. Recently, Earth surface forecasting was formulated as a video prediction task for which deep learning models show excellent performance [Requena-Mesa, 2021]. Here the goal is to forecast Earth surface reflectance over a given time horizon. Predicting surface reflectance helps in detecting and anticipating anomalies and extremes. The approaches ingest not only the past reflectances but also topography and weather variables at coarser (mesoscale) resolutions.
We are here interested in understanding rather than merely fitting forecasting models, and thus analyze standard DL architectures with eXplainable AI (XAI) methods [Tuia, 2021; Camps-Valls, 2021]. Our purpose is twofold: 1) to evaluate and improve the performance of existing approaches, analyzing both correctly and incorrectly predicted samples, and 2) to explain and illustrate the output of these models in a more intelligible way for climate and Earth science researchers. In particular, we will study standard pre-trained video prediction models in EarthNet2021 (e.g. Channel-U-Net, Autoregressive Conditional -Arcon-) [Requena-Mesa, 2021] with integrated gradients, which have already been applied to drought detection [Fernandez-Torres, 2021], or Shapley values [Castro, 2009], among other techniques. This will allow us to derive spatially explicit and temporally resolved maps of salient regions impacting the prediction at Sentinel-2 spatial resolution, as well as a ranked order of input channels and weather variables.
Evaluating and visualizing the saliency maps is, however, an elusive and subjective task. Besides model visualization, we will study the impacts on vegetation by looking at vegetation indices, which describe the ecosystem state and evolution. We will evaluate both the standard Normalized Difference Vegetation Index (NDVI) time series and the kernel NDVI (kNDVI), which correlates highly with vegetation photosynthetic activity and consistently improves accuracy in monitoring key parameters such as leaf area index, gross primary productivity, and sun-induced chlorophyll fluorescence [Camps-Valls, 2021b]. The XAI methods could serve to explain a large portion of the detected impacts in NDVI, and also to provide sharper maps and improved correlations with the kNDVI index, suggesting that it is a more realistic parameter for monitoring changes, impacts and anomalies in vegetation functioning.
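A minimal numpy sketch of NDVI and kNDVI from Sentinel-2 red (B4) and NIR (B8) reflectance, using the simplified kNDVI form tanh(NDVI²) reported in Camps-Valls et al. (2021b); file names are illustrative:

```python
import numpy as np

red = np.load("S2_B04.npy").astype(float)   # red reflectance
nir = np.load("S2_B08.npy").astype(float)   # near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)

# kNDVI with the RBF length scale sigma = 0.5 * (nir + red), which simplifies to tanh(NDVI^2).
kndvi = np.tanh(ndvi ** 2)
```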
References:
[Camps-Valls, 2021] Gustau Camps-Valls, Devis Tuia, Xiao Xiang Zhu, Markus Reichstein (Editors). Deep learning for the Earth Sciences: A comprehensive approach to remote sensing, climate science and geosciences, Wiley & Sons 2021
[Camps-Valls, 2021b] Camps-Valls, Gustau and Campos-Taberner, Manuel and Moreno-Martínez, Álvaro and Walther, Sophia and Duveiller, Gregory and Cescatti, Alessandro and Mahecha, Miguel D. and Muñoz-Marí, Jordi and García-Haro, Francisco Javier and Guanter, Luis and Jung, Martin and Gamon, John A. and Reichstein, Markus and Running, Steven W. A unified vegetation index for quantifying the terrestrial biosphere. Science Advances. American Association for the Advancement of Science (AAAS), Pubs. 7 (9) 2021
[Castro, 2009] Castro, J., Gómez, D., & Tejada, J. (2009). Polynomial calculation of the Shapley value based on sampling. Computers & Operations Research, 36(5), 1726-1730.
[Fernandez-Torres, 2021] Miguel-Ángel Fernández-Torres and J. Emmanuel Johnson and María Piles and Gustau Camps-Valls. Spatio-Temporal Gaussianization Flows for Extreme Event Detection. EGU General Assembly, Geophysical Research Abstracts, Online, 19-30 April 2021 Vol. 23 2021
[Requena-Mesa, 2021] Requena-Mesa, C., Benson, V., Reichstein, M., Runge, J., & Denzler, J. (2021). EarthNet2021: A large-scale dataset and challenge for Earth surface forecasting as a guided video prediction task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1132-1142).
[Tuia, 2021] Tuia, D. and Roscher, R. and Wegner, J.D. and Jacobs, N. and Zhu, X.X. and Camps-Valls, G. Towards a Collective Agenda on AI for Earth Science Data Analysis, IEEE Geoscience and Remote Sensing Magazine 2021
In a rapidly warming Arctic, permafrost is increasingly affected by rising temperatures and precipitation. It currently underlies around 14 Mkm² of the Northern Hemisphere land mass, and permafrost soils store about twice as much carbon as the atmosphere. Thawing permafrost soils are therefore likely to become a significant source of carbon emissions under warming climate conditions. Gradual thaw of permafrost is well understood and included in Earth System Models. However, rapid Permafrost Region Disturbances (PRD) such as wildfires, retrogressive thaw slumps or rapid lake dynamics are widespread across the Arctic permafrost region. Due to a combination of scarce data and rapid dynamics, with process durations from hours (e.g. wildfire, lake drainage) to years (lake expansion), there is still a massive lack of knowledge about their distribution in space and time. In the rapidly warming and wetting climate they are potentially accelerating in abundance and velocity, with significant implications for local and global biogeochemical cycles as well as human livelihoods in the northern high latitudes. Despite their significance, these disturbances are still not thoroughly quantified in space and time and are thus not accounted for in Earth System Models.
Historically, remote sensing and data analysis of Arctic permafrost landscape dynamics were severely limited by data availability. The explosively expanding availability of remote sensing data over the past decade, fuelled by new satellite constellations and open data policies, has opened up new opportunities for high-resolution spatio-temporal analysis of PRD for the research community. This data abundance, in combination with new processing techniques (cloud computing, machine learning, deep learning, unprecedentedly fast data processing), has led to the emergence and publication of new, publicly and freely available datasets. Such datasets include model-based panarctic permafrost datasets (e.g. ESA Permafrost CCI Ground Temperature, Active Layer Thickness), machine- and deep-learning based remote sensing datasets (e.g. ESA GlobPermafrost Lake Changes, Retrogressive Thaw Slumps, ArcticDEM), and synthesis data from different sources (e.g. the Boreal-Arctic Wetland and Lake Database BAWLD).
Combining these rich datasets in a data science approach and leveraging machine-learning techniques has the potential to create synergies and to create new knowledge on the spatio-temporal patterns, impacts, and key drivers of PRD. Within the framework of the ESA CCI+ Permafrost and NSF Permafrost Discovery Gateway Projects, we apply a synthesis of publicly available permafrost-related datasets of permafrost ground conditions (ALT, GT), climate reanalysis data (ERA 5), and readily available or experimental remote sensing-based datasets of permafrost region disturbances.
We will (1) analyze spatio-temporal patterns, correlations, and interconnections between different parameters, and (2) retrieve the importance of potential input factors (climate, stratigraphy, permafrost) for triggering RTS using machine-learning methods (e.g. Random Forest feature importance), also experimenting with more advanced deep learning methods such as LSTM to retrieve temporal interconnections and dependencies. First analyses of the spatial patterns of lake dynamics on continental scales (Nitze et al., 2018, > 600k individual lakes) reveal enhanced lake dynamics in warm permafrost close to 0 °C. Furthermore, we found enhanced ALT variability in burned sites.
By analyzing and inferring key influencing factors, we may be able to predict or model the occurrence and dynamics of permafrost region disturbances under different warming scenarios. As PRDs are still not sufficiently accounted for in global climate models, this and follow-up analyses could help fill a significant knowledge gap in permafrost and climate research.
Rapid identification and quantification of methane emissions from point sources such as leaking oil and gas facilities can enhance our ability to reduce emissions and mitigate greenhouse warming. Hyper- and multispectral satellites like WorldView-3 (WV-3) and PRISMA offer very high spatial resolution retrievals of atmospheric methane concentrations from their short-wave infrared (SWIR) bands. However, there have been few efforts to automate methane plume detection from these satellite observations using machine learning approaches. Such approaches not only allow more rapid detection of methane leaks but also have the potential to make plume detection more robust.
In this work, we trained a deep U-Net neural network to identify methane plumes from WV-3 and PRISMA radiance data. A deep residual neural network (ResNet) model was then trained to quantify the methane concentration and emission rate of the plume. The training data for the neural networks were obtained using the Large Eddy Simulation extension of the Weather Research and Forecasting model (WRF-LES). The WRF-LES simulations included an array of wind speeds, emission rates, and atmospheric conditions. The methane plumes obtained from these simulations were then embedded into a variety of WV-3 scenes, to compose the training dataset for the neural networks. The training data labels for the U-Net model were composed of binary mask images where plume concentrations above a certain threshold were differentiated from those below. The training data for the ResNet model consisted of a continuous scale of methane concentrations but were otherwise identical to that of the U-Net model. When evaluating the U-Net model on the test dataset, we found it to be significantly more accurate than the ‘shallow’ machine learning data clustering algorithm, DBSCAN. Furthermore, both trained neural networks provide predictions of satellite images almost instantaneously, whereas the DBSCAN method required a significant amount of human attention. Thus, our neural network models provide a considerable step forward in methane plume detection in terms of both accuracy and speed.
In this presentation, we will give an overview of the process of training our deep neural network models and justify the choices made regarding the architectures of the models. This will be followed by a demonstration of the effectiveness of the models in real-world images. Finally, we will discuss the potential future implementations of our approach. This work has been done by researchers from the National Centre for Earth Observation (NCEO) based at the University of Leicester, University of Leeds, and University of Edinburgh as part of a project funded by the UK Natural Environment Research Council (NERC).
Critical Components of Strong Supervised Baselines for Building Damage Assessment in Satellite Imagery and their Limitations
Deep learning is a powerful approach to solving semantic segmentation in the domains of computer vision [1] and medical image analysis [2]. Variations of encoder-decoder networks, such as the U-Net, have consistently shown strong, repeatable results when trained in a supervised fashion on appropriately labelled training data. These encoder-decoder architectures and training approaches are now increasingly explored and exploited for semantic segmentation tasks in satellite image analysis. Several challenges within this field, including the xView2 Challenge [3], have been won with such approaches. However, from reading the summaries, reports, and code of high performing solutions it is frequently not entirely clear which aspects of the training, network architectures and pre- and post-processing steps are critical to obtain strong performance. This opacity arises mainly because top solutions can be somewhat over-engineered and computationally expensive in the pursuit of the small gains needed to win challenge competitions or become SOTA on standard benchmarks. This makes it difficult for practitioners to decide what to include in their systems when they solve their specific problem but want to mimic high-performing systems subject to their own computational restrictions at training and test time.
Thus in this paper we dissect the winning solution of the xView2 challenge, a late fusion U-Net [4] network architecture, and identify the components that are most important during training for performing building localization and building damage classification after natural disasters while still maintaining strong performance. We focus on the xView2 challenge as it has satellite images of pre- and post-disaster sites from a large and diverse set of global locations and disaster types, together with manually verified labels - qualities not abundant in publicly available remote sensing datasets. Our results show that many of the bells and whistles of the winning system, such as the pre- and post-processing applied, ensembling of models with large back-bone networks and extensive data augmentations, are not necessary to obtain 90-95% of the performance of the winning method. A summary of the conclusions from our experiments is:
1) The choice of loss function is critical, with a carefully weighted combination of the focal and dice losses being important for stable training (a sketch of such a combined loss follows this list).
2) A U-Net architecture with a ResNet-34 backbone is sufficient for good performance.
3) Late fusion of features from the pre- and post-disaster images via an appropriately pre-trained U-Net is important.
4) A per-class weighted loss is very helpful, but optimizing the weights beyond inverse relative frequency does not yield much improvement.
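A minimal PyTorch sketch of the weighted focal + dice combination referred to in point 1 (the weights, focal parameters and class weighting are illustrative, not the winning solution's exact values):

```python
import torch
import torch.nn.functional as F

def focal_dice_loss(logits, target, alpha=1.0, beta=1.0, gamma=2.0,
                    class_weights=None, eps=1e-6):
    """Combined per-pixel focal loss and soft dice loss for multi-class segmentation.

    logits : (B, C, H, W) raw scores; target : (B, H, W) integer class labels.
    """
    # Focal term: cross entropy down-weighted for easy pixels (optionally per-class weighted).
    ce = F.cross_entropy(logits, target, weight=class_weights, reduction="none")
    pt = torch.exp(-ce)
    focal = ((1.0 - pt) ** gamma * ce).mean()

    # Soft dice term, averaged over classes.
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

    return alpha * focal + beta * dice
```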
We also identify a problem with the evaluation criterion of the xView2 challenge dataset. Images from the same disaster sites, both pre- and post-disaster, are included in both the training set (and by default also any validation sets created from the training set) and the test set. Therefore the performance numbers quoted are not very meaningful for the common use case in which a disaster occurs at a site unseen during training. Currently, we have preliminary results which show that when test disaster sites are not present in the training set, performance on the unseen test site can fall by > 50%, with the damage classification performance being much more affected than the building localization task. These results demonstrate that generalization of networks trained in a supervised fashion to unseen sites is still far from solved and that perhaps supervised trained networks are not the final word on semantic segmentation for real world satellite applications.
[1] Semantic Segmentation on Cityscapes test, https://paperswithcode.com/sota/semantic-segmentation-on-cityscapes; Semantic Segmentation on PASCAL VOC 2012 test, https://paperswithcode.com/sota/semantic-segmentation-on-pascal-voc-2012
[2] Medical Image Segmentation on Medical Segmentation Decathlon, https://paperswithcode.com/sota/medical-image-segmentation-on-medical
[3] xView2: Assess Building Damage, Computer Vision for Building Damage Assessment using satellite imagery of natural disasters, https://www.xview2.org
[4] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation, MICCAI 2015
Multi-temporal SAR interferometry (InSAR) estimates the displacement time series of coherent radar scatterers. Current InSAR processing approaches often assume the same deformation model for all scatterers within the area of interest. However, this assumption is often wrong, and time series need to be approached individually [1], [2].
An individual, point-wise approach for large InSAR datasets is limited by high computational demands. An additional problem is posed by the presence of outliers and phase-unwrapping errors, which directly affect the estimation quality.
This work describes an algorithm for (i) estimating and selecting the best displacement model for individual point time series and (ii) detecting outlying measurements in the time series. The InSAR measurement quality of individual scatterers varies, which affects the estimation methods. Therefore, our approach uses a priori variances obtained by variance component estimation within geodetic InSAR processing.
We present two different approaches for outlier detection and correction in InSAR displacement time series. The first approach uses conventional statistical methods for individual point-wise outlier detection, such as median absolute deviation confidence intervals around the displacement model. The second approach uses machine learning principles to cluster points based on their displacement behaviour as well as the temporal occurrence of outliers. Using clusters instead of individual points allows a more efficient analysis of the average time series per cluster and consequent cluster-wise outlier detection, correction, and time-series filtering.
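The first, point-wise approach can be illustrated with the following minimal numpy sketch: a displacement model is fitted to the time series and samples falling outside a median-absolute-deviation interval around the model are flagged. The linear-plus-seasonal model and the 3-sigma threshold are illustrative choices, not the exact configuration of our processor.

```python
# Sketch of point-wise outlier detection in a displacement time series using
# median absolute deviation (MAD) intervals around a fitted displacement model.
import numpy as np

def detect_outliers(t, d, sigma_factor=3.0):
    # t: acquisition times (years), d: displacements (mm)
    # design matrix: offset, linear rate, annual sine/cosine terms
    A = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
    residuals = d - A @ coeffs
    mad = np.median(np.abs(residuals - np.median(residuals)))
    robust_std = 1.4826 * mad            # MAD-to-sigma factor for Gaussian residuals
    return np.abs(residuals) > sigma_factor * robust_std

t = np.arange(0, 3, 6 / 365.25)          # roughly 6-day sampling over 3 years
d = -5.0 * t + np.random.normal(0, 1, t.size)
d[[10, 50]] += 28.0                      # simulated unwrapping errors (about one fringe)
print(np.where(detect_outliers(t, d))[0])
```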
The two approaches have been applied to Sentinel-1 InSAR time series of a case study from Slovakia. The area of interest is affected by landslides with a characteristic non-linear progression of the movement. Our post-processing procedure parameterized the displacement time series despite the presence of non-linear motion, thus enabling reliable outlier detection and unwrapping error correction. The validation of the proposed approaches was performed on an existing network of corner reflectors located within the area of interest.
[1] Ling Chang and Ramon F. Hanssen, "A Probabilistic Approach for InSAR Time-Series Postprocessing", IEEE Trans. Geosci. Remote Sens., vol. 54, no. 1, Jan. 2016.
[2] Bas van de Kerkhof, Victor Pankratius, Ling Chang, Rob van Swol and Ramon F. Hanssen, "Individual Scatterer Model Learning for Satellite Interferometry", IEEE Trans. Geosci. Remote Sens., vol. 58, no. 2, Feb. 2020.
The necessity of monitoring and expanding the existing Marine Protected Areas has led to vast, high-resolution map products which, even if they feature high accuracy, lack information on the spatially explicit uncertainty of the habitat maps, a structural element in the agendas of policy makers and conservation managers for designation and field efforts. The target of this study is to fill the gaps in the visualization and quantification of the uncertainty of benthic habitat mapping by producing an end-to-end continuous layer using relevant training datasets.
More specifically, by applying a semi-automated function in the Google Earth Engine cloud environment we were able to estimate the spatially explicit uncertainty of a supervised benthic habitat classification product. In this study we explore and map the aleatoric uncertainty of multi-temporal, data-driven, per-pixel classification in four different case studies in Mozambique, Madagascar, the Bahamas, and Greece, regions known for their immense coastal ecological value. Aleatoric uncertainty, also known as data uncertainty, is an information-theoretic quantity that captures the random and irreducible noise in the data, treated here within a Bayesian framework.
We use the Sentinel-2 (S2) archive in order to investigate the adjustability and scalability of our uncertainty processor in the four aforementioned case studies. Specifically, we use biennial time series of S2 satellite images for each region of interest to produce a single, multi-band composite free of atmospheric and water-column related influences. Our methodology revolves around the classification process of this composite. By calculating the marginal and conditional distributions given the available training data, we can estimate the Expected Entropy, Mutual Information and Spatially Explicit Uncertainty of a maximum-likelihood model outcome:
- Expected Conditional Entropy: predicts the overall data uncertainty of the distribution P(x,y), with x the training dataset and y the model outcome.
- Mutual Information: estimates, in total and per classified class, the level of independence and therefore the relation between the y and x distributions.
- Spatially Explicit Uncertainty: a per-pixel estimate of the uncertainty of the classification (see the sketch after this list).
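As an illustration of how these three quantities relate, the following numpy sketch derives them from per-pixel class probabilities of a classifier. It is a conceptual analogue only, not our Google Earth Engine implementation, and the Dirichlet-sampled probabilities are placeholders.

```python
# Illustrative computation of per-pixel entropy, expected conditional entropy
# and mutual information from posterior class probabilities (placeholder data).
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=axis)

# probs: (rows, cols, n_classes) posterior class probabilities per pixel
probs = np.random.dirichlet(alpha=[2, 1, 1, 1], size=(100, 100))

spatial_uncertainty = entropy(probs)                   # per-pixel (spatially explicit)
expected_cond_entropy = spatial_uncertainty.mean()     # E_x[H(y|x)] over the scene
marginal = probs.mean(axis=(0, 1))                     # p(y) marginalised over pixels
mutual_information = entropy(marginal) - expected_cond_entropy   # I(x; y)

print(expected_cond_entropy, mutual_information)
```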
The aim of implementing the presented workflow is to quantitatively identify and minimize the spatial residuals in large-scale coastal ecosystem accounting. Our results indicate regions and classes with high and low uncertainty that can either be used for a better selection of the training dataset or to identify, in an automated fashion, areas and habitats that are expected to feature misclassifications not highlighted by existing qualitative accuracy assessments. By doing so, we can streamline more confident, cost-effective, and targeted benthic habitat accounting and ecosystem service conservation monitoring, resulting in strengthened research and policies globally.
Droughts, heat-waves, and in particular their co-occurrences are among the most relevant climate extremes for both ecosystem functioning and human wellbeing. A deeper process understanding is needed to soon enable early prediction of the impacts of climate extremes. Earlier work has shown that vegetation responses to large-scale climate extreme events are highly heterogeneous, with critical thresholds varying according to vegetation type, event duration, pre-exposure, and ecosystem management. However, much of our current knowledge has been derived from coarse-scale downstream data products and hence remains rather anecdotal. We do not yet have a global overview of high-resolution signatures of climate extreme impacts on ecosystems. However, obtaining these signatures is a nontrivial problem, as multiple challenges remain not only in the detection of extreme event impacts across environmental conditions, but also in explaining the exact impact pathways. Extreme events may happen clustered in time or space, and interact with local environmental factors such as soil conditions. Explainable artificial intelligence (XAI) methods, applied to a wide collection of consistently sampled, high-resolution satellite-derived data cubes during extremes, should enable us to address this challenge. In the new ESA-funded project DeepExtremes we will work on this challenge and build on the data cube concepts developed in the Earth System Data Lab. The project adopts a nested approach of global extreme event detection and local impact exploration and prediction by comparing a wide range of XAI methods. Our aim is to shed light on the question of how climate extremes affect ecosystems globally and in near-real time. In this presentation we describe the project implementation strategy and methodological challenges, and invite the remote sensing and XAI communities to join us in addressing one of the most pressing environmental challenges of the coming decades.
Weather forecasts at high spatio-temporal resolution are of great relevance for industry and society. However, contemporary global NWP models deploy grids with a spacing of about 10 km which is too coarse to capture relevant variability in the presence of complex topography. To overcome the limitations of coarse-grained model output, statistical downscaling with deep neural networks is attaining increasing attention.
In this study, a powerful generative adversarial network (GAN) for downscaling the 2m temperature is presented. The generator of the GAN model is built upon a U-Net architecture and is furthermore equipped with a recurrent layer to obtain a temporally coherent downscaling product. As an exemplary case study, coarsened 2m temperature fields from the ERA5 reanalysis dataset are downscaled to the same horizontal resolution (0.1°) as the Integrated Forecasting System (IFS) model which runs operationally at the European Centre for Medium-Range Weather Forecasts (ECMWF). We choose Central Europe including the Alps as a suitable target region for our downscaling experiment.
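For readers unfamiliar with this type of generator, the following PyTorch sketch shows one possible way to combine a U-Net-style encoder/decoder with a recurrent bottleneck for temporally coherent output. The layer sizes, the GRUCell recurrence, the single skip connection and the assumption that coarse inputs are first remapped to the target grid are all illustrative simplifications, not the architecture used in this study.

```python
# Minimal sketch: shared conv encoder/decoder with a GRU bottleneck carried over time.
import torch
import torch.nn as nn

class RecurrentUNetGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=8, bottleneck_hw=(8, 8)):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(
            nn.Conv2d(base, 2 * base, 3, stride=2, padding=1), nn.ReLU())
        feat = 2 * base * bottleneck_hw[0] * bottleneck_hw[1]
        self.gru = nn.GRUCell(feat, feat)         # temporal recurrence at the bottleneck
        self.up = nn.Sequential(
            nn.ConvTranspose2d(2 * base, base, 4, stride=2, padding=1), nn.ReLU())
        self.head = nn.Conv2d(2 * base, out_ch, 3, padding=1)
        self.bottleneck_hw = bottleneck_hw

    def forward(self, x):                         # x: (B, T, C, 16, 16)
        b, t, _, _, _ = x.shape
        hidden, outputs = None, []
        for i in range(t):
            e = self.enc(x[:, i])                 # (B, base, 16, 16)
            z = self.down(e)                      # (B, 2*base, 8, 8)
            hidden = self.gru(z.flatten(1), hidden)
            z = hidden.view(b, -1, *self.bottleneck_hw)
            d = self.up(z)                        # (B, base, 16, 16)
            outputs.append(self.head(torch.cat([d, e], dim=1)))   # skip connection
        return torch.stack(outputs, dim=1)        # (B, T, out_ch, 16, 16)

# coarse fields assumed to be remapped to the 0.1 degree target grid beforehand
fine = RecurrentUNetGenerator()(torch.randn(2, 4, 1, 16, 16))
```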
Our GAN model is evaluated with several metrics which measure the error at grid-point level as well as the quality of the downscaled product in terms of spatial variability and the produced probability distribution function. Furthermore, we demonstrate how different input quantities help the model to create an improved downscaling product. These quantities comprise dynamic variables such as wind and temperature on different pressure levels, but also static fields such as the surface elevation and the land-sea mask. Incorporating the selected input variables ensures that our neural network for downscaling is capable of capturing challenging situations such as temperature inversions over complex terrain.
The results motivate further development of the deep neural network including a further increase in the spatial resolution of the target product as well as applications to other meteorological variables such as wind or precipitation.
It has been widely agreed since the 1970s that Earth Observation (EO) data are key to understanding human activity and changes of the Earth. However, two trends today are forcing us to rethink the use of EO to tackle new challenges:
- Georeferenced data sources, data quantity and data quality keep increasing, allowing global and regular Earth coverage;
- AI and cloud storage allow swift fusion, analysis and dissemination of these data on online platforms.
Combined, these two trends generate various reliable indicators. Once fused, they will allow the anticipation of future humanitarian, social, economic and sanitary crises, the preparation of adequate action plans to prevent them from happening, and the provision of relief and support should they occur. Satellite imagery, 3D simulation, image analysis, mapping, georeferenced public and private data… we now have enough tools to give the Earth a Digital Twin, and this is no longer science fiction.
Airbus and Dassault Systèmes have joined forces to pursue this ambition, focusing on cities. The imagery products from the Airbus constellation of satellites will be used with the simulation tools from Dassault Systèmes. The project aims at automatically building 3D digital models of cities, simulating their entire environment and using these models as a baseline to digitise impactful events. It will cover the complete value chain of these 3D mapping and analysis services: from data collection, 3D production models, simulation environment software ingestion, event and analysis models, to dissemination for studies. One of the specifics of the project is to tackle information from both a global perspective and small-scale details (0.1 m, 0.05 m…) in order to capture the impacts of urbanism on the environment, people's health, the economy and security.
To achieve a significant leap forward, the 3D environment used for the simulations will need to be more precise, be quickly generated, ready-to-use in a simulation environment, and made easily accessible to our customers, hence the following objectives:
• Increase the average location accuracy of the 3D models from 5-8 m to 3-4 m everywhere in the world, and allow GPS reference points for use cases where submetric precision is required,
• Produce 3D models based on archive imagery (emancipating ourselves from the need to task satellites when time sensitivity is high),
• Transform the current representation of 3D models (single canopy layer including ground, trees and man-made objects) to a model where we can isolate the ground and each 3D object,
• Ensure these 3D models can be ingested into the relevant simulation environments,
• Give rapid access to the 3D model database and the capacity to order them through our OneAtlas platform.
On the simulation side, the main challenges revolve around the need to develop new and robust methodologies for each use case (selection of relevant physical parameters, treatment of the 3D surface, hardware and software needs, overall quality expectation, etc.) and automate as many tasks as possible to reduce lead time.
There is a large variety of simulation domains such as aerodynamics, electromagnetism, hydrodynamics, fluid-structure interaction, passive scalars (pollutants, pathogens, radiologic threat agents…) or even energy performance in an urban area. Applications include the construction and infrastructure industries, planning, energy, security and defence. The simulations and results will be available via the 3DExpérience platform, as well as Airbus OneAtlas.
Such an approach, combining high-resolution EO and future-proof simulation techniques, is unique on the market and takes digital twins beyond the state of the art.
In a scenario of water scarcity, sound irrigation management is needed while increasing productivity in an efficient manner. Therefore, the development of new technological tools capable of helping farmers to carry out precision irrigation is essential. Irrigation scheduling is not only based on estimates of crop evapotranspiration. It is also important to know the crop water status, the water allocation throughout the growing season, the crop responses to water stress and their effect on yield and quality, and the weather forecasts. At the same time, the tool should interoperate between different sources of data, the cloud which hosts the model, the farmer and the irrigation programmer.
IrriDesk® is an automatic DSS which combines digital twin and IoT technologies. IrriDesk closes the irrigation control loop autonomously, on a daily basis, importing sensor and remote sensing data and sending updated prescriptions to irrigation controllers. Simulations of the whole irrigation season are made by a digital twin to provide variable irrigation prescriptions compliant with site-specific strategies. In particular, the model assimilates estimates of the biophysical parameters of the vegetation (FAPAR) from Sentinel-2. In future versions, IrriDesk® will also assimilate estimates of crop evapotranspiration obtained from surface energy balance models using Sentinel-2 and Sentinel-3 imagery (Sentinels for Evapotranspiration, SEN4ET). This study shows the results of a case study carried out in a commercial vineyard of 7.3 ha. The vineyard had three different irrigation sectors with different water requirements. A regulated deficit irrigation strategy, which consisted of stressing vines during pre-veraison, was adopted using IrriDesk®. Results showed the potential of this tool to conduct precision irrigation, since the amount of water automatically applied in each irrigation sector was able to maintain the pre-established thresholds of crop water status throughout the growing season, as well as to improve water productivity and save the farmer time. The total amount of water applied in each irrigation sector ranged from 175 to 195 mm. This amount of water was significantly lower in comparison to previous years and surrounding vineyards. An analysis of the spatio-temporal variability of crop evapotranspiration was also conducted using the SEN4ET approach and the values were compared with those simulated by IrriDesk®.
Over the last couple of decades Arctic sea ice has experienced a dramatic shrinking and thinning. The loss of thick multi-year ice in particular means that the ice is weaker and therefore more easily broken up by strong winds or ocean currents. As a consequence, extreme sea-ice breakup events are occurring more frequently in recent years which has important consequences for air-sea exchange, sea ice production and Arctic Ocean properties in general. Despite having potentially large impacts on Arctic climate, such breakup events are generally not captured in current sea-ice and climate models, thus presenting a critical gap in our understanding of future high-latitude climate.
Here we present simulations using the next generation sea-ice model – neXtSIM – investigating the driving mechanisms behind a large breakup event that took place in the Beaufort Sea during mid-winter in 2013. These simulations are the first to successfully reproduce the timing, location and propagation of sea-ice leads associated with a storm-induced breakup. We found that the sea ice rheology and horizontal resolution of the atmospheric forcing are both crucial in accurately simulating such breakup events. By performing additional sensitivity experiments where the ice thickness was artificially reduced we further suggest that large breakup events will likely become more frequent as Arctic sea ice continues to thin. Finally, we show that large breakup events during winter have a significant impact on ice growth through enhanced air-sea fluxes, and increased drift speeds which increase the export of old, thick ice out of the Beaufort Sea. Overall, this results in a thinner and weaker ice cover that may precondition earlier breakup in spring and accelerate sea-ice loss.
Radiative Transfer Models (RTMs) with spatially explicit 3D forest structures can simulate highly realistic Earth Observation data at large spatial scales (10s to 100s of m). These RTMs can help understand forest ecosystem processes and their interaction with the Earth system, as well as make much more effective use of new Earth Observation data. However, explicitly reconstructing 3D forest models at large scale (> 1 ha) requires a tremendous amount of 3D structural, spectral and other information. It is time- and labour-consuming, and sometimes impossible, to conduct such reconstruction work at large scale. Instead, reconstructing the forest by using a "tree library" is a more practical and feasible method. Here, this library is made up of 3D trees with different characterizations (e.g., tree species, height, and diameter at breast height) that are a representative sample of the whole forest stand. This library of tree forms is used to reconstruct a full forest scene at a large scale (e.g., 100 x 100 m). By using this method, the spatial scale of the reconstructed forest scene can be easily increased to match the intended applications (e.g., understanding forest radiative transfer processes, retrieval algorithm development, sensor design, or remote sensing calibration and validation activities).
In this study, we investigated the optimal way to build such a tree library using different reconstruction ratios. We evaluated the accuracy of different scenarios by comparing simulated drone data with actual drone remote sensing images. More specifically, trees were clustered into different groups according to their species, height, and diameter at breast height. The number of these groups was determined by the reconstruction ratio: the number of groups equals the number of trees multiplied by the reconstruction ratio, which ranges from 0 to 1. For each group, a random tree was selected, and the 3D models of the other trees in this group were replaced by this selected tree in the simulation. We evaluated the accuracy of the new forest scenes by using the Bidirectional Reflectance Factor (BRF, top of canopy). The simulated BRFs of the forest scenes built with different reconstruction ratios were compared with the drone data to evaluate their accuracy. We conducted the experiments at hyperspectral resolution (32 wavebands from 520.44 nm to 885.86 nm). We show that using new 3D measurement technology and this "tree library" method it is possible to reconstruct forest scenes with cm-scale accuracy at large spatial scale (> 1 ha), and to use these as the basis of new RTM simulation tools.
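The grouping and representative-selection step can be sketched as follows. This Python example uses scikit-learn KMeans, splits by species before clustering the structural attributes, and uses placeholder column names; these are assumptions for illustration, not the exact clustering scheme used in the study.

```python
# Sketch of the "tree library" construction: group trees by structural attributes,
# set the number of groups via the reconstruction ratio, and let one randomly
# chosen tree per group stand in for all of its members.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def build_tree_library(trees: pd.DataFrame, ratio: float, seed: int = 0):
    rng = np.random.default_rng(seed)
    library = {}
    for species, group in trees.groupby("species"):
        n_groups = max(1, round(ratio * len(group)))
        feats = StandardScaler().fit_transform(group[["height", "dbh"]])
        labels = KMeans(n_clusters=n_groups, n_init=10, random_state=seed).fit_predict(feats)
        for k in range(n_groups):
            members = group.index[labels == k]
            representative = rng.choice(members)      # 3D model reused for all members
            library[(species, k)] = {"representative": representative,
                                     "replaces": list(members)}
    return library

trees = pd.DataFrame({"species": ["beech"] * 60 + ["spruce"] * 40,
                      "height": np.random.uniform(10, 35, 100),
                      "dbh": np.random.uniform(0.1, 0.8, 100)})
lib = build_tree_library(trees, ratio=0.1)   # keep ~10% of trees as explicit 3D models
```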
Forests are an integral part of the world's ecosystem; afforestation and deforestation are main drivers of climate change, and therefore forest monitoring is vital. Forest monitoring involves remotely sensed data, such as Light Detection and Ranging (LiDAR), to capture complex forest structure. Natural environments like forests are complex and add challenges in communication. Conventionally, forest monitoring data have been analysed on 2D desktop computers, but there is a fundamental shift in this communication due to recent developments in computing and 3D modelling. With the help of game engines and the retrieved forest monitoring data, digital twins can be created.
LiDAR is used to determine exact locations and dimensions of objects. The combination of LiDAR and immersive technologies can be used for stand assessments and measurements and makes them experiential. Further, georeferenced 360-degree immersive imagery and videography complements the abstract LiDAR data with a realistic experience as naturally perceived by the human eye. A workbench provides tools to manipulate the data, including scaling and rotation, but also measurement tools within the immersive virtual reality experience, including distance for tree heights, a plane for calculating the diameter at breast height, and volume to approximate the biomass. Satellite imagery with terrain elevation data provides an overview of the research site.
We intend to present the findings of our ongoing research activities in virtual reality forest monitoring and to answer the questions of whether meshed LiDAR data measured in virtual reality are as accurate as conventionally measured point clouds, and whether the application helps experts in visualizing and monitoring forests. This is determined with a heuristic evaluation and a usability study. We collected our data in the Eifel national park in western Germany with terrestrial, mobile and drone-mounted LiDAR, a GoPro MAX mounted on a tripod and on drones, and a microphone. This beech-, Norway spruce- and oak-dominated forest has been declared to become a native forest with only minimal human interaction.
The research investigates the benefits and limitations of the single elements of the application, such as the digital terrain models and map, the terrestrial, mobile and airborne LiDAR data, the 360-degree immersive media, the measurement tools, and the forest sounds. An iterative process ensures implementation of feedback from experts. The research further includes the exploration of tools, such as using the PlantNet API's deep learning model to determine tree species from screenshots of the 360-degree imagery within the immersion.
Inverse models are a vital tool for inferring the state of ice sheets based on remote sensing data. Remotely sensed observations of ice surface velocity can be combined with a numerical model of ice flow to reconstruct the stress and deformation fields inside the ice and to infer the basal drag and/or englacial rheology. However, velocity products based on remote sensing contain both random and correlated errors, often including artifacts aligned with satellite orbits or particular latitude bands. Here, we use a higher-order inverse model within the Ice Sheet System Model (ISSM; Larour et al., 2012) to assimilate satellite observations of ice surface velocity in the Filchner-Ronne sector of Antarctica in order to infer basal drag underneath the grounded ice and englacial rheology in the floating shelf ice. We use multiple velocity products to constrain our model, including MEaSUREs_v2 (Rignot et al., 2017), an updated version of the MEaSUREs dataset that incorporates estimates from SAR interferometry (Mouginot et al., 2019), and a new mosaic for this sector that combines data from Sentinel-1, Landsat-8, and TerraSAR-X (Hofstede et al., 2021). For each velocity source, we perform an independent L-curve analysis to determine the optimal degree of spatial smoothing (regularization) needed to fit the observations without overfitting to noise. Additionally, we test the sensitivity of the inverted results to increased noise levels in the input data, using both random normally distributed noise and correlated noise constructed to resemble the satellite-orbit patches often found in ice velocity products. Using the L-curve analysis, we evaluate which remotely sensed velocity product permits the highest-resolution reconstruction of basal drag or englacial rheology. We find that correlated errors and artifacts in the velocity data produce corresponding artifacts in the inverse model results, particularly in the floating part where the inverted rheology estimate is highly sensitive to spatial gradients of the observed velocity field. The inversion for basal drag in the grounded ice displays less sensitivity to artifacts in the input data, because the drag inversion is less dependent on spatial gradients of the observed velocities. Minimizing the rheology artifacts in the floating shelf ice requires increased regularization of the inversion, thus reducing the spatial resolution of the inversion result. Because of the large spatial scale of the artifacts present in the velocity products, it is impossible to completely remove the corresponding artifacts in the inversion result without imposing such a degree of regularization that real structure (such as shear margins and rifts) is lost. By contrast, the inversion results are quite robust to uncorrelated errors in the input data. We suggest that future attempts to construct estimates of the ice surface velocity from remote sensing data should take care to remove correlated errors and “stripes” from their final product, and that inversion results for englacial rheology are particularly sensitive to artifacts that appear in the gradients of the observed velocity.
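The L-curve idea used above can be illustrated with a small linear toy problem: the data misfit and the model roughness are traced out over a range of regularization strengths and the corner of the resulting curve is selected. The ISSM inversions themselves are nonlinear and far larger, so the following numpy sketch is only a conceptual illustration of the corner-selection step.

```python
# Toy Tikhonov inversion with an L-curve: misfit vs. roughness over lambda,
# with the corner picked as the point of maximum curvature.
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(80, 50))                 # toy forward operator
m_true = np.zeros(50); m_true[10:20] = 1.0
d = G @ m_true + rng.normal(0, 0.5, 80)       # noisy "observed velocities"

lambdas = np.logspace(-4, 2, 30)
L = np.diff(np.eye(50), axis=0)               # first-difference smoothing operator
misfit, roughness = [], []
for lam in lambdas:
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    misfit.append(np.linalg.norm(G @ m - d))
    roughness.append(np.linalg.norm(L @ m))

# corner of the log-log L-curve = point of maximum curvature
x, y = np.log(misfit), np.log(roughness)
dx, dy = np.gradient(x), np.gradient(y)
ddx, ddy = np.gradient(dx), np.gradient(dy)
curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
print("selected lambda:", lambdas[np.argmax(curvature)])
```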
References:
Hofstede, C., Beyer, S., Corr, H., Eisen, O., Hattermann, T., Helm, V., Neckel, N., Smith, E. C., Steinhage, D., Zeising, O., and Humbert, A.: Evidence for a grounding line fan at the onset of a basal channel under the ice shelf of Support Force Glacier, Antarctica, revealed by reflection seismics, The Cryosphere, 15, 1517–1535, https://doi.org/10.5194/tc-15-1517-2021, 2021.
E. Larour, H. Seroussi, M. Morlighem, and E. Rignot (2012), Continental scale, high order, high spatial resolution, ice sheet modeling using the Ice Sheet System Model, J. Geophys. Res., 117, F01022, doi:10.1029/2011JF002140.
Mouginot, J., Rignot, E., & Scheuchl, B. (2019). Continent-wide, interferometric SAR phase, mapping of Antarctic ice velocity. Geophysical Research Letters, 46, 9710– 9718. https://doi.org/10.1029/2019GL083826
Rignot, E., J. Mouginot, and B. Scheuchl. 2011. Ice Flow of the Antarctic Ice Sheet, Science. 333. 1427-1430. https://doi.org/10.1126/science.1208336
Rignot, E., J. Mouginot, and B. Scheuchl. 2017. MEaSUREs InSAR-Based Antarctica Ice Velocity Map, Version 2. Boulder, Colorado USA. NASA National Snow and Ice Data Center Distributed Active Archive Center. doi: https://doi.org/10.5067/D7GK8F5J8M8R
Remotely sensed Earth observations have many missing values. The abundance and often complex patterns of these missing values can be a barrier for combining different observational datasets and may cause biased estimates of statistical moments. To overcome this, missing values are regularly infilled with estimates through univariate gap-filling techniques such as spatio-temporal interpolation. However, these mostly ignore valuable information that may be present in other dependent observed variables.
Recently, we proposed CLIMFILL (CLIMate data gap-FILL, in review, https://gmd.copernicus.org/preprints/gmd-2021-164/#discussion), a multivariate gap-filling procedure that combines state-of-the-art kriging interpolation with a statistical imputation method which is designed to account for dependence across variables. The estimates for the missing values are therefore informed by knowledge of neighboring points, temporal processes, and closely related observations of other relevant variables.
In this study, CLIMFILL is tested using gap-free ERA5 reanalysis data of ground temperature, surface layer soil moisture, precipitation, and terrestrial water storage to represent central interactions between soil moisture and climate. These observations were matched with corresponding remote sensing observations and masked where the observations have missing values. CLIMFILL successfully recovers the dependence structure among the variables across all land cover types and altitudes, thereby enabling subsequent mechanistic interpretations. Soil moisture-temperature feedback, which is underestimated in high latitude regions due to sparse satellite coverage, is adequately represented in the multivariate gap-filling. Univariate performance metrics such as correlation and bias are improved compared to spatiotemporal interpolation gap-fill for a wide range of missing values and missingness patterns. Especially estimates for surface layer soil moisture are improved by taking into account the multivariate dependence structure of the data.
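The core multivariate-imputation idea, gaps in one variable informed by the other observed variables, can be illustrated with scikit-learn's IterativeImputer on a toy table of correlated land variables. This is only an analogue of the imputation step, not the CLIMFILL code, and the synthetic variables and missingness pattern are placeholders.

```python
# Illustration of multivariate imputation: each variable's gaps are iteratively
# estimated from the other variables, preserving cross-variable dependence.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
ground_temp = rng.normal(290, 5, 1000)
soil_moisture = 0.6 - 0.01 * (ground_temp - 290) + rng.normal(0, 0.02, 1000)
precip = np.clip(50 * soil_moisture + rng.normal(0, 2, 1000), 0, None)
X = np.column_stack([ground_temp, soil_moisture, precip])   # one row per (pixel, time)

X_gappy = X.copy()
X_gappy[rng.random(X.shape) < 0.3] = np.nan      # 30% missing (real patterns differ per variable)

X_filled = IterativeImputer(max_iter=10, random_state=0).fit_transform(X_gappy)
print(np.corrcoef(X_filled[:, 0], X_filled[:, 1])[0, 1])     # dependence largely preserved
```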
A natural next step is to apply the developed framework CLIMFILL to a suite of remotely sensed Earth observations relevant for land water hydrology to mutually fill their inherent gaps. The framework is generalisable to all kinds of gridded Earth observations and is therefore highly relevant for the concept of a Digital Twin Earth, as missing values are infilled by exploiting the dependence structure among independently observed Earth observations.
The ice sheets of Greenland and Antarctica have been melting since at least 1990, suffering their highest mass loss rate between 2010 and 2019. With mass loss predicted to continue for at least several decades, even if global temperatures stabilize (IPCC, Sixth Assessment Report), mass loss from the ice sheets is predicted to be the prevailing contribution to global sea-level rise in coming years.
Supraglacial hydrology is the interconnected system of lakes and channels on the surface of ice sheets. This surface water is believed to play a substantial role in ice sheet mass balance by modulating the flow of grounded ice and weakening floating ice shelves to the point of collapse. Mapping the distributions and life cycle of such hydrological features is important in understanding their present and future contribution to global sea-level rise.
Using optical satellite imagery, supraglacial hydrological features can be easily identified by eye. However, given that there are many thousands of these features (~76,000 features identified across Antarctica in January 2017, for example), and they appear in many thousands of satellite images, accurate, automated approaches to mapping these features in such images are urgently needed. The standard approach to map these features often combines spectral thresholding (Normalised Difference Water Index, NDWI) with time-consuming manual corrections and quality control processes. Given the volume of the data now available, however, methods such as those that require manual post-processing are not feasible for repeat monitoring of surface hydrology at a continental scale. Here, we present results from ESA’s Polar+ 4D Greenland, 4D Antarctica and Digital Twin Antarctica projects, which increase the accuracy of supraglacial lake and channel delineation using Sentinel-2 and Landsat-7/8 imagery, while reducing the need for manual intervention. We use Machine Learning approaches, including a Random Forest algorithm trained to classify surface water from non-water features in a pixel-based classification.
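The pixel-based workflow described above can be sketched in a few lines: an NDWI band is derived from green and near-infrared reflectance and stacked with other bands as features for a Random Forest water/non-water classifier. The band set, thresholds and the synthetic labels below are placeholders for illustration, not our trained model.

```python
# Sketch of NDWI feature construction plus Random Forest pixel classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ndwi(green, nir, eps=1e-6):
    return (green - nir) / (green + nir + eps)

rng = np.random.default_rng(0)
green = rng.uniform(0.01, 0.4, (200, 200))            # toy reflectance bands
nir = rng.uniform(0.01, 0.4, (200, 200))
blue = rng.uniform(0.01, 0.4, (200, 200))
labels = (ndwi(green, nir) > 0.2).astype(int)          # placeholder "manually digitised" labels

features = np.stack([blue, green, nir, ndwi(green, nir)], axis=-1).reshape(-1, 4)
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(features, labels.ravel())                      # train on labelled pixels
water_mask = clf.predict(features).reshape(200, 200)   # per-pixel water classification
```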
Appropriate Machine Learning algorithms require comprehensive, accurate datasets. Because of a lack of in situ data, one of the few options available is to generate such datasets from satellite imagery. We, therefore, generate these datasets to carry out rigorous, systematic testing of the Machine Learning algorithm. Our methods are trained and validated over varied spatial and temporal (seasonally: within the melt season, and yearly: between melt seasons) conditions using data covering a range of glaciological and climatological environments. Our approach, designed for easy, efficient rollout over multiple melt seasons, uses optical satellite imagery alone. The workflow, developed under Google Cloud Platform, which hosts the entire archive of Sentinel-2 and Landsat-8 data, allows for large-scale application over the Greenland and Antarctic ice sheets and is intended for repeated use throughout future melt seasons. Ice sheets, a crucial component of the Earth system, impact global sea level, ocean circulation and biogeochemical processes. This study shows one example of how Machine Learning can automate historically user-intensive satellite processing pipelines within a Digital Twin, allowing for greater understanding and data-driven discovery of ice sheet processes.
Constantly fed with Earth observation data, combined with in situ measurements, artificial intelligence, and numerical simulations, a Digital Twin of the Earth will help visualise the state of the planet, and enable what-if scenarios supporting decision making. In September 2020, ESA began a number of precursor projects with the aim of prototyping digital twins of the different key parts of the Earth’s system including the Antarctic Ice Sheet system.
The Antarctic Ice Sheet is a major reservoir of freshwater with a huge potential to contribute to future sea level rise, and it has a large impact on atmospheric circulation, oceanic circulation and bio-chemical activity. Digital Twin Antarctica brings together Earth Observation, models and Artificial Intelligence to tackle some of the processes responsible for the surface and basal melting currently taking place, and its impact.
Here we propose a live demonstration of the Digital Twin of Antarctica prototype via an immersive 4D virtual world allowing one to interactively navigate the Antarctica dataset through space and time, and to explore the synergies between observations, numerical simulations, and AI. Case studies will illustrate how assimilation of the surface observation of melt can help to improve regional climate models, how combining satellite observation and physics leads to detailed quantification of melt rates under the ice sheet and ice shelves, and how it helps predict pathways and fluxes of sub-glacial meltwater under the ice sheet as well as its interaction with the ocean as it emerges from under the ice sheet and creates buoyant meltwater plumes.
In addition, the interactive demonstration will show how assimilating models with Earth Observation data in a service orientated architecture with underlying data lake and orchestration framework is paramount to enabling the calculation and exploration of scenarios in an interactive, timely, transparent and repeatable manner.
AI4EO: from physics guided paradigms to quantum machine learning
Earth Observation (EO) Data Intelligence addresses the entire value chain: data processing to extract information, information analysis to gather knowledge, and knowledge transformation into value. EO technologies have immensely evolved: state-of-the-art sensors deliver a broad variety of images and have made considerable progress in spatial and radiometric resolution, target acquisition strategies, imaging modes, geographical coverage and data rates. Generally, imaging sensors generate an isomorphic representation of the observed scene. This is not the case for EO: the observations are a doppelgänger of the scattered field, an indirect signature of the imaged object. EO images are instrument records, i.e. in addition to the spatial information they sense physical parameters, and they mainly sense outside of the visual spectrum. This positions EO image understanding, the utmost challenge of Big EO Data Science, as a new and particular challenge for Machine Learning (ML) and Artificial Intelligence (AI). The presentation introduces specific solutions for EO Data Intelligence, such as methods for physically meaningful feature extraction to enable high-accuracy characterization of any structure in large volumes of EO images. The theoretical background is introduced, discussing the advancement of the paradigms from Bayesian inference and machine learning to the methods of Deep Learning and Quantum Machine Learning. The applications are demonstrated for: alleviation of atmospheric effects and retrieval of Sentinel-2 data, enhancing opportunistic bi-static images with Sentinel-1, explainable data mining and discovery of physical scattering properties for SAR observations, and natural embedding of the PolSAR Stokes parameters in a gate-based quantum computer.
Coca Neagoe, M. Coca, C. Vaduva and M. Datcu, "Cross-Bands Information Transfer to Offset Ambiguities and Atmospheric Phenomena for Multispectral Data Visualization," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 11297-11310, 2021
U. Chaudhuri, S. Dey, M. Datcu, B. Banerjee and A. Bhattacharya, "Interband Retrieval and Classification Using the Multilabeled Sentinel-2 BigEarthNet Archive," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 9884-9898, 2021
A. Focsa, A. Anghel and M. Datcu, "A Compressive-Sensing Approach for Opportunistic Bistatic SAR Imaging Enhancement by Harnessing Sparse Multiaperture Data," in IEEE Transactions on Geoscience and Remote Sensing, early access
C. Karmakar, C. O. Dumitru, G. Schwarz and M. Datcu, "Feature-Free Explainable Data Mining in SAR Images Using Latent Dirichlet Allocation," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 676-689, 2021
Z. Huang, M. Datcu, Z. Pan, X. Qiu and B. Lei, "HDEC-TFA: An Unsupervised Learning Approach for Discovering Physical Scattering Properties of Single-Polarized SAR Image," in IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 4, pp. 3054-3071, April 2021
S. Otgonbaatar and M. Datcu, "Natural Embedding of the Stokes Parameters of Polarimetric Synthetic Aperture Radar Images in a Gate-Based Quantum Computer," in IEEE Transactions on Geoscience and Remote Sensing, early access
Digital twins are becoming an important tool in the validation process of satellite products. Many downstream satellite products are created based on a complex chain of processing procedures and modelling techniques. Vegetation biophysical products are a classic example of this, particularly in forests where the 3D arrangement of canopy constituents is heterogeneous and its variability across different forest types is high. This means that satellite product algorithms applied to forests employ a range of assumptions about the forest constituents and illumination characteristics in order to best estimate quantities such as the fraction of absorbed photosynthetically active radiation (fAPAR) and leaf area index (LAI). This leads to a definition difference between the quantity being (assumed to be) measured by the satellite sensor and that which is actually measured on the ground using in situ measurement techniques (which might also have their own assumptions). Simulation studies using digital twins offer a way to overcome these issues.
This contribution describes an fAPAR validation exercise of the Sentinel-2 fAPAR product over Wytham Woods (UK) for 2018. It combines in situ measurements of fAPAR with correction factors derived from radiative transfer (RT) simulations on a digital twin of Wytham Woods. The digital twin (which is open source) is based on datasets collected during the summer and winter of 2015/2016 and represents a 1 ha area of temperate deciduous forest. The leaves and stems are derived from LiDAR point clouds collected every 20 metres throughout the forest and combined with spectral measurements of the respective canopy and understory components (bark, leaves, soil, etc.). This model represents a useful surrogate with which to test canopy configurations and forest structure assumptions that are impossible at the real study site. As an example, in certain satellite fAPAR products it is assumed that only photosynthesising elements are present in the canopy (green fAPAR). To analyse a situation such as this, in the model we can remove the stems and branches from the RT simulations and compare that to simulations on the full model to assess the differences.
Combined with this, we use a PAR sensor network located at Wytham Woods to derive fAPAR. Each sensor in this network is calibrated and produces results that have a well characterised uncertainty and are traceable to SI. Using the Wytham Woods digital twin we are able to simulate what a reference fAPAR value would be under a specific set of illumination conditions, since it is possible to track the fate of each photon/ray in the scene. As a result, we have a form of traceability, defined as virtual traceability, to this reference.
Using the measurement and modelling components discussed above, we were able to derive correction factors for the satellite and in situ measurements (relative to the reference value), allowing the in situ and satellite values to be compared through a common intermediary. The results show that the correction factors reduced the deviation between the in situ and satellite-derived fAPAR. Since the digital twin is representative of the summer months (leaf-on), the deviations (post-correction) are largest in the winter, with a quick decrease in the spring (with leaf production) and a slow increase from July to October as senescence takes place.
This work provides a highly detailed look at a single forest location and single satellite product. Given the large biases found, and corrected for, it suggests that future work is required to understand how these biases (and subsequently the correction factors) change in space (e.g. for different biomes, etc.), time and for different satellite products. This means that the fAPAR (and other vegetation related satellite products) community should create many more forest digital twins to facilitate this. This is a top priority if we are to reach the GCOS requirements for fAPAR (measurement uncertainty of < (0.05 or 10%)) and, more importantly, if downstream users of these products are to trust them.
Recent breakthroughs in building quantum computers with a small number of quantum bits (qubits) and in applying Machine Learning (ML) techniques to annotated datasets have led to quantum Machine Learning (qML) and practical Quantum Algorithms (QAs) being considered a promising disruptive technology for a particular class of supervised learning methods and optimization problems. There is growing interest in applying qML networks and QAs to classical data and problems. However, qML networks and QAs pose several new challenges, for instance how to map classical data to qubits (quantum data) given the limited number of qubits of current quantum computers, or how to use the specificity of qubits to obtain advantages over non-quantum computing techniques, while ubiquitous data and problems in practical domains have a classical nature.
Furthermore, quantum computers emerge as a paradigm shift for tackling practical (intractable) Earth observation problems from a new viewpoint, with the promise of speeding up a number of algorithms for some practical problems. In recent years, there has been growing interest in employing quantum computers to assist machine learning (ML) techniques, as well as in using conventional computers to support quantum computers. Moreover, researchers in both academia and industry are still investigating qML approaches and QAs for discovering patterns, or for speeding up some ML techniques for finding highly informative patterns in big data.
Remotely sensed images are used for Earth observation from both aircraft and satellite platforms. The images acquired by satellites are available in digital format and are characterised by their number of spectral bands, radiometric resolution, spatial resolution, etc. We performed the first exploratory studies applying qML and QAs to remotely sensed images and problems by using a D-Wave quantum annealer and gate-based quantum computers (IBM and Google quantum computers). Such quantum computers solve optimization problems and run ML methods by exploiting different mechanisms and techniques of quantum physics. Therefore, we present the differences between solving problems on a D-Wave quantum annealer and on a gate-based quantum computer, and how to program these two types of quantum computers to advance Earth observation methodologies, based on our experience, as well as the challenges encountered.
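As an illustration of the data-mapping challenge mentioned above, the following Qiskit sketch angle-encodes a handful of classical feature values (e.g. scaled band reflectances) into a small parameterized gate-based circuit. It is a generic example of one possible encoding, not the specific circuits used in the referenced studies.

```python
# Generic sketch of angle-encoding classical data into a parameterized circuit.
import numpy as np
from qiskit import QuantumCircuit

def encode_and_entangle(features, thetas):
    # features: classical values scaled to [0, pi]; thetas: trainable parameters
    n = len(features)
    qc = QuantumCircuit(n)
    for q, x in enumerate(features):
        qc.ry(x, q)                 # data-encoding rotation
    for q in range(n - 1):
        qc.cx(q, q + 1)             # entangling layer
    for q, theta in enumerate(thetas):
        qc.rz(theta, q)             # variational layer, optimised classically
    qc.measure_all()
    return qc

pixels = np.array([0.2, 0.7, 0.4, 0.9]) * np.pi     # e.g. scaled band reflectances
circuit = encode_and_entangle(pixels, thetas=np.random.uniform(0, np.pi, 4))
print(circuit.draw())
```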
References:
[1] S. Otgonbaatar and M. Datcu, "Classification of Remote Sensing Images With Parameterized Quantum Gates," in IEEE Geoscience and Remote Sensing Letters, doi: 10.1109/LGRS.2021.3108014.
[2] S. Otgonbaatar and M. Datcu, "Natural Embedding of the Stokes Parameters of Polarimetric Synthetic Aperture Radar Images in a Gate-Based Quantum Computer," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3110056.
[3] S. Otgonbaatar and M. Datcu, "Quantum annealer for network flow minimization in InSAR images," EUSAR 2021; 13th European Conference on Synthetic Aperture Radar, 2021, pp. 1-4.
[4] S. Otgonbaatar and M. Datcu, "A Quantum Annealer for Subset Feature Selection and the Classification of Hyperspectral Images," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 7057-7065, 2021, doi: 10.1109/JSTARS.2021.3095377.
The measurement of hidden social and financial phenomena has traditionally relied upon a priori theories linking legal, measurable flows to shadow, unmeasurable flows through relationships between known patterns and distributions in social, financial and transactional data. Detecting these types of flows becomes problematic at a sub-country level, as many of the core indicators related to migration, demographics, income, and conflict are only reported at country or district level.
Recent work has shown the utility of machine learning techniques to improve the spatial resolution of many of these indicators by building relationships with satellite data and ground-based estimates. For example, gridded population and demographic estimates are available across Africa at 100 m resolution by combining building detections and social media information with regional-level demographics. Estimates of asset wealth and income inequality, and their change through time, have also been produced by drawing on features from both daytime and night-time imagery trained with street-level imagery. Connectivity, or the prediction of links between sources and sinks, which can be used to describe the flow of materials of value or the movement of people in response to external influences such as financial stress or conflict, has also been modelled using machine learning techniques. Recent work has demonstrated that these approaches offer advantages over traditional gravity- and radiation-based modelling in data-poor environments.
Many of these techniques are, however, targeted towards tame or structured policy problems where there is consensus among stakeholders and significant certainty around the facts and causality. In this project we consider understanding financial flows resulting from artisanal or small-scale gold mining across Ghana and Burkina Faso. Here the facts and causality are uncertain and, although there is consensus among stakeholders, the scale of the undertaking across the two countries means that there is significant debate around causality or indeed even what field data are available to help the understanding. For this type of problem, spatial data struggle to provide a definitive solution but can be used as an advocate to test theories and provide bounds on the drivers of behaviour and flows.
This research reports on the development of a socio-economic digital twin driven by satellite and spatial data from sources including Sentinel 1 and 2, harmonised night-time lights, SM2RAIN-CCI data, the Copernicus DEM, landcover and land use products, high resolution population density estimates and open street map data that are linked to estimates of mine expansion, income, trade, conflict, and demographics. The digital twin runs machine learning workflows within a Jupyter notebook that facilitate the spatial scaling of indicators and helps build an understanding of the spatial linkages, correlations, and uncertainties between socio-economic indicators across regions and countries. This tool is being used within a multidisciplinary research project to explore theories of mining expansion and linkages with conflict and financial flows as well as to help decision-makers target interviews and field data collection and explore the effects of potential policy changes.
At the heart of the UrbAIn project is the integration of different types of data in order to develop novel Digital Twin services that can be integrated into the daily functions of urban living: for both public authorities and citizens. Urban planning today is subject to numerous challenges: changes in demographics, urban-rural migration, rapid urbanisation, limited space, traffic, environmental degradation, pollution, and climate change are only some of the aspects that influence the planning and development of the future city. Creating digital data that can be visualized in virtual environments and supported by modelling tools makes it possible to support these processes and to create digital twins that are not only synchronized with the real world but can also be used to test alternate futures under different scenarios. Earth observations (EO) can provide important foundations for urban planning both on the ground and in the atmosphere.
The digital twin is a virtual construct of a city in digital space that can be visualized and manipulated. In order to achieve this, however, the associated real-world information and infrastructure must be available in the form of digital maps or models combined with dynamic real-time data generated by sensors across the city. This enables users to quickly record and evaluate current situations, as well as to simulate future measures and test their effects. Due to the heterogeneous, complex data and the large amounts of data, artificial intelligence (AI) algorithms are an important prerequisite for the implementation of digital urban twins. The first "AI revolution" also offers options for remote sensing in order to fully exploit the potential of the rapidly growing amounts of data. For the valorization of the spatial, temporal and spectral properties of remote sensing data, AI algorithms are particularly powerful because they offer the possibility of largely automated and scalable data evaluation, which is necessary in the age of big EO data. The prerequisites for this are extensive training data, development environments and cloud computing.
In the UrbAIn Project, supported through a grant from the German Federal Ministry for Economic Affairs and Energy, new EO and AI processes for evaluating, merging and displaying various data in the context of Digital Twins are being developed in order to make cities more livable and sustainable. Specifically, we will showcase our latest results related to the methods for the acquisition, processing and reproduction of spatial data in a public context, taking into account AI techniques and state of the art environmental sensors.
The ocean plays a crucial role in sustaining life on Earth: it regulates our climate, and its resources and ecosystem services contribute to our economy, health and wellbeing. The role of the ocean in addressing the challenges of future food and energy supply is increasingly recognized as part of the European Green Deal, as is the potential of ocean resources as raw material or inspiration for future innovation. Nowadays the ocean is exposed to pressures at both the anthropogenic level (transport, tourism, trade, migration) and the environmental level (climate change, ocean warming, salinization, extreme events), which implies a need for innovative and modern monitoring tools to identify threats, predict risks, implement early warning systems and provide advanced decision support systems based on observations and forecasts. Such tools should integrate available data from in-situ sensors and satellites to enhance the performance of high-resolution state-of-the-art models simulating ocean processes, and exploit data analytics tools to assess what-if scenarios.
The EC recently funded the ILIAD project, through the Horizon 2020 Research and Innovation Programme, which aims at developing, operating and demonstrating the ILIAD Digital Twin of the Ocean (DTO). ILIAD will develop an interoperable, data-intensive and cost-effective DTO, capitalizing on the explosion of new data provided by many different earth sources, modern computing infrastructure including the Internet of Things, social networking, big data, cloud computing and more. It will combine high-resolution modelling with real-time sensing of ocean parameters, advanced AI algorithms for forecasting of spatiotemporal events and pattern recognition. The DTO will consist of several real-time to near-real-time digital replicas of the ocean.
The current work presents ongoing and planned activities for a coastal pilot around Crete, Greece, to be demonstrated in the frame of the ILIAD project. The pilot will combine advanced, high-resolution forecasting services based on numerical hydrodynamic, sea state and particle tracking/oil spill models, enhanced by the integration of Sentinel data and in-situ observations from low-cost wave meters, drifting trackers, drones equipped with met-ocean sensors, as well as citizen/social network sensing. The COASTAL CRETE platform will be integrated into ILIAD for seamless, robust and reliable access to Earth Observation (EO) data and Copernicus Med MFC products, which will be ingested into the met-ocean forecasting models, with EO data triggering the oil spill model. The COASTAL CRETE pilot will feed results of oil spill fate and transport into the ILIAD DTO. The interaction between the pilot and the ILIAD DTO is essential for oil spill detection. The COASTAL CRETE pilot aims to:
- support and increase the efficiency and the optimization of critical infrastructure operations (e.g., ports) by providing reliable and very high-resolution forecast data, alerts and early warning services for regular day-to-day operational activities;
- support regional authorities in marine spatial planning;
- support regional and local authorities in early detection of and response to oil spill pollution events.
Through the above-mentioned activities, this work aims at contributing to the ILIAD major goal of supporting the implementation of the EU’s Green Deal and Digital Strategy and the seven UN Ocean Decade’s outcomes in close connection with the 17 Sustainable Development Goals (SDG).
Acknowledgement: Part of this research has received funding from the European Union’s Horizon 2020 research and innovation programme under GA No 101037643. The information and views of this research lie entirely with the authors. The European Commission is not responsible for any use that may be made of the information it contains.
2021 marks the start of the UN Decade of Ocean Science for Sustainable Development. Building a digital twin of the ocean, or digital twins of the ocean, will contribute to this important focus area. The ILIAD Digital Twin of the Ocean, an H2020-funded project, builds on the assets resulting from two decades of investments in policies and infrastructures for the blue economy and aims at establishing an interoperable, data-intensive, and cost-effective Digital Twin of the Ocean. It capitalizes on the explosion of new data provided by many different Earth observation sources and advanced computing infrastructures (cloud computing, HPC, Internet of Things, Big Data, social networking, and more) in an inclusive, virtual/augmented, and engaging fashion to address all Earth data challenges. It will contribute towards a sustainable ocean economy as defined by the Centre for the Fourth Industrial Revolution and the Ocean, a hub for global, multistakeholder co-operation.
The ILIAD Digital Twin of the Ocean will fuse a large volume of diverse data in a semantically rich and data-agnostic approach to enable simultaneous communication with real-world systems and models. Ontologies and standard styled-layer descriptors will facilitate semantic information and intuitive discovery of the underlying information and knowledge to provide a seamless experience. The combination of geovisualisation, immersive visualisation and virtual or augmented reality will allow users to explore, synthesize, present, and analyze the underlying geospatial data in an interactive manner.
The enabling technology of the ILIAD Digital Twin of the Ocean will contribute to the implementation of the European Union’s Green Deal and Digital Strategy and to the achievement of the UN Ocean Decade's outcomes and the Sustainable Development Goals. To realize its potential, the ILIAD Digital Twin of the Ocean will follow a System of Systems approach, integrating the plethora of existing EU Earth Observing and Modelling Digital Infrastructures and Facilities.
To promote additional applications through the ILIAD Digital Twin of the Ocean, the partners will create the ILIAD Marketplace. Like an app store, the ILIAD Marketplace will allow providers to distribute apps, plug-ins, interfaces, raw data, citizen science data, synthesized information, and value-adding services derived from the ILIAD Digital Twin of the Ocean.
Orbiter is an Earth visualization application for iPhone and iPad. It presents a virtual Earth to the user, enabling deep and engrossing interaction through vivid 3D graphics and augmented reality.
Orbiter's data comes from the Sentinel satellites. We collect Sentinel-2 imagery, as well as Sentinel-3 and Sentinel-5P sensor data, process this information into high-resolution imagery, and present it to the user through our app. The globe comes alive, revealing recent satellite imagery at Sentinel-2's maximum 10 m resolution. A menu of overlays is available, each presenting an animated time-lapse layer of data. These include weather, air pollution, oceanic data, and more.
Orbiter's backend is OPIE: the Orbiter Planetary Intelligence Engine. This major component of the application runs on the server side, automatically downloading Sentinel's latest data files from SciHub and CODA. Our servers do extensive processing on this data to make it easily accessible to the end user of our app. Raw images are converted from the original JPEG2000 format into device-friendly tiles of 1024x1024 pixels, with no loss of resolution and virtually no loss of pixel data. They are further compressed into ASTC format, an advanced texture compression technique normally used in 3D games. This compression, combined with efficient programming techniques and the most high-performance graphics technologies available (i.e. Metal), enables an engrossing, full-frame-rate user experience.
Data from the Sentinel-3 and Sentinel-5P satellites is processed from its native raw NetCDF form into continuous-tone greyscale images. Data from satellite orbital sweeps is concatenated to form near whole-Earth images, which are then arranged sequentially and compressed into video form. In this way, we apply traditional graphics and video compression techniques to data, yielding massive performance benefits. This allows the user of Orbiter not only to see fully animated overlays of data, but also to select a point and perform an instantaneous analysis of a particular geographical location. By simply tapping, a user can generate, for example, a time graph of NO₂ air pollution over the city of Tokyo.
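As a minimal sketch of the kind of point query described above, the following Python snippet builds a time series of tropospheric NO₂ over a fixed location from a folder of downloaded Sentinel-5P Level-2 NO₂ NetCDF granules; it is not the OPIE implementation, and the folder path and selection radius are illustrative assumptions (group and variable names follow the standard S5P L2 product layout).

```python
# Sketch: time series of tropospheric NO2 over a point (e.g. Tokyo) from
# downloaded Sentinel-5P L2 NO2 NetCDF files. Paths are hypothetical.
import glob
import numpy as np
import xarray as xr

TOKYO_LAT, TOKYO_LON = 35.68, 139.69

def no2_at_point(path, lat0, lon0, max_dist_deg=0.1):
    """Return (time, NO2 column) of the pixel closest to (lat0, lon0), or None."""
    ds = xr.open_dataset(path, group="PRODUCT")
    lat = ds["latitude"].values[0]          # drop the singleton time dimension
    lon = ds["longitude"].values[0]
    no2 = ds["nitrogendioxide_tropospheric_column"].values[0]
    d2 = (lat - lat0) ** 2 + (lon - lon0) ** 2
    iy, ix = np.unravel_index(np.nanargmin(d2), d2.shape)
    if d2[iy, ix] > max_dist_deg ** 2:
        ds.close()
        return None                          # this swath does not cover the point
    time = ds["time_utc"].values[0][iy]      # scanline acquisition time
    value = float(no2[iy, ix])
    ds.close()
    return time, value

series = []
for f in sorted(glob.glob("data/S5P_*_L2__NO2____*.nc")):   # hypothetical folder
    sample = no2_at_point(f, TOKYO_LAT, TOKYO_LON)
    if sample is not None:
        series.append(sample)

for time, value in series:
    print(time, value, "mol/m^2")
```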
Orbiter is designed to take ESA's massive collection of EO data and make it accessible to as many people as possible. This has broad benefits for ESA's mission and for the communication of ESA's work to the public. Orbiter could be used in schools, in corporations, by researchers and engineers, and by anyone with a curiosity about our planet and environment. Orbiter's mission is to make EO data available to everyone.
Sextans, Telescopes, Satellites & Python: foregrounding political-technical trade-offs to develop ‘trust-worthy’ Digital Twins of Earth’s ecosystems for effective European policy-making.
Over the next 7-10 years, ESA's DestinE project aims to create Digital Twins (DTs) of the Earth’s ecosystems, developed by the scientific and engineering communities, for European policy makers to aid decision-making. Designing DTs for Earth’s ecosystems is exceptionally challenging, as the Blue Planet is a complex set of overlapping dynamic systems that are not (yet) clearly understood. For example, the cause of polar amplification, whereby Arctic temperatures are rising two to three times faster than in the Tropics, remains unknown. Hence, the concept of duplicating the ‘inner workings’ of the Earth’s ecosystems is, arguably, unrealistic. Instead, a series of trade-offs, and known unknowns, evolve throughout the development process and guide the building, commissioning, validating and end applications of these complex dynamic technologies. Trade-off examples include:
o Multiple sources of data are required to build the model, from (near) real-time data via Sentinel satellites to in situ sensors (ground-based to UAV and airborne), ranging from ‘trust-worthy/good-enough’ data from simulations and observations to more circumspect extrapolated data. The assimilation of these data results in a series of trade-offs between the type and quality of the data wanted and the data available.
o Effective dynamic models require multiple sources of ‘good enough data’ for the purpose at hand, and also need to be time-efficient: a model that takes too long to run becomes too expensive to use. The trade-off lies in simplifying complex processes while retaining the core processes of the dynamic model, so that it guides rather than misleads the user.
This paper will set out a case that replicating the Earth’s ecosystems is, arguably, unrealistic, due to significant levels of uncertainty in current knowledge. But identifying, and foregrounding, the political-technical trade-offs within the development process, from the outset of the project (low technology readiness level), can lead to trustworthy DTs of the Earth’s ecosystems that effectively guide politically sensitive decision-making.
These political-technical trade-offs and discussions are not new and have underpinned global map-making for centuries. For example, in the mid-1600s, when explorers set out to map the far reaches of the ‘unknown’ globe, intense disagreement arose over the most trustworthy sources of ‘data’ between experienced mariners, who sailed the high seas and drew on everyday knowledge, and investors, who drew on theoretical knowledge. The final maps developed proved useful for some nations and less so for others. DTs are a form of contemporary dynamic map that has moved from paper to the cyber-physical but, importantly, the real-world political situations still remain in the everyday world.
The Cryosphere Virtual Lab (CVL) is a project funded by the European Space Agency that will build a system using recent information and communication technologies to facilitate the exploitation, analysis, sharing, mining and visualization of the massive amounts of Earth observation data available. The system will utilize available satellite, in-situ and model data from ESA/EU, the Svalbard Integrated Arctic Earth Observing System (SIOS) and other sources. CVL will foster collaboration between cryosphere scientists, allowing them to reduce the time and effort spent searching for data and to develop their own tools for processing and analysis.
CVL is currently developing the landing page cvl.eo.esa.int/ and the backbone data source services related to CVL (data search and access). Parts of the system are already functional. We are also working jointly with the ESA PTEP (Polar Thematic Exploitation Platform) to provide cloud computing resources. The long-term vision behind CVL is to provide a platform where cryosphere science can be carried out easily, and where users can be inspired by ready access to open science, data, computing resources and a library of processing tools (Jupyter scripts) for EO data.
We demonstrate the feasibility of the system in five use cases covering a wide range of applications, including snow, sea ice and glaciers. CVL will also fund 20 early adopters (PhD/postdoc level) who will explore the system for their own applications.
The system will be built upon open scientific standards, and data as well as code will be published openly to allow users to adapt the system to their interests. The system will also provide tools for visualization in 2D and 3D. CVL will continue to live on after the 3-year project has been finalized and aims at providing free-of-charge services for users interested in delivering new information about the rapidly changing Arctic cryosphere.
The ESA phi-Lab Artificial Intelligence for Smart Cities (AI4SC) project was successfully completed in July 2020. Its main objective has been the generation of a set of indicators at global scale to track the effects of widespread urbanization processes and, concurrently, a set of indicators to help address key challenges at local scale. In this latter framework, from constructive exchanges with the project users, the need clearly emerged for more detailed 4D information that allows the morphology of the urban environment to be characterized in high detail and, alongside, for the possibility of integrating any spatiotemporal georeferenced dataset for advanced analyses. These requirements represented the basis for a dedicated “Digital Twin Urban Pilot – DTUP”, whose goals are:
i) to develop a system that allows users to create, visualize and explore pilot 4D digital twins (DTs) of the ESA-ESRIN establishment and the Frascati town center, generated from ultra-high resolution (UHR) drone imagery;
ii) to showcase their high potential for integrated and advanced analyses once combined with different types of spatiotemporal data and by means of state-of-the-art machine- and deep-learning (ML/DL) techniques.
In the first phase of the activity, considerable effort was devoted to the planning and implementation of a comprehensive in situ mission aimed at collecting UHR data over the two areas of interest (i.e., ~50 hectares overall). After the proper permits were granted by the Italian authorities, visible and multispectral nadir and oblique drone imagery was collected from an altitude of ~120 m. In particular, several flights were performed to achieve 3 cm and 7 cm ground resolution for the visible and multispectral imagery, respectively. Complete, textured 3D digital surface models were then generated for both ESRIN and Frascati, exhibiting a remarkable spatial detail. These represent the core of the target DT platforms, which are structured in three different components, namely: i) a browser web application; ii) a smartphone application; iii) an application specific to wearable devices (i.e., smart glasses). All three are based on Cesium, the state-of-the-art web-based suite that enables fast, high-quality and data-efficient rendering of 3D content on desktop and mobile devices. As a baseline, the DTs have been populated with a number of geospatial datasets of different nature which provide access to relevant information specific to the target AOIs. Among others, these include all suitable layers available from OpenStreetMap, as well as from the Regione Lazio and Rome Metropolitan Area geoportals, plus a number of urban form and morphometric indicators.
To assess the potential of the DTs in support of urban-related thematic applications, two different approaches have been considered. On the one hand, use cases have been defined to demonstrate the unique 4D visualization features for displaying key datasets in an immersive fashion. In particular, this enables both expert and non-expert users, as well as decision makers, to easily interpret complex data and possibly consider dependencies, trends and patterns, e.g. by switching between multiple layers or by displaying several layers at once. In this context, satellite-based indicators have been generated, including land surface temperature (computed from multitemporal Landsat and Sentinel-3 imagery) and land subsidence (generated from multitemporal Sentinel-1 data), along with mobility information derived from anonymized High Frequency Location Based (HFLB) GPS traces.
On the other hand, the idea is to employ advanced artificial intelligence approaches to jointly exploit the different multisource datasets included in the DTs and generate novel products. Here, attention has been focused on applying super-resolution methods for generating enhanced Sentinel-2 imagery by exploiting the UHR orthophotos collected during the drone campaign. This will ultimately be employed to target: i) the automatic classification of building materials, which is a key requirement for an effective characterization of the urban metabolism; and ii) the automatic detection of trees in the study regions.
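A common preparatory step for such super-resolution work is to derive paired low-resolution/high-resolution training patches by degrading the drone orthophoto to the satellite grid. The sketch below illustrates this idea only; it is not the project's pipeline, and the file name, ground sampling distances and patch sizes are assumptions.

```python
# Sketch: (low-res, high-res) patch pairs from a UHR drone orthophoto,
# block-averaged to a Sentinel-2-like 10 m grid. File name is hypothetical.
import numpy as np
import rasterio

SCALE = 100          # assumed 10 cm GSD -> 10 m, i.e. 100x100 pixels per S2 cell
PATCH_HR = 400       # high-res patch size (must be a multiple of SCALE)

def block_average(arr, factor):
    """Downsample a (bands, H, W) array by averaging non-overlapping blocks."""
    b, h, w = arr.shape
    h, w = h - h % factor, w - w % factor
    arr = arr[:, :h, :w]
    return arr.reshape(b, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

with rasterio.open("drone_orthophoto.tif") as src:      # hypothetical file
    ortho = src.read().astype("float32")

pairs = []
_, H, W = ortho.shape
for i in range(0, H - PATCH_HR + 1, PATCH_HR):
    for j in range(0, W - PATCH_HR + 1, PATCH_HR):
        hr = ortho[:, i:i + PATCH_HR, j:j + PATCH_HR]
        lr = block_average(hr, SCALE)                    # Sentinel-2-like patch
        pairs.append((lr, hr))

print(f"{len(pairs)} training pairs prepared")
```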
The ability of unmanned aerial vehicles (UAVs) to acquire data in a non-intrusive way is a definite advantage for the development of decision support tools for the monitoring of complex sites. For this purpose, active landfills, due to the continuous ground operations, such as earthmoving and depositing of miscellaneous waste with heavy machinery, as well as the pronounced topography, are sensitive sites where UAVs are relevant compared to traditional ground survey methods. Legislation requires the quarterly monitoring of the site’s topography, ground instability risk and landfill completion rate with respect to the environmental permits. Mapping of landfill infrastructure and monitoring of biogas and leachate leaks is also crucial for controlling authorities. This research led to the development of three cost-effective solutions to support day-to-day activities and control campaigns over landfills by site managers and competent authorities: the monitoring of the land cover (LC) of the site, the monitoring of its topography, and the detection of biogas emissive areas and leachate leaks.
First, a visible orthomosaic with centimetric spatial resolution provides an unparalleled image for visual site and infrastructure inspection as well as for LC classification. A state-of-the-art object-oriented image analysis (OBIA) approach, initially designed for the processing of very high-resolution satellite data, was successfully applied to UAV data to map the LC of a 30 ha landfill site. The optimization of this processing chain through texture computation and feature selection made it possible to achieve an overall accuracy higher than 80% for a nine-category LC classification. These classes include various types of waste, bare soils, tarps, green and dry vegetation, and road and built-up infrastructure. Active landfill LC is usually very fragmented and evolves significantly from day to day. Therefore, such an automated method is useful for spatial and temporal monitoring of dynamic LC changes.
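To make the segment-based, texture-aware classification idea concrete, the following is a minimal sketch of such a workflow; it uses SLIC superpixels as a stand-in for the original OBIA segmentation, and the file names, class labels and parameters are illustrative assumptions rather than the study's actual chain.

```python
# Sketch: segment-based land-cover classification with texture features.
import numpy as np
from skimage.segmentation import slic
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def segment_features(rgb, segments):
    """Per-segment mean colour + GLCM contrast/homogeneity on the grey image."""
    grey = rgb.mean(axis=2).astype(np.uint8)
    feats = []
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        ys, xs = np.where(mask)
        window = grey[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        glcm = graycomatrix(window, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        feats.append([
            *rgb[mask].mean(axis=0),                 # mean R, G, B of the segment
            graycoprops(glcm, "contrast")[0, 0],     # texture features
            graycoprops(glcm, "homogeneity")[0, 0],
        ])
    return np.array(feats)

rgb = np.load("orthomosaic_rgb.npy")                  # hypothetical (H, W, 3) array
segments = slic(rgb, n_segments=5000, compactness=10, channel_axis=-1)
X = segment_features(rgb, segments)

# Hypothetical operator-digitised training set: row indices into X and labels
# such as waste, bare soil, tarp, vegetation, road, built-up.
train_rows = np.load("train_segment_rows.npy")
train_labels = np.load("train_segment_labels.npy")

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X[train_rows], train_labels)
predicted = clf.predict(X)                            # one class per segment
```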
Second, the digital surface model (DSM) is a classic by-product of photogrammetric processing. In addition to its use for draping the orthophoto mosaic for a three-dimensional visualization of the site, the DSM allows for a precise monitoring of topography, slopes, volumetric change and volumetric estimation of deposits. In this study, the comparison between UAV DSMs and ground topographic surveys shows that the UAV DSM models all topographical features completely and finely in a short space of time (less than a day), whereas a ground-based topographic survey could take several days. This completeness of the measurement and its non-intrusive character are a clear advantage according to the site managers. Still, well-known limitations are that it does not reach the same quality standard as ground survey points taken by GPS/GNSS solutions (precision in Z of around 10 cm for the UAV DSM versus 2 cm) and that it is affected by the presence of vegetation.
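As a minimal sketch of the volumetric change estimation mentioned above, two co-registered DSMs from consecutive surveys can be differenced and integrated over the cell area; the file names are illustrative, and both rasters are assumed to share the same grid and cell size.

```python
# Sketch: cut/fill volumes from two co-registered UAV DSMs (hypothetical files).
import rasterio

with rasterio.open("dsm_survey_t1.tif") as a, rasterio.open("dsm_survey_t2.tif") as b:
    dsm1 = a.read(1, masked=True).astype("float64")
    dsm2 = b.read(1, masked=True).astype("float64")
    cell_area = abs(a.res[0] * a.res[1])          # m^2 per pixel

diff = dsm2 - dsm1                                # elevation change per pixel (m)
fill = float(diff[diff > 0].sum() * cell_area)    # deposited volume (m^3)
cut = float(-diff[diff < 0].sum() * cell_area)    # removed volume (m^3)

print(f"fill: {fill:.1f} m^3, cut: {cut:.1f} m^3, net: {fill - cut:.1f} m^3")
```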
Thirdly, thermal data are used as a proxy for the detection of emissive areas and leachate leaks. Indeed, the degradation processes of the waste lead to a heating of the buried bodies up to 60°C. Although this heating is reduced at the surface, this research again confirms the hypothesis that the temperature differential allows the detection of areas of weakness and warmed liquids. Flying in optimal conditions (at dawn, in cold, dry and windless weather), our thermal mosaic dataset allowed the detection of three leachate leaks and one biogas-emitting area. Such a tool speeds up the control procedure in the field and allows the rapid implementation of corrective measures to avoid greenhouse gas emissions, optimize biogas collection for energy production, and reduce odors and the risks of explosion or internal fire.
The three decision support tools developed will now be operationally integrated into the administration's landfill control activities. Data acquisition and processing can theoretically be done in less than a day, but this is still highly dependent on flight clearances and weather conditions. Several derived applications are envisaged, in particular the follow-up of other sites at risk and the qualification and quantification of illegal activities such as clandestine deposits. Altogether, this should contribute to a more efficient and less costly monitoring of our environment.
SunRazor is an integrated drone platform composed of a master unit (an aquatic drone) and a slave unit (a quadcopter UAV). The focus of the drone project was to create an advanced platform with zero environmental impact, capable of merging the state of the art of the enabling technologies within a single system with next-generation operational capabilities. SunRazor was born from a vision focused on applying the most advanced technologies existing today in the sectors of aerospace and nautical design, sensors, electronics and machine learning to the problems of environmental monitoring and safety.
The starting point is the knowledge that the surface of the ocean, the atmosphere and the clouds form an interconnected dynamic system through the release and deposition of chemical species within nano-particles, a phenomenon that relates these three environments to each other and is called sea spray aerosol (SSA). At the interface between the sea surface and the air, nano-particles are formed containing biogenic and geogenic compounds, with concentration distributions along thermocline lines. Thus, from the ocean to the clouds, dynamic biological processes control the composition of seawater, which in turn controls the primary composition of SSA. The fundamental chemical properties of primary SSA regulate its ability to interact with solar radiation directly and indirectly (through the formation of cloud condensation nuclei (CCN) and ice nucleating particles (INP)) and to undergo secondary chemical transformations.
The SunRazor platform is able, thanks to the computing power installed on board and the powerful short and medium range communications infrastructure it is equipped with, to perform sampling and surveys not only in aquatic scenarios, but also in mixed air/water scenarios. In this configuration, the aquatic unit of the platform (master) operates in synergy with a second air unit (slave), a highly specialized multicopter tethered to the master unit, which becomes an integral part of the drone (see the figure).
During the development of a mission plan, the aquatic platform will be able to activate, following a predefined ruleset or in response to the detection of specific events, the air unit which can operate simultaneously and independently of the aquatic unit. However, in this mixed configuration, the air unit will also benefit from the computational and medium-range protected communication capabilities of the aquatic unit, which will constitute for it a real mobile command and control station. Thanks to this local topology, the two units can be focused on highly specialized operational tasks, minimizing the presence of duplicate and redundant components. The air unit can be equipped with a payload of sensors independent from those of the aquatic unit in order to monitor different aspects of the operating scenario within which the SunRazor platform operates.
The marine unit, i.e. the master, is equipped with a propulsion system based exclusively on renewable energy (solar energy and hydrogen cells) and is capable of operating in autonomous, semi-autonomous and supervised mode to perform monitoring and environmental control missions for long periods of time (over 30 days of operational autonomy). In this way SunRazor is capable of sampling the SSA in continuous mode and with a positional precision in the order of a few centimeters with respect to the set mission target, making use of a set of state-of-the-art proprietary sensors through which it is possible to detect, simultaneously and in real time, a high number of quantities critical for assessing the quality of the water and the environment at different heights between the sea surface and the air column up to about 50 m, thanks to the UAV component of the platform (the slave/multicopter). The main features of the drone are illustrated below.
1) A zero-emission propulsion system: SunRazor is able to carry out long-term detection missions, up to 30 days, using exclusively energy from renewable sources with zero environmental impact. The marine unit is equipped with an all-electric power system powered by solar energy, thanks to a large surface area of high-performance photovoltaic panels that entirely cover the upper part of the hull. The propulsion system consists of a highly innovative electric motor capable of delivering peak speeds of over 30 knots and cruising speeds of 6 knots. The photovoltaic panels are complemented by a secondary power system based on safe and modular hydrogen cells, which can be used to supplement the output of the photovoltaic panels in situations of limited production or particularly high power requirements (high-speed travel). The two power sources present on board the master unit are constantly monitored by an advanced battery management infrastructure supported by forecast models based on machine learning techniques. The battery management module is also able to make accurate predictions of the residual operating autonomy of the platform by making use of forecasting systems based on recent chemical-physical models capable of representing the state of the battery system in an extremely precise manner.
2) A redundant telecommunications system, thanks to which SunRazor is able to exchange information and command flows with ground stations. The system makes use of high-bandwidth and low-consumption radio modules that can be used in aggregate mode or individually in the event of malfunctions, in order to ensure high levels of redundancy in all operational scenarios, including the most critical ones. All communications are protected by state-of-the-art AEAD-class cryptographic algorithms that combine military-grade security guarantees with high performance in the validation and decoding of the transmitted data.
3) An extremely advanced on-board ICT infrastructure, which effectively makes it a miniaturized mobile data center. The heart of SunRazor is a low-consumption parallel computing system that represents the central infrastructure for collecting data from the on-board sensors, for communications management, mission planning and information processing. The architecture implemented makes pervasive use of virtualization techniques to ensure maximum safety of the operating environment and a high degree of redundancy in the event of malfunctions of one or more computing nodes of the system. The processing core of the drone is the infrastructure within which the forecast models and classifiers used to implement the autonomous analysis capabilities of SunRazor are run. The computational core of the drone makes use of specialized boards to ensure the real-time execution of all the expected artificial intelligence and control tasks, while constantly maintaining extremely low consumption levels. In addition, the modular design adopted allows portions of the architecture to be dynamically activated and deactivated to constantly ensure the lowest possible level of consumption according to the operational tasks actually performed.
4) A wide range of sensors that allow SunRazor to acquire detailed information on the surrounding environment in continuous mode and to carry out the assigned monitoring tasks. The information thus acquired is stored in the on-board ICT system, processed, filtered, analyzed and automatically transmitted to the ground stations whenever it is possible to establish a radio/satellite data connection. The drone platform (aquatic/aerial) is capable of carrying a variable payload of highly innovative sensors, through which it is possible to detect various chemical-physical variables related to the state of the SSA. SunRazor’s mission planning system allows the scheduling of sampling events according to independent timelines that can involve a different number of variables. Furthermore, the drone can be reprogrammed at any time to make immediate changes to pre-existing sampling missions, in order to perform urgent monitoring in specific parts of the monitored areas. Once the handling of high-priority critical issues is finished, the control system will automatically perform merge operations that bring the drone's operations back towards the standard schedule.
In addition to being a device with a particular vocation for the acquisition of chemical-physical information on the composition of SSA, SunRazor will also be able to detect the presence of biogenic biomolecular structures such as the numerous cyclic peptides synthesized by marine organisms, which are increasingly proving to have anticancer activity*. One of the operational possibilities that we intend to put in place is the monitoring of the risk of fossil-derived pollutants in the Beagle Channel in Tierra del Fuego, Argentina, an ecosystem so highly sensitive that it is considered an indicator of the well-being of the planet.
*Dyshlovoy, S. A. (2021). Recent Updates on Marine Cancer-Preventive Compounds. Marine Drugs, 19, 558. https://doi.org/10.3390/md19100558
Beach wrack is a term used to describe organic matter, e.g. aquatic plants and macroalgae, that is washed from the sea to the shore by wind, waves or floods. These organic matter accumulations are home to invertebrates, which in turn are food for animals higher in the food chain, such as seabirds. Algal accumulations also provide important coastal protection and dune stabilization by reducing the impact of wave energy and wind-induced sand transport.
From a socio-economic point of view, beach wrack accumulations are often considered an inconvenience, especially for tourists when large amounts are deposited on resort beaches. After storms, they can cover large areas of beach, begin to decompose, and emit unpleasant odors. In order to ensure proper conditions for tourists and keep beaches clean, the municipalities managing the beaches have to clear them of decomposing organic matter, taking into account the accumulated organic deposits.
Beach wrack is an essentially unpredictable and heterogeneous material, different parts of which may be at different stages of decomposition. Because the algae are often mixed with debris and large amounts of sand, they are expensive to manage and processing options are often limited. Studies have shown that various plastic fractions become trapped in the algae, so that plastic is transported from the sea to the shore.
In order to map and more accurately estimate the areas of algal deposits in time and space, it is recommended to use an unmanned aerial vehicle (drone), which can provide spatial information useful for studying small changes in space and time. Beach wrack mapping by drone has been successfully tested in Greece, but its accuracy and wrack content have not been evaluated (Papakonstantinou et al., 2016).
The main aim of this study is to estimate the amount of algal deposits and the plastic within them. This study is expected to:
1) Assess the area of wrack deposits in the four studied Lithuanian Baltic Sea beaches;
2) Apply a volume calculation method using the obtained virtual height models and subtracting different topographic surfaces from them, validated against algae heights measured with a ruler;
3) Estimate the probable amounts of plastic in the accumulations.
The research was carried out on four beaches: Melnragė, Karklė, Palanga and Šventoji. A DJI Inspire 2 drone with a Zenmuse X5S camera was used for the flights. The flights were performed at an altitude of 60 m, which allows high-resolution (up to 2 cm/pixel) photos to be obtained. A RedEdge-MX multispectral camera was used for an additional detection experiment.
The first flights and mapping were performed in August 2020. Continuous monitoring started on 20 April 2021 and continued throughout the summer season. Monitoring was performed every 10 days (depending on weather conditions). Beaches were mapped only when wrack deposits were observed.
Expeditions are carried out at least once a month or when a large wrack accumulation is detected, during which the heights of the wrack deposits are measured in situ and the coordinates of the measurements are recorded. These data were used to validate, at the measured points, the height models obtained from the drone. Expeditions also include sampling to determine the biomass and species diversity of the macroalgae assemblages, and the amount of plastic items.
The photographs taken by the drone are combined into orthophotos and then transferred to a GIS program, in which automatic classification divides the pixels into groups: water, sand and algal deposits. After the machine learning step, the area of each class is calculated. Volume is calculated from the virtual elevation models.
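As a minimal sketch of the area and volume step described above, per-class pixel counts can be converted to areas and wrack thickness integrated from the difference between the surface model and a reference beach surface; the file names, class codes and cell size are illustrative assumptions.

```python
# Sketch: class areas and wrack volume from a classified orthomosaic and DSM.
import numpy as np

CELL_AREA = 0.02 * 0.02        # m^2 per pixel at ~2 cm/pixel
CLASSES = {0: "water", 1: "sand", 2: "wrack"}

classified = np.load("classified_orthomosaic.npy")      # (H, W) class codes
dsm = np.load("beach_dsm.npy")                           # surface with wrack (m)
reference = np.load("beach_reference_surface.npy")       # bare-beach surface (m)

for code, name in CLASSES.items():
    area = np.count_nonzero(classified == code) * CELL_AREA
    print(f"{name}: {area:.1f} m^2")

wrack_mask = classified == 2
thickness = np.clip(dsm - reference, 0, None)            # discard negative values
volume = float(thickness[wrack_mask].sum() * CELL_AREA)  # m^3 of wrack
print(f"wrack volume: {volume:.1f} m^3")
```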
Based on the biomass of macroalgae determined in the laboratory, the results are extrapolated and their volume is calculated for the whole area. Extrapolation is also performed for plastic, which is checked in different size fractions: < 0.5 cm (micro), 0.5–2.5 cm (meso) and > 2.5 cm (macro).
Observations showed that algal accumulations were most frequent on the Melnragė and Šventoji beaches (18 times out of 20 (90%) and 10 out of 14 (71%), respectively). Accumulations were detected 8 times out of 17 (47%) in Karklė and 4 out of 13 (31%) in Palanga.
These data and this methodology could be used for the management of beach areas by detecting and quantifying macroalgal biomass and the associated amount of plastic prior to decision making.
Antarctica is one of the most unique and important locations on Earth but also one of the most affected by climate change, with the consequence that the populations of the organisms that inhabit it are being drastically reduced. Penguins play a fundamental role in the Antarctic ecosystem, since they occupy a middle position in the Antarctic food chain: the guano they excrete into the sea surface waters contains significant amounts of bioactive metals (e.g. Cu, Fe, Mn, Zn), acting as a basis for Antarctic primary production. In this way, small changes in Antarctic penguin populations lead to large changes in the ecosystem. That is why the scientific community needs to monitor the evolution of the colonies of these organisms in the face of a global climate change scenario. Remote sensing has evolved as an alternative to traditional techniques for monitoring these organisms in space and time, especially with the emergence of Unmanned Aerial Vehicles (UAVs), which provide centimetric spatial resolution. In this research, we examine the potential of a high-resolution sensor embedded in a UAV, compared with moderate-resolution satellite imagery (Sentinel-2 Level 1 and 2 (S2L1 and S2L2) and Landsat 8 Level 2 (L8L2)), to monitor the Vapour Col Chinstrap penguin (Pygoscelis antarcticus) colony at Deception Island (Antarctica). The main objective is to generate precise thematic maps derived from the supervised analysis of the multispectral information obtained with these sensors. The results highlight the UAV's potential as a more effective, accurate and easy-to-deploy tool, with statistical accuracies outperforming satellite imagery (93.82% overall accuracy for the UAV supervised classification against 87.26% for the S2L2 imagery and 70.77% for the L8L2 imagery). In addition, this study represents the first precise monitoring of this Chinstrap penguin colony, one of the largest in the world, estimating a total coverage of approximately 20,000 m² of guano areas. UAVs thus compensate for the disadvantages of satellite remote sensing and allow a further step in the monitoring of polar regions in the context of a global climate change scenario.
In temperate regions of Western Europe, the polychaete Sabellaria alveolata (L.) builds extensive intertidal reefs of several hectares on soft-bottom substrates. These reefs are protected by the European Habitats Directive EEC/92/43 as biogenic structures hosting high biodiversity and providing ecological functions such as protection against coastal erosion. Monitoring their health status is therefore mandatory. These reefs are characterized by complex three-dimensional structures composed of hummocks and platforms, either in development or in degradation phases. Their high heterogeneity in physical shape and spectral optical properties is a challenge for accurate observation.
As an alternative to time-consuming field campaigns, an Unmanned Aerial Vehicle (UAV) survey was carried out over Noirmoutier Island (France), where the second-largest European reef is located in a tidal delta. Structure-from-Motion (SfM) photogrammetry coupled with multispectral imagery was used to describe and quantify the reef topography and its colonization by bivalves and macroalgae. A DJI Phantom 4 Multispectral UAV provided a highly resolved and accurate topographic dataset at 5 cm/pixel for the Digital Surface Model (DSM) and 2.63 cm/pixel for the multispectral orthomosaic images. The reef footprint was mapped using the combination of two topographic indices: Topographic Openness and the Topographic Position Index. The reef structures covered an area of 8.15 ha, with 89% corresponding to the main reef composed of connected and continuous biogenic structures, 7.6% to large isolated structures with a projected surface < 60 m², and 4.4% to small isolated reef clumps < 2 m². To further describe the topographic complexity of the reef, the Geomorphon landform classification was used. The spatial distribution of tabular platforms, considered as a healthy reef status (as opposed to a degraded status), was mapped with a proxy comparing the reef volume to a theoretical tabular-shaped reef volume. Epibionts colonizing the reef (macroalgae, mussels, and oysters) were also mapped by combining multispectral indices such as NDVI and simple band ratios with topographic indices. A confusion matrix showed that macroalgae and mussels were satisfactorily identified but that oysters could not be detected by an automated procedure due to the complexity of their spectral reflectance.
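The Topographic Position Index mentioned above compares each cell with the mean elevation of its neighbourhood, so reef hummocks stand out as positive anomalies. The following is a minimal sketch of that idea only; the DSM file name, neighbourhood radius and threshold are illustrative assumptions, and in the study the TPI was combined with Topographic Openness rather than used alone.

```python
# Sketch: Topographic Position Index (TPI) from a DSM (hypothetical file).
import numpy as np
from scipy.ndimage import uniform_filter
import rasterio

RADIUS_PX = 21          # neighbourhood size in pixels (~1 m at 5 cm resolution)

with rasterio.open("reef_dsm.tif") as src:
    dsm = src.read(1).astype("float64")

neighbourhood_mean = uniform_filter(dsm, size=RADIUS_PX)
tpi = dsm - neighbourhood_mean            # positive where the cell sits above its surroundings

# A simple positive-TPI threshold highlights elevated biogenic structures.
reef_candidate = tpi > 0.05               # metres above the local mean surface
print(f"candidate reef pixels: {np.count_nonzero(reef_candidate)}")
```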
The topographic indices used in this work should now be further exploited to propose a health index for these large soft-bottom intertidal reefs, monitored by environmental agencies in charge of managing and conserving this protected habitat. It is not known if these topographic methods are transferable to high resolution (0.4 to 0.8 m) stereo images from satellites such as Pleiades, Pleiades-neo, IKONOS, or Worldview solutions. Mapping from stereo-satellite images will be tested on the largest Sabellaria alveolata intertidal reef in Europe, in the bay of Mont Saint-Michel (France). This work will be done in the ESA project BiCOME (Biodiversity of the Coastal Ocean: Monitoring with Earth Observation).
Relying on computer vision techniques and resources, many smart applications have become possible, making the world safer and optimizing resource management, especially when time and attention are considered as manageable resources. The modern world abounds in cameras, including security cameras, military-grade Unmanned Aerial Vehicles and the affordable UAVs that are becoming increasingly common in society. Automated solutions based on computer vision can therefore be implemented to detect, monitor or even prevent relevant events such as robbery, car crashes and traffic jams, for the sake of both logistics and surveillance, among other contexts. One way to do so is to identify abnormal behaviours performed by vehicles on observed roads. This paper presents an approach for detecting abnormal vehicle behaviour on highways, in which vectorial data describing the vehicles’ displacement are extracted from images captured by a stationary quadcopter UAV and by surveillance cameras. Two deep neural networks are used. A deep convolutional neural network is employed for object detection and tracking, and a long short-term memory (LSTM) neural network is then used for behaviour classification. The deep convolutional neural network is a YOLOv4 trained with images extracted from highway footage, and the vehicles' vectorial data are extracted from their tracks in the footage to train the LSTM networks. The training of the behaviour discriminator, which classifies behaviours as normal or abnormal, takes into account the fact that most vehicles on the road perform normal behaviours; the abnormal class is defined as an outlier with respect to the general behaviour profile. The results show that the classification of the vehicles' behaviours is consistent, and the same principles may be applied to other trackable objects and scenarios as well.
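To illustrate the second stage of such a pipeline, the following is a minimal PyTorch sketch of an LSTM that classifies a tracked vehicle's displacement sequence as normal or abnormal; the detection/tracking stage is assumed to have already produced fixed-length feature sequences per vehicle, and all shapes, features and hyper-parameters are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: LSTM classifier for per-vehicle trajectory sequences.
import torch
import torch.nn as nn

class BehaviourLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])         # class logits from the last hidden state

model = BehaviourLSTM()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for tracked trajectories: 32 tracks, 50 time steps,
# 4 assumed features per step (dx, dy, speed, heading).
sequences = torch.randn(32, 50, 4)
labels = torch.randint(0, 2, (32,))       # 0 = normal, 1 = abnormal

for _ in range(5):                        # tiny illustrative training loop
    optimizer.zero_grad()
    loss = criterion(model(sequences), labels)
    loss.backward()
    optimizer.step()
```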
Soil is one of the world’s most important natural resources for human livelihood as it provides food and clean water. Therefore, its preservation is of huge importance. Detailed soil information can provide the required means to aid the process of soil preservation. The project “ReCharBo” (Regional Characterisation of Soil Properties) has the objective of combining remote sensing, geophysical and pedological methods to derive soil characteristics and map soils on a regional scale. Its aim is to characterise soils non-invasively, time- and cost-efficiently, and with a minimal number of soil samples to calibrate the measurements. Hyperspectral remote sensing is a powerful and well-known technique for characterising near-surface soil properties. Depending on the sensor technology and the data quality, a wide variety of soil properties is derivable from remotely sensed data. Properties such as iron, clay, soil organic carbon and CaCO3 can be detected. In this study, drone-borne hyperspectral imaging data in the VNIR-SWIR spectral region (400-2500 nm) were acquired over non-vegetated agricultural fields in Germany. In addition, field spectra were taken at several sample locations during extensive field campaigns. Soil samples from these locations were used for pedological analyses and spectral measurements in the laboratory, following an Internal Soil Standard measurement protocol proposed by the IEEE P4005 activities. The laboratory spectra are used to develop methods to predict soil properties and to transfer these methods to the field and drone-borne data. The prediction methods incorporate the analysis of spectral features, and therefore the physical relationships between the reflectance spectra and the soil properties, as well as Partial Least Squares Regression (PLS), which is widely used to quantify soil properties from hyperspectral data. A further objective is to investigate uncertainties in soil parameter retrieval depending on the scale and method of measurement. For the spectral measurements in the laboratory the soil samples are dried, crushed and sieved. The UAV-borne data, however, are influenced by soil moisture, surface roughness, and atmospheric and illumination effects. These effects lead to differences in the accuracy of the estimation of soil parameters. The results are presented and critically discussed in the context of soil mapping.
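As a minimal sketch of the PLS regression step mentioned above, a soil property (here soil organic carbon is assumed) can be predicted from laboratory reflectance spectra; file names, the number of latent variables and the train/test split are illustrative assumptions.

```python
# Sketch: PLS regression of a soil property from laboratory spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

spectra = np.load("lab_spectra_400_2500nm.npy")     # (n_samples, n_bands) reflectance
soc = np.load("soil_organic_carbon.npy")            # (n_samples,) lab-analysed values

X_train, X_test, y_train, y_test = train_test_split(spectra, soc, test_size=0.3,
                                                    random_state=0)
pls = PLSRegression(n_components=10)                # number of latent variables assumed
pls.fit(X_train, y_train)

predicted = pls.predict(X_test).ravel()
print(f"R2 on held-out samples: {r2_score(y_test, predicted):.2f}")
```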
Shrubification of Arctic tundra wetlands, along with changes in the coverage and volume of lichens, are two well-documented processes in the Fennoscandian tundra. A rapidly warming climate and changes in reindeer grazing patterns are driving shifts in carbon feedbacks and altering local microclimate conditions. The growth of Arctic deciduous shrubs has been documented, and its effects on ecosystem function and structure may range from a greater release of soil carbon to alterations in the local ecohydrology. It is therefore of utmost importance to closely monitor these changes in order to gain a complete understanding of their dynamics and improve the adaptive capacity of the regions under study. In this regard, Earth observation data have played a key monitoring role during past decades. However, the fine scale of these processes often renders them invisible or hazy under the eye of satellite sensors. On the other hand, the rapid growth of Unmanned Aerial Systems and sensor capabilities opens new opportunities for mapping and monitoring.
Here, we present a toolset of Unmanned Aerial Systems and machine learning algorithms that enables highly accurate monitoring of land cover change dynamics in the sub-arctic tundra. The study area is located in the Fennoscandian oroarctic tundra zone, along the Finnish-Norwegian border. In the mid-1950s, a reindeer fence was built along the border, thus separating two different reindeer grazing strategies. While reindeer graze only during winter on the Norwegian side, grazing occurs all year round on the Finnish side, with reindeer feeding on the new shoots of willows (Salix spp.) and thereby containing the shrubification process.
In order to study the long-term impacts of differential grazing on willow extent and growth, we surveyed the study area with a senseFly eBee and a DJI Matrice 200, equipped respectively with a Parrot Sequoia 1.2-megapixel monochromatic multispectral sensor, a senseFly S.O.D.A. RGB camera and a FLIR thermal imaging kit. We combined multispectral, photogrammetric and thermal data with an ensemble of machine learning algorithms to map the extent of woody shrubs and quantify their above-ground biomass at two wetlands across the Finnish-Norwegian border. Furthermore, we used the same toolset to map topsoil moisture and water table depth, two parameters strongly influenced by the encroachment of willow bushes in subarctic wetlands. The algorithms under scrutiny were a pixel-based Random Forest and the more recent XGBoost. The ensemble of algorithms was trained with a comprehensive set of in-situ data collected at the study sites, including plant species composition, above-ground biomass, topsoil moisture, water table depth and depth of the peat layer. The validation of the results showed a high degree of accuracy, with R2 > 0.85 for biomass prediction and overall accuracy > 80% for the plant community distribution maps. The results show a clear expansion of willows on the Norwegian side of the border, alongside a strong increase in above-ground biomass.
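As a minimal sketch of the biomass regression step, a pixel- or plot-based Random Forest can be trained on UAV-derived predictors against in-situ above-ground biomass and validated with R2 (XGBoost would be used analogously); the array names and validation split are illustrative assumptions, not the study's exact setup.

```python
# Sketch: Random Forest regression of above-ground biomass from UAV predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X = np.load("plot_predictors.npy")      # (n_plots, n_features): bands, canopy height, thermal
y = np.load("plot_agb.npy")             # (n_plots,) above-ground biomass (g/m^2)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

print(f"validation R2: {r2_score(y_test, rf.predict(X_test)):.2f}")

# The fitted model would then be applied to every pixel of the stacked UAV
# rasters to produce a wall-to-wall biomass map.
```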
The high degree of accuracy obtained in the results unfolds new research prospects, such as the combination of fine-scale remote sensing with chamber and Eddy Covariance measurements to quantify the impact of land cover on the carbon and energy balance. The use of Unmanned Aerial Systems could also help unveil the complexity of greening and browning patterns in the arctic.
Digital terrain models (DTMs) are important for many environmental applications including hydrology, archaeology, geology, and the modelling of vegetation biophysical parameters such as above-ground biomass (AGB) and vegetation height. The quality of a DTM depends on a number of factors, including the method of data collection, with topographic surveys being considered the most accurate DTM generation method. However, the logistical costs associated with conducting large-scale topographic surveys have led to a gradual decrease in their use for generating DTMs, and newer technologies based on remote sensing have emerged. This study investigated the potential of terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) photogrammetric point cloud data for generating DTMs in an area comprising a mixture of grass and dwarf shrubland vegetation near Middelburg, Eastern Cape, South Africa. An area covering approximately 13,200 m² was surveyed using the Riegl VZ-1000 TLS instrument and the DJI Phantom 4 Pro drone. The TLS and UAV datasets were then co-registered into a common coordinate system using Real Time Kinematic Global Navigation Satellite System (RTK‐GNSS) reference measurements to yield overlapping point clouds, in RiScan Pro 2.8 and AgiSoft Metashape version 1.6.1 software respectively. LAStools® point cloud processing software was subsequently used to compute DTMs from the georeferenced TLS and UAV datasets, and independently collected checkpoints obtained from 8 TLS scan positions were used to validate the accuracy of the TLS- and UAV-derived DTMs. The results showed that DTMs generated from UAV photogrammetric point cloud data were comparable in accuracy to those generated from 3D TLS data, although TLS-derived DTMs were slightly more accurate. This finding suggests that UAV photogrammetric point cloud data could be used as a cost-effective alternative to produce reliable estimates of surface topography in areas with short vegetation (maximum height less than or equal to 2 m) and less complex terrain.
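As a minimal sketch of the checkpoint-based accuracy assessment described above, DTM elevations can be sampled at independently surveyed checkpoints and compared via RMSE; the file names and checkpoint format are illustrative assumptions.

```python
# Sketch: RMSE of TLS- and UAV-derived DTMs at independent checkpoints.
import numpy as np
import rasterio

# Hypothetical CSV with columns x, y, z for each surveyed checkpoint.
checkpoints = np.loadtxt("tls_checkpoints.csv", delimiter=",", skiprows=1)

def dtm_rmse(dtm_path, points):
    """Sample the DTM at each checkpoint (x, y) and return the vertical RMSE."""
    with rasterio.open(dtm_path) as src:
        sampled = np.array([v[0] for v in src.sample(points[:, :2])])
    diff = sampled - points[:, 2]
    return float(np.sqrt(np.mean(diff ** 2)))

for name, path in [("TLS DTM", "dtm_tls.tif"), ("UAV DTM", "dtm_uav.tif")]:
    print(f"{name}: RMSE = {dtm_rmse(path, checkpoints):.3f} m")
```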
Technological developments in the agricultural sector will change the cultivation structure towards small-scale fields accounting for heterogeneities in soil texture, topography, distance to surface waters etc. The overall aim is to reduce the impacts to the environment and to increase the biodiversity by simultaneously keeping high yields. Autonomously operating ground vehicles (robots) and aerial vehicles (drones) will collaboratively monitor fields and provide optimized cultivation of the fields while considering local weather predictions.
However, this is a future perspective; we are not there yet. We will present first experiments with an autonomous Unmanned Aerial System (UAS) for precision agricultural monitoring. The system consists of an air-conditioned hangar, which protects the drone from criminal acts and weather conditions and charges it between flights. Beyond visual line of sight (BVLOS) operations are possible, which increases flexibility and reduces human interaction. Multispectral and thermal infrared observations are able to provide adequate spatiotemporal data on plant health and water availability. This information can be used for agricultural management and intervention, such as irrigation. We provide example applications for the TERENO (Terrestrial Environmental Observatories) site Selhausen, not far from Bonn, Germany.
Mangroves provide multiple ecosystem services in the intertidal zone of tropical and subtropical coastlines and are among the most efficient ecosystems at storing carbon dioxide. For several decades, remote sensing has been applied to map mangrove distribution and their biophysical properties, such as leaf area index (LAI), which is one of the most important variables for assessing mangrove forest health. However, remote sensing of mangrove LAI has traditionally been relegated to coarse spatial resolution sensors. In the last few years, unmanned aerial vehicles (UAVs) have revolutionised mangrove remote sensing. Nevertheless, the myriad of available sensors and algorithms makes it difficult to properly select a suitable methodology to map their extent and LAI.
In this work we performed a multi-sensor comparison (i.e. Landsat-8, Sentinel-2, PlanetScope and UAV-based MicaSense RedEdge-MX) and evaluated the performance of various machine-learning algorithms (i.e. classification and regression trees (CART), support vector machine (SVM) and random forest (RF)) for mangrove extent mapping in a Red Sea mangrove forest in Saudi Arabia. The relationship between several vegetation indices and LAI measured in the field was also evaluated. The most accurate classification of mangrove extent was achieved with the UAV data using the CART and RF algorithms, with an overall accuracy of 0.93. While the relationships between field-derived LAI measurements and satellite-based vegetation indices produced coefficients of determination (r2) lower than 0.45, the relationships with UAV-based vegetation indices produced r2 values of up to 0.77. Selecting the most suitable sensor and methodology to assess mangrove environments is key for any programme aiming to monitor changes in mangrove extent and associated carbon stock, particularly under the current scenario of climate change, and the results of this work can help with this task.
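As a minimal sketch of the index-versus-LAI analysis, NDVI can be computed from plot-averaged red and near-infrared reflectance and regressed against field-measured LAI; the array names are illustrative assumptions.

```python
# Sketch: NDVI vs field LAI regression with r2.
import numpy as np
from scipy import stats

red = np.load("plot_red_reflectance.npy")      # mean red reflectance per plot
nir = np.load("plot_nir_reflectance.npy")      # mean NIR reflectance per plot
lai = np.load("plot_lai_field.npy")            # in-field LAI per plot

ndvi = (nir - red) / (nir + red)
slope, intercept, r, p, stderr = stats.linregress(ndvi, lai)

print(f"LAI = {slope:.2f} * NDVI + {intercept:.2f}  (r2 = {r**2:.2f}, p = {p:.3f})")
```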
Assessing the effects of forest restoration is key to translating advances in restoration science and technology into practice. It is important that forest management learns from the past and adapts restoration strategies and techniques in response to changing socio-economic and environmental conditions (Bautista and Alloza, 2009). However, evaluating restoration over time is a complex task. It requires the measurement of variables that reflect the ecological quality of the systems under restoration in a quantifiable way, so that the process and its changes can be analysed on an objective basis (Ocampo-Melgar et al., 2016). When restoration includes active restoration work, such as planting, monitoring should be based, among other things, on the measurement of attributes of the vegetation planted, as well as the effects of the vegetation on the environment.
One variable measured is the response of the planted vegetation, assessed as survival and growth. This is of interest as it occurs at a rate that makes it possible to distinguish significant changes over short periods of time. On the other hand, it is also interesting to know the response of the introduced vegetation because this vegetation will affect properties of the system under restoration in the longer term. Monitoring this response, albeit in the short term, will make it possible to anticipate the transforming capacity of this vegetation. All of this has motivated the development and exploitation of new methods for calculating parameters that support the monitoring of a plantation.
In this context, the development of new vegetation monitoring methodologies based on the capture of information with unmanned aerial vehicles (UAV) has become very attractive for improving the characterisation and monitoring of vegetation.
The general objective of this study is the development of an applied technology service for the monitoring of reforestation, characterising the structure of the reforestation, its growth and mortality. The methodology developed involves the planning of data acquisition using RGB (red green blue) and NIR (near infrared) cameras on board low-cost UAV platforms, and the processing of the images obtained.
The study has been carried out in a eucalyptus plantation in Huelva (Andalusia, Spain), where it is necessary to identify dead plants in the shortest possible time so that they can be replaced in the months after planting. UAV flight planning was carried out at different months after planting, with both types of cameras, with and without the NIR channel, and at different flight heights. The identification of dead trees two months after planting was only possible with cameras incorporating the near-infrared channel, and from four months onwards at a flight height of 100 m.
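One simple way such NIR imagery can be exploited is to compute NDVI around each known planting position and flag seedlings whose local NDVI stays below a threshold; the sketch below illustrates that idea only and is not the study's method, with the threshold, window size and file names being assumptions.

```python
# Sketch: flag probable dead/missing seedlings from an NDVI threshold.
import numpy as np

red = np.load("ortho_red.npy")                 # (H, W) reflectance
nir = np.load("ortho_nir.npy")
plants = np.load("planting_positions_px.npy")  # (n, 2) row/col of each seedling

ndvi = (nir - red) / (nir + red + 1e-9)
HALF = 5                                       # half window size in pixels (assumed)
THRESH = 0.3                                   # below this: likely dead/missing (assumed)

dead = []
for k, (r, c) in enumerate(plants.astype(int)):
    window = ndvi[max(r - HALF, 0):r + HALF + 1, max(c - HALF, 0):c + HALF + 1]
    if np.nanmax(window) < THRESH:
        dead.append(k)

print(f"{len(dead)} of {len(plants)} seedlings flagged as probable failures")
```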
Stones on agricultural land can cause serious damage to agricultural machinery when they get inside the machinery. This phenomenon is especially pronounced in regions with a high frequency of stones on agricultural land, e.g. in glacial morainic landscapes such as those in northern Germany. Therefore, stones must be removed from farmland several times a year. A workflow for drone-based detection of stones is currently under development at the Geoecology department of MLU Halle to help solve this problem.
With our workflow, we demonstrate the particular suitability of UAS-based thermal data for differentiating between stones and the soil surface on agricultural land. Thermal inertia effects can be used to make significant temperature differences between stone and soil detectable, which enables precise stone detection through UAS-based thermal imaging. We have conducted extensive laboratory testing to investigate the suitability of thermal imaging for detecting stones and to find the optimal prerequisites for thermal UAV flights. We selected the most important variables that have a high impact on thermal detectability and analyzed the influence of soil moisture, air temperature, wind and radiant heat on the thermal detectability achievable with a DJI Zenmuse H20T camera.
In our laboratory experiment we used two identical plastic boxes, insulated at the sides and bottom with styrofoam and filled with about 40 cm of soil. A total of 4 stones of different sizes were placed on top of the soil. In the center, a black aluminum plate was placed for the calibration of the thermal data (see Figure 1). The temperatures were simultaneously monitored with a 4-channel logger (PerfectPrime TC0520) fitted with 4 RS PRO type T thermocouples (temperature range from -75 °C to +250 °C, IEC 584-3, tolerance class 1).
The following experiments were performed in a climate chamber to account for the different influencing factors:
Scenario 1: temperature 10°C to 17°C (1°C per hour, increasing); soil moisture 3.2 % vol.; duration 7 hours.
Scenario 2: temperature 10°C to 17°C (1°C per hour, decreasing); soil moisture 26.2 % vol.; duration 7 hours.
Scenario 3: constant 17°C; soil moisture 2.6 % vol.; duration 4 hours; 2x 350 W radiators providing 2 hours of direct irradiation of the examination objects.
Scenario 4: constant 17°C; soil moisture 28.4 % vol.; duration 4 hours; 2x 350 W radiators providing 2 hours of direct irradiation of the examination objects.
Imagery data were acquired by means of a radiometric thermal imaging camera attached to a DJI M300 RTK platform. The spectral range of the camera is 8-14 μm, the focal length is 13.5 mm, and the sensor resolution is 640 × 512 pixels. The thermal imaging camera captured an image of the test objects (cf. Fig. 1) every minute. The thermal images are stored in a proprietary format and subsequently converted into 8-bit unsigned TIFF files using the DJI Thermal SDK software. The output files were then processed to obtain text files containing the X and Y image coordinates and the temperature of each pixel. Statistical analysis of the laboratory data was conducted using the programming language R and the packages raster, rgdal and pracma.
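As a minimal sketch of the kind of analysis applied to the exported X/Y/temperature text files (shown here in Python, whereas the study used R), temperatures can be grouped into stone and soil regions via masks and their difference tested per acquisition; the file names and masks are hypothetical assumptions.

```python
# Sketch: stone-vs-soil temperature comparison for one exported thermal frame.
import numpy as np
from scipy import stats

# Exported frame: columns x, y, temperature (°C); file name is hypothetical.
data = np.loadtxt("frame_0420.txt")
x, y, temp = data[:, 0].astype(int), data[:, 1].astype(int), data[:, 2]

frame = np.full((y.max() + 1, x.max() + 1), np.nan)
frame[y, x] = temp

stone_mask = np.load("stone_mask.npy")   # hypothetical boolean masks digitised
soil_mask = np.load("soil_mask.npy")     # from the visible reference image

stone_t = frame[stone_mask]
soil_t = frame[soil_mask]
t_stat, p_value = stats.ttest_ind(stone_t, soil_t, nan_policy="omit")

print(f"mean stone {np.nanmean(stone_t):.2f} °C, mean soil {np.nanmean(soil_t):.2f} °C, "
      f"difference p = {p_value:.4f}")
```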
The results show that there are significant temperature differences between stones and soil in the time course of an average temperature scenario during typical stone harvest periods between October and February. The experiment revealed that the factor of soil moisture significantly influences detectability. Likewise, the factor of radiant heat has a significant influence on the detectability of temperature differences between stones and soil.
Based on these insights from standardized laboratory conditions, the next steps will focus on investigating these approaches under real conditions in the field. The results from the experiment show a great theoretical potential for detecting stones by means of thermal UAV imagery, and this will therefore be evaluated under field conditions in the following months. At the ESA LPS we would like to present the results of the laboratory experiment and hope to substantiate them with the latest information from field experiments conducted during the winter months.
Informal settlements host around a quarter of the global population according to UN-Habitat. They exist in urban contexts all over the world, in various forms, typologies, dimensions and locations, and under a range of names (squatter settlements, favelas, poblaciones, shacks, barrios bajos, bidonvilles, slums). While urban informality is more present in cities of the global south, housing informality and substandard living conditions can also be found in developed countries. These areas have common characteristics, including deprivation of access to safe water, acceptable sanitation, health security and durable housing, in addition to being overcrowded and lacking land tenure security. Such settlements are usually located in suburban areas, isolated from the core urban activities. The mapping of the urban form in such cases is a challenging task, mainly due to their complexity and their diverse and irregular morphology. Earth Observation plays a significant role in the mapping and monitoring of the extent, structure and expansion of such areas. Despite the increasing availability of very high resolution data, standard methodological approaches usually fail to offer high-quality baseline data that can be used in urban surface and climate models, due to the aforementioned complexity (density of temporary buildings, mixture of materials used in the settlements, low-height constructions). Here we present the first attempt to delineate the urban form of the slum of Mukuru in Nairobi, Kenya, using Unoccupied Aerial System (UAS) data. Information about the slum, such as the number and heights of buildings, density of structures, vegetation cover and height of high vegetation, a digital surface model (DSM) and a digital terrain model (DTM), is to our knowledge unavailable and constitutes the main objective of our approach. The above are usually the minimum spatial input requirements of neighborhood-scale urban climate models such as the Surface Urban Energy and Water Balance Scheme (SUEWS). Data collection was performed in February 2021, covering an area of 4 km² using the Wingtra WingtraOne VTOL, a UAS equipped with a fixed-lens 42 MP full-frame camera (Sony RX1R II) achieving an accuracy of less than 2 cm using PPK. The images have been processed with the Wingtra application for the PPK corrections. The analysis of the imagery was run in Agisoft Metashape to create the basic products: a) orthoimagery and b) the DSM. The orthoimagery has been further analysed to derive a detailed five-class (paved, buildings, high and low vegetation, bare soil and water) land cover (LC) map of Mukuru using a Random Forests classification algorithm developed using the EnMAP toolbox in QGIS. The DSM product has in turn been exploited to derive a bare surface model (digital terrain model, DTM) following an approach based on a filtering method using a moving window algorithm. The DTM is the major input for creating the normalized DSM (nDSM) as an intermediate step, in order to derive the heights of buildings and other objects (i.e., vegetation). The LC map achieved an overall accuracy of 91.5%, with class-wise accuracies of 1) Buildings at 90.16%, 2) Low and High Vegetation at 89.8%, 3) Bare Soil at 85% and 4) Water at 100%. In the absence of GCPs from the Mukuru slum, no validation was possible for the DSM and DTM products; GCP data collection was planned for summer 2021 but, due to the COVID-19 situation and other safety reasons, such data are not yet available.
However, since the initial data were corrected using PPK, we do not expect large errors in the elevation values of the landscape. Further analysis of the building and vegetation height products shows over- and underestimation of heights in areas with abrupt changes in slope, such as the riverbanks. This is due to the data collection methodology with the UAS: while the overlap was adequate in general, the use of a 3D grid for data collection would help avoid errors in sloping areas. This study is the first for the slum of Mukuru aiming to extract the urban form and support local microclimate modelling of the area using Urban Canopy Models (UCM) such as SUEWS.
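For illustration, a minimal sketch of the nDSM step described above, assuming a moving-window minimum filter as the ground-filtering method and purely synthetic arrays (the window size and values are illustrative, not those used in the study):

import numpy as np
from scipy.ndimage import minimum_filter

def ndsm_from_dsm(dsm, window_px=51):
    """Approximate the DTM by a moving-window minimum and return (dtm, ndsm)."""
    dtm = minimum_filter(dsm, size=window_px)   # local ground estimate
    ndsm = np.clip(dsm - dtm, 0, None)          # heights above ground
    return dtm, ndsm

# Synthetic example: ~100 m terrain with a 3 m high "building"
dsm = 100.0 + np.random.rand(200, 200) * 0.2
dsm[80:100, 80:110] += 3.0
dtm, ndsm = ndsm_from_dsm(dsm)
print(round(float(ndsm[90, 95]), 2))            # roughly the building height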
Effective data collection and monitoring solutions for geohazard applications can be technically and logistically challenging, due to instrumentation requirements, accessibility, and health and safety considerations. Uncrewed Aerial Vehicles (UAV), which overcome many of the aforementioned challenges, have become valuable data collection tools for geoscientists and engineers, providing new and advantageous perspectives for imaging. UAV may be deployed to gather data following natural disasters, to map geomorphological changes, or to monitor developing geohazards. UAV-enabled data collection methods are increasingly used for investigating, modelling, and monitoring geohazards and have been adopted by geo-professionals in practice. Geoscientific research that utilizes UAV sensing methods includes examples where the collected data can also be used to reconstruct scaled, georeferenced, and multi-temporal 3D models to perform advanced spatio-temporal analyses.
In a series of Norwegian case studies presented by the authors, UAV-based remote sensing methods, including well-established techniques such as Structure-from-Motion photogrammetry, were utilized to generate high-resolution, three-dimensional surface models in remote, steep, or otherwise inaccessible terrain. In a first case study, a full-scale experimental avalanche was monitored with UAV technology. Photogrammetric reconstructions from approximately 500 airborne images, which relied on a combination of real-time kinematic (RTK) positioning and a limited number of ground control points, were used to estimate the total mobilized snow volume, while orthomosaics provided high-resolution overviews of the avalanche path before and after the event. Additional UAV surveys were performed over the same area in a baseline condition, i.e. without any snow cover, to derive a snow cover map of the path and surrounding valley. Geospatial and statistical analyses were performed to assess the quality of the UAV-derived products and to provide a comparison with coarser-resolution Airborne Laser Scanning (ALS) data.
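As an illustration of the volume estimation principle, a minimal sketch that differences a snow-covered DSM against the snow-free baseline surface; the grids, cell size and values are hypothetical and not taken from the case study:

import numpy as np

# Hypothetical 1 m resolution DSMs on the same grid: snow-covered vs. snow-free baseline
cell_area_m2 = 1.0
dsm_bare = 1000.0 + np.random.rand(500, 500)
dsm_snow = dsm_bare + 1.5                             # a uniform 1.5 m synthetic snow pack

snow_depth = np.clip(dsm_snow - dsm_bare, 0, None)    # negative differences treated as noise
snow_volume_m3 = float(snow_depth.sum() * cell_area_m2)
print(f"snow volume ~ {snow_volume_m3:,.0f} m3")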
In another case study, a rock wall failure occurred along a major highway, shutting down two lanes of traffic for an extended period of time while the road authority inspected and repaired the wall. UAV survey imagery, combined with multi-temporal, ground-based images, was used to reconstruct a high-resolution digital surface model before and after the failure. The model was used to estimate the volume of rock and for joint stability assessments of the wall surrounding the failure. In another study, a multiband near-infrared camera was used to survey a heavy-metal-contaminated shooting range from the air. The images were fused with point cloud data and analysed using spectral indices and unsupervised classification algorithms to derive a high-resolution vegetative cover map. In yet another example, rainfall-induced debris flows were mapped, and erosion volume was assessed using UAV-derived data. Finally, the authors will report on preliminary findings from GEOSFAIR – Geohazard Survey from Air, a national Innovation Project for the Public Sector led by the Norwegian Public Roads Administration. One of the aims of the GEOSFAIR project is to test emerging sensors, such as UAV-borne LiDAR, near- and longwave-infrared imagers, and ground-penetrating radar sensors, for roadside UAV operations and snow avalanche warning services.
Coastal environments benefit from the movement and exchange of nutrients facilitated by water flows. While this process is important for mangroves, seagrass patches, and coral reefs found in tropical coastal environments, water flows also play a major role in the detection and tracking of pollutants, conservation efforts, and applications of aquatic herbicides for managing submerged plants. Monitoring of water flows is difficult due to their complex and temporally dynamic movement. High-frequency or continuous tracking of dynamic features such as water flows has previously been limited to in situ monitoring installations, which are often restricted to small areas, or to remote sensing platforms such as aircraft, which are generally prohibitively costly. However, unmanned aerial vehicles (UAVs) are suitable for flexible deployment and can provide monitoring capabilities for continuous data collection. Here, we demonstrate the application of a UAV-based approach for tracking coastal water flows via fluorescent dye (Rhodamine WT) released at two shallow-water locations in a coastal tropical environment with mangroves, seagrass patches, and coral reefs along the shores of the Red Sea. UAV-based tracking of the dye plumes occurred over the duration of an ebbing tide. Within the first 80 min of dye release, red-green-blue UAV photos were collected at 10-second intervals from two UAVs, each hovering at 400 m over the dye release sites. Water samples for assessment of dye concentration were also collected within 80 min of dye release at 30 different locations and covered concentrations ranging from 0.65 to 154.37 ppb. As the dye plumes dispersed and hence covered larger areas, nine UAV flight surveys were subsequently used to produce orthomosaics for larger-area monitoring of the dye plumes. An object-based image analysis approach was employed to map the extent of the dye plumes from both the hovering UAV photos and the orthomosaics, which were geometrically corrected based on GPS-surveyed ground control points and radiometrically corrected based on black, grey, and white reflectance panels. Accuracies of 91–98% were achieved for mapping dye plume extent when assessed against manual delineations of the dye plume perimeters. UAV data collected coincident with the water samples were used to predict dye concentrations throughout the duration of the ebbing tide based on regression analysis of band indices. The multiplication of the red:green and red:blue ratios provided the best-fit regression between the 30 field observations of dye concentration and the 30 coincident UAV photos collected while hovering, with a coefficient of determination of 0.96 and a root mean square error of 7.78 ppb. The best-fit equation was applied to both the hovering UAV photos and the orthomosaics of the nine UAV flight surveys to detect dye dispersion and the movement of the dye plume. At the end of the ebbing tide, the two dye plumes covered areas of 9,998 m2 and 18,372 m2 and had moved 481 m and 593 m, respectively. Our results demonstrate how a UAV-based monitoring approach can be applied to address the lack of understanding of coastal water flows, which may facilitate more effective coastal zone management and conservation.
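A minimal sketch of the band-index regression idea, assuming synthetic band values and dye concentrations (the study's actual index formulation and coefficients are not reproduced here):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical mean band values at the 30 water-sample locations and measured dye (ppb)
rng = np.random.default_rng(0)
red, green, blue = rng.uniform(0.05, 0.4, (3, 30))
dye_ppb = rng.uniform(0.65, 154.37, 30)

# Band indices: red:green, red:blue and their product ("multiplication")
X = np.column_stack([red / green, red / blue, (red / green) * (red / blue)])
model = LinearRegression().fit(X, dye_ppb)

pred = model.predict(X)
print("R2 =", round(r2_score(dye_ppb, pred), 2),
      "RMSE =", round(float(np.sqrt(mean_squared_error(dye_ppb, pred))), 2), "ppb")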
Agricultural fields are seldom completely homogenous. Soil, slope, and previous management decisions can influence the conditions under which a crop grows and determine its nutritional needs. However, in current farming practice in Switzerland, fertilizer is still spread mostly subjectively, according to the knowledge of the field manager. It is crucial that fertilizer is applied at the right time and in the right place. This prevents over-fertilization of the field and fertilizer run-off, and saves fertilizer. Variable rate technology (VRT) can help to apply fertilizer according to the actual needs of the plants. VRT can be based on field imagery as input for the fertilizer calculation; this imagery can be obtained with hand-held or tractor-mounted sensors, UAVs or satellites. However, VRT in combination with sensors is very expensive. It is estimated that the use of VRT and sensors only pays off once a certain threshold of heterogeneity in the field is reached. The profitability of VRT systems also varies depending on the cost of the sensor technology used. UAV-based field imagery is available at a very high spatial resolution of a few centimetres, but the costs of the flight missions are considerable. Satellite-based data come at little or no cost, however, the spatial resolution is much lower, which can cause errors especially in small-scale fields. Overall, data on field heterogeneity are scarce, especially in the context of spatio-temporal changes throughout the vegetation season. Further, it is unclear which spatial resolution is needed to capture the in-field variability reliably in small-scale fields. In this contribution, first results of comparing spatio-temporal dynamics of field heterogeneity between high and low spatial resolution are introduced. A fixed-wing UAV (WingtraOne) was regularly flown over a small rural area in Switzerland at relevant times of the vegetation period over 2.5 consecutive years. Fixed-wing UAVs can cover 50 to 100 ha in one flight and are thus ideal for these studies. The study area included a diverse set of crops, ranging from winter wheat, canola, maize, sugar beet and sunflower to grassland and vegetables. The drone was equipped with different cameras: a high-resolution RGB camera (Sony RX1R II, 42 megapixels) and three different multispectral cameras (RedEdge-M, RedEdge-MX and Altum, all by MicaSense). All multispectral cameras captured data in at least 5 bands of the RGB and near-infrared spectrum (the Altum also collected thermal data), which were used to calculate vegetation indices to assess crop health status. The spatial resolution of 0.7 to 1.2 cm (RGB) and 6 to 8 cm (multispectral) offered a very highly resolved dataset, which was then used to investigate field heterogeneity at various spatial scales. Soil maps and field book data of the respective farm managers complemented the dataset.
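As an illustration of how such vegetation indices and a simple heterogeneity measure can be computed from the multispectral bands, a minimal sketch with synthetic reflectance arrays (band names and values are assumptions):

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectance bands."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Hypothetical co-registered reflectance bands of one field
rng = np.random.default_rng(1)
nir = rng.uniform(0.3, 0.6, (100, 100))
red = rng.uniform(0.03, 0.10, (100, 100))
vi = ndvi(nir, red)
print(round(float(vi.mean()), 3), round(float(vi.std()), 3))  # field mean and a simple heterogeneity proxy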
Unmanned Aerial Systems (UASs) face many limitations in acquiring reliable data in the marine environment, mostly because of the prevailing environmental conditions during a UAS survey. These limitations relate to parameters such as weather conditions (e.g., wind speed, cloud coverage), sea-state conditions (e.g., wavy sea surface, sunglint presence), and water column parameters (e.g., turbidity). Such parameters affect the quality of the acquired data and the accuracy and reliability of the retrieved information.
In this study, we present a toolbox that addresses these UAS limitations in the coastal environment and calculates the optimal survey times for acquiring marine information. The UASea toolbox (https://uav.marine.aegean.gr/) identifies the optimal flight times on a given day for an efficient UAS survey and the acquisition of reliable aerial imagery in the coastal environment. It gives hourly positive or negative suggestions about optimal or non-optimal acquisition times for conducting UAS surveys in coastal areas. The suggestions are derived from weather forecast data using adaptive thresholds in a ruleset. The parameters that have been proven to affect the quality of UAS imagery and flight safety are used as variables in the ruleset. The proposed thresholds are used to exclude inconsistent and outlier values that may affect the quality of the acquired images and the safety of the survey. Considering the above, the ruleset is designed in such a way that it outlines the optimal weather conditions, suitable for reliable and accurate data acquisition as well as for efficient short-range flight scheduling.
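A minimal sketch of such a threshold ruleset; the variable names and threshold values below are illustrative assumptions, not the UASea defaults:

# Illustrative hourly go/no-go ruleset; thresholds are assumptions, not the UASea defaults.
THRESHOLDS = {"wind_speed_ms": 8.0, "gust_ms": 11.0,
              "cloud_cover_pct": 60.0, "precip_prob_pct": 20.0}

def flight_decision(forecast_hour):
    """Return True (optimal) only if every variable stays at or below its threshold."""
    return all(forecast_hour.get(k, float("inf")) <= v for k, v in THRESHOLDS.items())

hourly_forecast = [
    {"wind_speed_ms": 4.2, "gust_ms": 6.0, "cloud_cover_pct": 10, "precip_prob_pct": 0},
    {"wind_speed_ms": 9.5, "gust_ms": 14.0, "cloud_cover_pct": 80, "precip_prob_pct": 40},
]
print([flight_decision(h) for h in hourly_forecast])   # [True, False]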
The UASea toolbox has been developed as an interactive web application accessible from modern web browsers. It is built with HTML and CSS, while JavaScript provides user interactivity through mouse events (scroll, pan, click, etc.). To identify the optimal flight times for marine mapping applications, the UASea toolbox uses short-range forecast data. In this context, we use a) the Dark Sky (DS) API (Dark Sky by Apple, https://darksky.net/) for two days of forecast data on an hourly basis and b) the Open Weather Map (OWM) API (Open Weather Map, https://openweathermap.org/) for a five-day forecast with a three-hour step. Users may navigate the map element by zooming in/out and panning to the desired location, and select the study area by clicking the map. A leaflet marker triggers an ‘Adjust Parameters’ panel consisting of an HTML form in which users can adjust the parameters and their thresholds and select one of the available weather forecast data providers. After the adjustment, a decision panel becomes available at the bottom of the screen. At the top of the decision panel there is a date menu used to select a date within the range of the available forecast data, while at the bottom of the decision panel the results of the UASea toolbox are presented in tabular format. In the ‘Decisions’ row, green indicates optimal weather conditions, while red stands for non-optimal weather conditions.
The performance of the UASea toolbox has been tested and validated in different coastal areas and environmental conditions through image quality estimates and classification accuracy assessment. The quality and accuracy assessment validated the suggested acquisition times of UASea, revealing significant differences between data acquired in optimal and non-optimal conditions at each site. The results showed that most of the positive toolbox suggestions (optimal acquisition times) correspond to the images with the highest quality. The validation of the toolbox proved that UAS surveys at the suggested optimal acquisition times result in high-quality images. In addition, the results confirmed that more accurate image classification can be achieved under optimal flight conditions.
UASea is a user-friendly and promising toolbox that can be used globally by researchers, engineers, environmentalists and NGOs for efficient mapping, monitoring, and management of the coastal environment for ecological and environmental purposes, exploiting the existing capabilities of UAS in marine remote sensing.
Mixed-species forests can host greater species richness and provide more important ecosystem services compared to monocultures of conifers. In boreal environments, old deciduous trees in particular have been recognized as promoting species richness. Accurate identification of tree species is thus essential for effective mapping and monitoring of biodiversity and sustainable forest management. European aspen (Populus tremula L.) is a keystone species for the biodiversity of the boreal forest. Large-diameter aspens maintain the diversity of hundreds of species, many of which are threatened in Fennoscandia. The majority of classification studies so far have focused on the dominant tree species, with fewer studies on less frequent but ecologically important species. Due to the low economic value and the relatively sparse and scattered occurrence of aspen in boreal forests, there is a lack of information on its spatial and temporal distribution.
In this study, we assessed the potential of RGB, multispectral (MSP) and hyperspectral (HS) UAS-based sensors and their combination for identification of European aspen at the individual tree level, using different combinations of spectral and structural features derived from high-resolution photogrammetric RGB and MSP point clouds and HS orthomosaics. Moreover, we included standing deadwood as a separate class in the classification analysis to assess the possibility of recognizing it among the main tree species, because, along with aspen, standing deadwood plays a significant role in maintaining biodiversity in boreal forests.
We aimed to find out whether a single-sensor solution is more efficient than a combination of multiple data sources for the planning and implementation of sustainable forest management practices using a UAS-based approach. Experiments were conducted using >1000 ground-measured trees in a southern boreal forest mainly consisting of Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) Karst), silver birch (Betula pendula) and downy birch (Betula pubescens L.), together with 200 standing deadwood trees. The proposed method provides a new possibility for the rapid assessment of aspen occurrence, enabling more efficient forest management and contributing to biodiversity monitoring and conservation efforts in boreal forests.
In addition to crop productivity, food quality traits are of high importance for farmers and a major factor affecting end-use product quality and human health. Food quality has been specifically identified among the United Nations Sustainable Development Goals (SDGs) as a key component of Goal 2, Zero Hunger, which aims to end hunger in part through improved nutrition. Durum wheat is one of the most important cereal grains grown in the Mediterranean basin, where the strong influence of climate change complicates agricultural management and efforts to develop environmentally adapted varieties with higher yields and improved quality traits. Protein content is among the most important wheat quality features; nonetheless, in recent decades a reduction in durum wheat protein content has been observed, associated with the spread of high-yielding varieties. It is therefore essential to develop efficient quality-related phenotyping and monitoring tools. Predicting not only yield but also important quality traits like protein content, vitreousness, and test weight in the field before harvest is of high value for breeders aiming to optimize crop resource allocation and develop more resilient crops. Moreover, the relation between grain protein and nitrogen fertilization plays a central role in the sustainability of agricultural management, again connecting these efforts to SDG 2.
In this study, we take a two-pronged approach towards improving both yield quantity and grain quality estimations of durum wheat across Spain. To this end, we brought together crop phenotyping and precision agriculture by incorporating genetic, environmental and crop management factors (GxExM) at multiple scales using different remote sensing approaches. Aiming to develop efficient phenotyping tools using remote sensing instruments and to improve field-level management for more efficient and sustainable monitoring of grain nitrogen status, the research presented here focuses on the efficacy of multispectral and high-resolution visible red-green-blue (RGB) imaging sensors at different scales of observation and crop phenological stages (anthesis to grain filling).
Linear models were calculated using vegetation indices at each sensing level, sensor type and phenological stage for intercomparisons of sensor type and scale. Then, we used machine learning (ML) models to predict grain yield and important quality traits in crop phenotyping microplots using 11-band multispectral UAV image data. Combining the 11 multispectral bands (450 ± 40, 550 ± 10, 570 ± 10, 670 ± 10, 700 ± 10, 720 ± 10, 780 ± 10, 840 ± 10, 860 ± 10, 900 ± 20, 950 ± 40 nm) for 34 cultivars and 16 environments supported the development of robust ML models with good prediction capability for both yield and quality traits. Applying the trained models to test sets explained a considerable degree of phenotypic variance at good accuracy with R2 values of 0.84, 0.69, 0.64, and 0.61 and normalized root mean squared errors of 0.17, 0.07, 0.14, and 0.03 for grain yield, protein content, vitreousness, and test weight, respectively.
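For illustration, a minimal sketch of this kind of multispectral-band regression, using a random forest as a stand-in for the (unspecified) ML models and synthetic data in place of the real microplot observations:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in: one row per microplot, 11 band reflectances, grain yield as target
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 0.5, (544, 11))                 # e.g., 34 cultivars x 16 environments
y = 3.0 + X @ rng.uniform(0.5, 2.0, 11) + rng.normal(0, 0.2, 544)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
print("R2 =", round(r2_score(y_te, pred), 2),
      "nRMSE =", round(rmse / (y_te.max() - y_te.min()), 2))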
Following these findings, we modified our UAV multispectral sensor to match the Sentinel-2 visible and near-infrared spectral bands in order to better explore the upscaling capacity of the grain yield and protein linear models. Specifically, models built at anthesis with UAV multispectral red-edge band data performed best at grain nitrogen content estimation (R2=0.42, RMSE=0.18%), which can be linked to grain protein content. We also demonstrated the possibility of applying the UAV-derived phenotyping models to satellite data and predicting grain nitrogen content for actual wheat fields (R2=0.40, RMSE=0.29%). The results of this study show that ML models based on multispectral UAV data can be a powerful approach for efficiently predicting important quality traits and yield preharvest at the microplot level in phenotyping trials. Furthermore, we demonstrate that microplot-based grain quality and grain yield prediction models are amenable to Sentinel-2 satellite precision agriculture applications at larger scales, representing an effective synergy based on the inherent scalability of remote sensing for assessing plant physiological primary and secondary traits.
Unoccupied aerial vehicles (UAV) are increasingly being used as a tool for retrieving environmental and geospatial data. Scientific applications include mapping and measuring tasks, such as surveying ecosystems and monitoring wildlife, as well as more complex parameter retrieval, for example flow velocity measurements in rivers or products derived from UAV-based LiDAR measurements. Hence, UAVs are used to collect data across many environmental science disciplines, in land management, and also for commercial applications. Depending on the use case and research question, different sensors are mounted on the UAV, and areas of interest (AOI) of varying coverage and a diverse range of timeframes of interest (TOI) are captured during surveying flights. For that reason, the resulting datasets are very heterogeneous and, in joint research projects, even dispersed over a number of institutions and research groups. While the outcomes of the analyses are published in the relevant journals of the respective disciplines, the underlying raw data are rarely publicly accessible, despite often being publicly funded. Although the high spatial resolution of UAV-derived information can help to close the scale gap between ground observations and the large-scale observations provided by the Sentinels, UAV data cannot yet be explored jointly with Sentinel data at large scales because UAV data are not systematically catalogued and stored.
For the reuse and valorisation of existing datasets, as well as the planning of further research projects, it would be useful for scientists to be able to find the aforementioned UAV data. Pivotal for developing any project in an area under investigation are questions as to whether data exist for the area, whether they are available for further use, and which data products have already been generated from those data.
Here, we aim to solve these issues by developing and testing a data platform to facilitate the exchange of UAV images and data between projects and institutions. The Open Drone Portal (OpenDroP) is a data model with a web application that serves as a data catalogue for the registration of UAV images and includes mandatory criteria such as product type, AOI and TOI. In addition, it is also possible to record additional, UAV-specific metadata, such as the UAV platform, the mounted sensor model and type, or detailed georeferencing information. This kind of metadata does not appear in other common search solutions and catalogue standards. To facilitate finding data in OpenDroP by thematic focus, the referenced records can be tagged by the user.
If the scientific evaluation of the data has already been published, the publication details can be added to show how the data have been transferred into domain-specific products. If only metadata are provided in the database, users have the possibility to contact the data provider. However, when creating the database entry, users can also provide a download link and indicate under which licence the data may be used.
The application will be accessible to all users. A first demonstrator application offers the possibility to test research and publication functions based on freely available data (https://opendrop.de/application).
Grasslands are among the most diverse land systems in Europe, covering gradients from intensively managed annual grasslands to natural meadows without management. As detailed information on grassland use intensity in Europe is sparse, spatiotemporally explicit information on the vegetation and its dynamics is needed to develop sustainable management pathways for grasslands. On the one hand, Unmanned Aerial Vehicles (UAV) have great potential for providing high-resolution information from field to farm scale on key phenological dates. Sentinel-2 data, on the other hand, allow for frequent, continuous global monitoring, delivering data at 10 m spatial resolution. Therefore, nested approaches combining UAV with Sentinel-2 offer as yet unexplored potential for monitoring grasslands.
Beyond commonly used vegetation indices, time series of biophysical parameters such as biomass, leaf area index, or vegetation height provide physical measures of grassland productivity or vegetation structure for assessing grassland resources over time. Timely information of such parameters directly supports land management decisions of farmers and serves as a basis for designing and evaluating policies. UAV and Sentinel-2 have high suitability for estimating biophysical parameters.
In this study, we therefore present an upscaling approach combining UAV and Sentinel-2 data for improved grassland monitoring based on time series of aboveground biomass. We investigate the potential of combining UAV and Sentinel-2 data in contrasting grasslands including an extensively grazed upland pasture and intensively managed lowland meadows. We use a two-step modelling approach incorporating ground-, UAV-, and satellite-scales. We first derive maps of biomass information from UAV images and ground-based data covering the complete phenological development of grasslands from April to September. Subsequently, we use the high-resolution UAV-based maps from multiple dates in a global machine learning model to estimate intra-annual biomass time series from Sentinel-2 data. First results show that UAV-based maps capture fine-scale spatial patterns of biomass accumulation and removal before and after grazing periods. The Sentinel-2-based time series reproduce vegetation dynamics related to management periods in the two contrasting study areas. Our study demonstrates the potential of combining high-resolution UAV and Sentinel-2 data for establishing monitoring systems for grassland resources. More research is needed to enable multi-scale monitoring of biophysical quantities across different grassland regions in Europe.
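A minimal sketch of the two-step upscaling idea, assuming synthetic grids: a fine UAV biomass map is block-averaged to the 10 m Sentinel-2 grid and then used as the training target for a regression model on Sentinel-2 reflectance (resolutions, model choice and values are illustrative, not those of the study):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Step 1: block-average a fine UAV biomass map (here 0.1 m pixels) to the 10 m grid
uav_biomass = rng.uniform(50, 400, (1000, 1000))      # g/m2, 0.1 m pixels -> 100 m x 100 m extent
factor = 100                                          # 0.1 m -> 10 m
biomass_10m = uav_biomass.reshape(10, factor, 10, factor).mean(axis=(1, 3))

# Step 2: train a model linking Sentinel-2 reflectance to the aggregated biomass
s2_bands = rng.uniform(0.0, 0.5, (biomass_10m.size, 10))   # hypothetical 10-band pixels
model = GradientBoostingRegressor().fit(s2_bands, biomass_10m.ravel())
print(model.predict(s2_bands[:3]))                     # biomass predictions for new S2 pixels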
Agricultural spraying drone based on centrifugal nozzles for precision farming applications
Manuel Vázquez-Arellano1* and Fernando D. Ramírez-Figueroa1
1 Crop Production Robotics (start-up), Institute of Agricultural Engineering, University of Hohenheim, Garbenstrasse 9, Stuttgart 70599, Germany
* manuel@vazquez-arellano.com, mvazquez@uni-hohenheim.com
Introduction
Agriculture is facing enormous challenges: it must provide food, feed, fibre and fuel for an increasing population by using the available arable land more efficiently while avoiding the intensive use of resources like fuel, water, pesticides and fertilizers. Additionally, it must act more ecologically than before and adapt quickly to new conditions such as soil erosion, water supply limitations and environmental protection in times of climate change.
According to Rockström (2009), the rate of biodiversity loss, climate change and human interference with the nitrogen cycle are systems that are already beyond the safe operating boundary. Unfortunately, current spraying practice does not address these problems. New technologies such as unmanned spraying systems (UASS) coupled with satellite technology, Big Data and cloud computing could help to make spraying applications more precise. Crop Production Robotics has taken on the challenge of tackling these problems in the following ways: biodiversity loss, through precise delivery of pesticides to identified pest hotspots for sustainable pest management; climate change, through low-emission application technology with the use of electrically powered UASS; and nitrogen cycle disruption, through precise and demand-driven liquid fertilization with a more homogeneous droplet size spectrum for adequate deposition.
Crop Production Robotics addresses farmers' need to adapt to the previously mentioned urgent issues, which affect their ability to maintain and drive profitability. These issues are also stipulated and regulated in the Farm to Fork Strategy (European Union, 2020), which is at the centre of the European Green Deal and aims to make food systems fair, healthy and environmentally friendly through the following targets by 2030:
Reduce the use and risk of chemical and more hazardous pesticides by 50%
Reduce emissions by 55%
Reduce fertilizer use by at least 20%
Methodology
The strategy of Crop Production Robotics is to design a centrifugal nozzle together with the University of Hohenheim. Droplet size spectrum measurements will be performed in order to analyse not just the typical parameters in the agricultural nozzle industry, such as driftable fines (V100) and the droplet size distribution percentiles Dv10, Dv50 – also known as the volume median diameter (VMD) – and Dv90, but, more importantly, the relative span (RS), an often-underplayed parameter that provides a measure of the droplet size distribution and will be used as feedback for the mechanical design of the centrifugal nozzle. The RS is calculated with the following equation:
RS = (Dv90 - Dv10) / Dv50
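In code form (the values below are illustrative only):

def relative_span(dv10, dv50, dv90):
    """Relative span RS = (Dv90 - Dv10) / Dv50 of a droplet size distribution."""
    return (dv90 - dv10) / dv50

# Illustrative values in micrometres: a narrower spectrum yields a smaller RS
print(relative_span(250, 300, 360))   # ~0.37
print(relative_span(120, 300, 600))   # 1.6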
Improving spray quality in agricultural practice is not just about reducing driftable fines (V100), but about producing the appropriate droplet size distribution to maximize efficacy while minimizing drift potential. We therefore identified the generation of a homogeneous droplet size spectrum by a centrifugal nozzle (as seen in the left image of Figure 1) as a cornerstone for the implementation of a sustainable spraying practice. Moreover, the droplet size spectrum could be adjusted for different target crops/applications while also allowing the implementation of variable rate application.
Figure 1: Comparison between a homogeneous droplet size spectrum produced by a centrifugal nozzle (left) and a heterogeneous one produced by a hydraulic nozzle (right).
VMD alone is a poor way to describe a spray pattern since it provides no information about the droplet size distribution. Figure 2 depicts two different spray droplet size distributions that have the same VMD value of 300 μm, but the centrifugal nozzle has a smaller RS value than the hydraulic nozzle, meaning that the droplet size spectrum of the centrifugal nozzle is more homogeneous around the target droplet size, and thus more effective at sticking to the plant, than that of the hydraulic nozzle.
Figure 2: Spray pattern characterisation of a centrifugal and a hydraulic nozzle with the same VMD value but different RS values
As previously mentioned, the main problem of hydraulic nozzles used in intensive agriculture is that they generate a wide droplet size spectrum in which small droplets can evaporate or drift off, and/or large droplets bounce or roll off the target leaf and land on the soil without achieving the desired purpose (see Figure 3). This is the reason why the scientific community estimates that between 90 and 95% of pesticides land off-target (Blackmore, 2017), causing severe environmental impacts at the cost of the farmers – who pay for the wasted product. It is common practice to incorporate adjuvants in the spray mixture to improve the droplet behaviour once it has left the nozzle and to overcome barriers such as the properties of the solution, the structure of the target plant, the application equipment and the environmental conditions, among others. However, research suggests that adjuvants can be even more toxic than the active ingredients of the pesticides themselves.
Figure 3: Commercial hydraulic nozzles generate a wide droplet size spectrum that wastes pesticide (Source: SKW Stickstoffwerke Piesteritz GmbH; Whitford et al., 2014)
The big picture of the solution proposed by Crop Production Robotics is depicted in Figure 4, where the centrifugal nozzle is the component that performs the actuation, in this case a precise insecticide application, and forms part of the UASS that receives a global navigation satellite system (GNSS) signal and a digital map of pest infestation to perform a precise application. The digital map of pest infestation is generated by the farm management information system (FMIS), and the data is acquired by either remote sensing or an unmanned aerial vehicle (UAV). The UASS and the FMIS exchange bidirectional communication with the use of telematics.
Figure 4: Big picture of the project with an example of pest management
Results
A prototype UASS is being designed and developed (see Figure 5) with a strong focus on the use of European space technology (e.g., Galileo GNSS, Copernicus remote sensing and telematics) to provide security and reliability for the navigation and bidirectional communication between the UASS and the FMIS.
Figure 5: UASS prototype by Crop Production Robotics
Applications and future
The UASS will apply pesticides and liquid fertilizer precisely and in the right amount. The target droplet size generated by the centrifugal nozzle can be modified by setting the rotational speed of the peristaltic pump and the centrifugal nozzle to match the adequate droplet size to the target crop/application. Additionally, variable rate applications are also possible, either by modifying the flying speed of the UASS or the flow rate of the peristaltic pump.
Since the UASS is only used a couple of months a year during the spraying season, other future applications inside greenhouses, such as cooling, pest and humidity control, are conceivable. Additionally, livestock applications such as barn cooling are also possible.
Bibliography
Blackmore, S., 2017. Farming with robots.
European Union, 2020. Farm to Fork Strategy, European Commission.
Rockström, J., 2009. A safe operating space for humanity. Nature 461, 472–475.
Whitford, F., Lindner, G., Young, B., Penner, D., Deveau, J., 2014. Adjuvants and the Power Spray. Purdue Ext.
Recent advances in drone technology and computer vision techniques provide opportunities to improve yield and reduce chemical inputs in the fresh produce sector. We demonstrate a novel real-world approach which combines remote sensing and deep learning techniques to provide accurate, reliable and efficient counting and sizing of fresh produce crops, such as lettuce, in agricultural production hot spots. In production regions across the world, including the UK, USA and Spain, multispectral unmanned aerial vehicles (UAVs) flown over fields acquire high-resolution (~1 cm ground sample distance [GSD]) georeferenced image maps during the growing season. Field boundaries and batch-level zone boundaries are catalogued for the field and provide a unique way for growers to monitor growth in separate regions of the same field to account for different crop varieties or growth stages. These UAV images undergo an orthomosaic process to stitch and geometrically correct the data. Next, for counting and sizing metrics, we leveraged a Mask R-CNN architecture with an edge agreement loss to provide fast object instance segmentation [1,2]. We optimised and trained the architecture on over 75,000 manually annotated training images across a number of diverse geographies world-wide. Semantic objects belonging to the crop class are vastly outnumbered by background objects in the field, such as machinery, rocks, soil, weeds, fleece material and dense patches of vegetation. Crop objects and background objects are also not colocalised in the same space in the field, meaning a single training image suffers from class imbalance, and in many cases training samples rich with background class labels do not contain a single crop label to discriminate against. We therefore incorporate a novel on-the-fly inpainting approach to insert positive crop labels into completely crop-negative training samples to encourage the Mask R-CNN model to learn as many background objects as possible. Our approach achieves a segmentation Intersection over Union (IoU) score of 0.751 and a DICE score of 0.846, with an object detection precision score of 0.999 and a recall score of 0.995. We also developed a fast, novel computer vision approach to detect crop row orientation in order to display counting and sizing information to the grower at different levels of granularity with increased readability. This approach gives growers an unprecedented level of large-scale insight into their crop and is used for a number of valuable metrics such as establishment rates, growth stage, plant health, and homogeneity, whilst also assisting in forecasting optimum harvest dates and yield (Figure 1a). These innovative science products in turn help reduce waste by optimising and reducing inputs to make key actionable decisions in the field. In addition, counting and sizing allows the generation of bespoke variable rate nitrogen application maps that can be uploaded straight to machinery, increasing crop homogeneity and yield whilst simultaneously reducing chemical usage by as much as 70% depending on the treatment plan (Figure 1b). This brings additional environmental benefits through reduced nitrogen leaching and promotes more sustainable agriculture.
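For reference, a minimal sketch of how the reported segmentation metrics (IoU and DICE) are computed for binary masks; the masks below are synthetic:

import numpy as np

def iou_and_dice(pred, truth):
    """Intersection over Union and DICE coefficient for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), bool); truth[12:42, 12:42] = True
print(iou_and_dice(pred, truth))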
Figure 1. Example plant counting and sizing outputs. (a) Sizing information per detected plant (measured in cm²) using the Mask R-CNN model trained with edge agreement loss. (b) Variable rate Nitrogen application plan clustered into three rates based on plant size, orientated to the direction of the crop row.
[1] He, K., Gkioxari, G., Dollar, P. and Girshick, R., 2020. Mask R-CNN. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(2), pp.386-397.
[2] Zimmermann, R. and Siems, J., 2019. Faster training of Mask R-CNN by focusing on instance boundaries. Computer Vision and Image Understanding, 188, p.102795.
People in the Arctic have been experiencing severe changes to their landscapes for several decades. One cause is the thawing of permafrost and thermokarst, which affects the livelihoods of indigenous people. The thawing process of permafrost is also associated with ecological impacts including the release of greenhouse gases.
Thawing is evident from very small-scale changes and disturbances to the land surface, which have so far been inadequately documented. By fusing local knowledge on landscape changes in Northwest Canada with remote sensing, we seek to thoroughly understand and monitor land surface changes attributable to permafrost thaw. The goal is to improve our knowledge of permafrost thaw impacts through the acquisition and analysis of UAV (Unmanned Aerial Vehicle) and satellite imagery together with young citizen scientists from schools in Northwest Canada and Germany. The high-resolution UAV data will be utilized as a ground-truthing baseline dataset for further analyses employing optical and radar remote sensing time series to gain a better understanding of long-term changes in the region. This approach allows for the expansion of spaceborne remote sensing to very inaccessible regions in the global north while maintaining knowledge of conditions on the ground. Due to the planned acquisition period of multiple years as well as the fast pace of environmental change on the ground, change detection is possible within short time periods. Because one of the main goals of this project is the use of cost-efficient, consumer-grade UAVs, flight parameters must be optimized to enable precise 3D models created by SfM (Structure from Motion) that are comparable over time as well as consistent with the spaceborne remote sensing datasets.
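One flight parameter commonly balanced in such planning is the ground sample distance (GSD); a minimal sketch of the standard relation, with an illustrative consumer-grade camera configuration (the specific values are assumptions, not project settings):

def ground_sample_distance(pixel_pitch_um, focal_length_mm, flight_height_m):
    """GSD in cm per pixel from pixel pitch, focal length and flying height."""
    return (pixel_pitch_um * flight_height_m) / (focal_length_mm * 10.0)

# e.g., ~2.4 um pixel pitch, 8.8 mm lens, 80 m above ground (illustrative values)
print(round(ground_sample_distance(2.4, 8.8, 80), 2), "cm/px")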
Permafrost soil oftentimes stands out due to its striking polygonal surface features, especially if degradation processes have already set in. These structures span different spatial scales and can be used to determine the degree of degradation. The very high-resolution UAV imagery provides insights into the small-scale thermo-hydrological and geomorphological processes controlling permafrost thaw. By using UAV datasets to deliver labeled data to train automatic AI-based classification and image enhancement schemes, land surface disturbances could be detected at the Arctic scale with the high temporal repeat acquisitions of satellite remote sensing platforms. Thus, a comprehensive archive of observable surface features indicating the degree of degradation can be developed. For this, an automated workflow is going to be implemented, deriving the surface features from the acquired datasets with a subsequent analysis and monitoring of permafrost degradation based on classical image processing approaches as well as AI-based classification methods.
In support of these methods, citizen scientists are involved in the classification and evaluation process. To this end, school classes from both countries will participate in "virtual shared classrooms" to collect and analyze high-resolution remote sensing data. Students in Germany will be able to gain a direct connection to Northwest Canada through data and knowledge exchange with class mentors. The goals are to transfer knowledge and raise awareness about global warming, permafrost, and related regional and global challenges. The scientific data will provide new insights into biophysical processes in Arctic regions and contribute to a large-scale understanding of the state and change of permafrost in the Arctic.
The project “UndercoverEisAgenten”, funded by the Federal Ministry of Education and Research in Germany, was initiated in summer 2021.
Remote sensing analyses of high–alpine landslides are required for future alpine safety. In critical stages of alpine landslides, both high spatial and temporal resolution optical satellite and UAS (unmanned aerial system) data can be employed, using image registration, to derive ground motion. The availability of today’s high temporal optical satellite (e.g. PlanetScope, Sentinel-2) data suggests that short-term changes can possibly be detected; however, the limitations of this data regarding qualitative, spatiotemporal, and reliable early warnings of gravitational mass movements have not yet been analysed and extensively tested.
This study investigates the effective detection and monitoring potential of PlanetScope Ortho Tile satellite imagery (3.125 m, daily revisit rate) between 2017 and 2021. These results are compared to high-accuracy UAS orthoimages (0.16 m, 7 acquisitions from 2018-2021). We applied two image registration approaches: phase correlation (PC), a robust area-based algorithm implemented in COSI-Corr, and an intensity-based dense inverse search (DIS) optical flow algorithm performed by IRIS. We investigated mass wasting processes in a steep, glacially eroded, high-alpine cirque, Sattelkar (2,130-2,730 m asl), Austria. It is surrounded by a headwall of granitic gneiss, with a cirque infill characterised by massive volumes of glacial and periglacial debris, rockfall deposits, and remnants of a dissolving rock glacier. Since 2003, the dynamics of these processes have increased, and between 2012 and 2015 rates of up to 30 m/a were observed.
Both algorithms, PC and DIS, partially estimate false-positive ground motion due to poor satellite image quality and imprecise image and band co-registration. The reliability of displacement calculated from the satellite data can be assessed by comparison with the results from UAS imagery. These results are qualitatively supported by manually traceable boulders (< 10 m) in the UAS orthophotos.
Displacement calculations from UAS imagery provide knowledge about the extent and internal zones of the landslide body for both algorithms. For the very high spatial resolution UAS data, however, PC is limited to 12 m of ground motion because of decorrelation and ambiguous displacement vectors, which result from excessive ground motion and surface changes. In contrast, DIS returns more coherent displacement rates with no upper displacement limit but some underestimated values. Displacement rates derived from PlanetScope show zones of differing ground motion similar to the UAS results, while at the same time showing no decorrelation. Nevertheless, for some image pairs the signal-to-noise ratio is poor, and hot-spots can only be detected with the help of existing UAS results and the high temporal resolution of the satellite data.
Knowledge of data potential and applicability is required to detect gravitational mass movements reliably and precisely. UAS data provide trustworthy, relative ground motion rates for moderate velocities, thus enabling us to draw conclusions regarding internal landslide processes. In contrast, satellite data return results which cannot always be clearly delimited due to limitations in spatial resolution, precision, and accuracy. Nevertheless, applying optical flow to landslide displacement analysis improves the validity of the results and shows great potential for future use. Because the robust PC returns noise when correlation is lost, whereas DIS does not, the true displacement values from DIS are actually underestimated.
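For illustration, a minimal sketch of both registration approaches using the generic OpenCV implementations (not COSI-Corr or IRIS) on synthetic image chips; multiplying the pixel shifts by the image ground sampling distance would yield ground motion:

import numpy as np
import cv2

rng = np.random.default_rng(4)

# Synthetic co-registered, single-band image chips from two dates (second shifted by a few pixels)
img_t0 = (rng.random((256, 256)) * 255).astype(np.uint8)
img_t1 = np.roll(img_t0, shift=(3, 5), axis=(0, 1))

# Area-based phase correlation: one sub-pixel shift for the whole chip
(dx, dy), response = cv2.phaseCorrelate(img_t0.astype(np.float32), img_t1.astype(np.float32))
print("phase correlation shift (px):", round(dx, 2), round(dy, 2))

# Dense inverse search optical flow: a per-pixel displacement field
dis = cv2.DISOpticalFlow_create(cv2.DISOPTICAL_FLOW_PRESET_MEDIUM)
flow = dis.calc(img_t0, img_t1, None)                  # H x W x 2 array of (dx, dy)
print("median displacement (px):", round(float(np.median(np.linalg.norm(flow, axis=2))), 2))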
[Background]
The workflow for estimating surface temperature in agricultural fields from multiple sensors needs to be optimized by testing the actual user performance of each type of sensor. In this sense, readily available miniaturized UAV-based thermal infrared (TIR) cameras can be combined with proximal sensors to measure the surface temperature. Before these types of cameras can be used operationally in the field, laboratory experiments are needed to fully understand their capabilities and all influencing factors.
[Research Objectives]
The primary goal of the research is to explore the feasibility of applying different types of miniaturized TIR cameras to field practices requiring high accuracy, such as crop water stress mapping. The results of the controlled-environment experiments will be used to put forward practical recommendations for the design of field tests, in order to obtain high-precision field measurements.
Specifically, the influence of the intrinsic characteristics of the TIR cameras on accurate temperature measurement has been tested based on the following research questions: a. How long does it take for the miniaturized TIR cameras to stabilize after being switched on? b. How does the periodic process of non-uniformity correction (NUC) affect the temperature measurements? c. To what extent can we explain the variation within the response across TIR imagery? d. Do changes in sensor temperature have a significant impact on the measured temperature values of the UAV-mounted and handheld TIR cameras? In addition, the influence of environmental factors has also been tested: e. Does the measuring distance have a strong effect on the measured temperature values of UAV-mounted and handheld TIR cameras? f. How do changes in wind and radiation affect the temperature measured by a UAV-mounted TIR camera?
[Methods]
For this study, we used two radiometric TIR cameras designed specifically for use on a UAV (WIRIS 2nd GEN and FLIR Tau 2), and two handheld cameras used only for reference measurements on the ground (FLIR E8-XT and NEC Avio S300SR). All of these miniaturized TIR cameras use a core equipped with a vanadium oxide (VOx) microbolometer focal plane array (FPA), and their working principle is comparable to that of other camera models. Therefore, the findings obtained with these cameras provide a useful reference for tests with other models.
The main research method is to design a series of experiments that control individual variables in a laboratory environment to determine the influence of the ambient environment and the TIR cameras' intrinsic characteristics on the accuracy of temperature measurement. Once all key parameters and environmental factors have been adjusted and quantified, the experimental design of the field tests can be optimized by evaluating the laboratory results.
Five experiments were conducted to test the response characteristics of the TIR sensors to thermal radiation signals. Three of the experiments explored the influence of the intrinsic characteristics of the TIR cameras on the temperature measurements: (a) assessing the stabilization time of the TIR cameras, (b) generating calibration curves by measuring the cameras' responses to different sensor temperatures, achieved indirectly by adjusting the ambient temperature, and (c) assessing the sensors' fixed-pattern noise and/or vignetting effects. The remaining sessions aimed to explain the influence of ambient environmental factors on accurate measurements: (d) the effect on the measured temperature of the change in the thickness of the atmospheric layer between the sensor and the target, caused by varying the distance between the camera and the blackbody, and (e) assessing wind and heating effects on the temperature outputs of the cameras. All sub-experiments in this research used two blackbody calibrators with fixed temperatures of 35 °C and 55 °C to compare the performance of the adopted cameras against the target objects.
[Results and Conclusions]
The laboratory experiments in a climate room suggest that the duration of the warm-up period may vary among different models. However, half an hour for handheld cameras and one hour for UAV-mounted cameras can generally guarantee acceptable measurement accuracy afterward. During measurements, the influence of automatic NUC on measurement accuracy should not be neglected. It is recommended to contact the manufacturers to understand the NUC's effects based on the differences between the factory calibration and user tests. To diminish the effect of noise in the measured signal, it is recommended to apply signal processing knowledge. Concerning the influence of the cameras' intrinsic characteristics, variation in sensor temperature and vignetting effects in the images both negatively influence measurement accuracy. According to the results of the wind and radiation tests and the distance tests, ambient environmental influencing factors that occur in field tests should also be accounted for in the experimental design. The measurement uncertainty may grow to several degrees if these factors are not considered. In the noise compensation experiments, pixels toward the edge of the sensor record lower-than-average values while those toward the center record higher-than-average values because of vignetting effects. Further experiments in the field are needed to exclude the influence of uneven heat distribution over the surface of the blackbody.
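As an illustration of the fixed-pattern noise and vignetting issue discussed above, a minimal sketch of a per-pixel offset correction derived from averaged frames of a uniform blackbody; all values are synthetic and the procedure is a generic flat-field correction, not the calibration used in this study:

import numpy as np

rng = np.random.default_rng(5)

# Synthetic stack of raw frames of a uniform 35 degC blackbody, with a vignetting-like gradient
gradient = np.linspace(-0.8, 0.0, 160)[None, :]
frames = 35.0 + rng.normal(0, 0.2, (50, 120, 160)) + gradient

flat = frames.mean(axis=0)                      # per-pixel mean response to a uniform target
offset = flat - flat.mean()                     # fixed-pattern / vignetting offset map

def correct(frame):
    """Subtract the per-pixel offset estimated from the blackbody reference."""
    return frame - offset

test = 55.0 + rng.normal(0, 0.2, (120, 160)) + gradient
print(round(float(test.std()), 3), "->", round(float(correct(test).std()), 3))  # more uniform after correction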
Pseudo-satellites are unmanned aerial platforms flying at an altitude of 20 km or above, in the region known as stratospheric airspace. This region is particularly interesting for long-term operations due to the absence of meteorological phenomena and the high atmospheric stability. For Earth Observation missions it offers a series of advantages with respect to space-based counterparts, such as higher-resolution imagery due to proximity to the ground, and more persistent operations, as these platforms can continuously fly over the same region for longer time intervals.
Although the idea of exploiting this region of the atmosphere was first suggested as early as the 1990s, it is only now, after several development and feasibility projects have reached more advanced stages, that industry has begun to devise new vehicles to provide services from the stratosphere. So far, this region has remained largely free of air traffic, but it is expected to reach high occupancy, standing out for the presence of very diverse new actors in the frame of New Space, with private operators appearing alongside the traditional concept of operations by national agencies. In this new environment, with the added integration of a very heterogeneous group of vehicles, new approaches have arisen to control large fleets of these high-altitude vehicles, resembling satellite constellations. This new concept requires a fundamentally innovative technological and regulatory evolution. This evolution, among other aspects, relates to the safe control and operation of these vehicles and their interactions with each other and with other operators.
This work presents a study of the safety of operations of stratospheric platform constellations. The assessment is conducted according to the Eurocontrol Safety Assessment Methodology (SAM). As there are no applicable frameworks or procedures defined for pseudo-satellite operations, it was deemed necessary to analyse, prior to the SAM, key catastrophic safety feared events in order to determine the main safety functions required to ensure safe operations.
This has led to the identification of relevant mitigation means and safety requirements that need to be achieved to assure an acceptable level of risk. They have ultimately been compared with current procedures used for their space and low-altitude counterparts, which, additionally, has demonstrated their feasibility.
SCC4HAPS
Integrated Satellite and HAPS Control Center
D. I. Gugeanu(1), B. M. Peiro(2), E. R. Jimenez(2), G. D. Muntean(1)
(1) GMV, SkyTower, 32nd floor 246C Calea Floreasca, Sector 1, Bucharest, Romania, Email: daniel.gugeanu@gmv.com, gmuntean@gmv.com
(2)GMV, Isaac Newton 11, P.T.M. Tres Cantos, Madrid, Spain, Email: abmartin@gmv.com, erivero@gmv.com
High-altitude pseudo-satellites (HAPS) are aircraft (airplanes, airships or balloons) positioned above 20 km altitude, ideally designed to fly for a long time in the stratosphere, providing services conventionally delivered by artificial satellites orbiting the Earth.
Due to their capability to stay in a quasi-stationary position in the lower stratosphere, HAPS combine the desired characteristics of both satellites and terrestrial wireless communications (low-latency and high-quality communications), in addition to other considerations such as fast deployment and cost.
GMV is currently developing the adaptations required to off-the-shelf solutions to integrate High Altitude Pseudo-Satellites (HAPS) into satellite control centres, and is also developing a prototype of the satellite control system for HAPS within an ESA project under the ARTES programme (ESTEC Contract no 4000132544/20/NL/CLP). The partners in this project are GMV-RO and ATD AEROSPACE RS SRL, supported by two external entities which will be involved in specific activities (HISPASAT and Universidad de Leon).
This activity was started in the context of a renewed interest in HAPS as assets for providing different services, especially telecommunications and remote sensing for civilian or military applications, and aims at their use within an integrated monitoring and control centre for large fleets of satellites and HAPS.
The project aims to ease the adoption of HAPS by telecommunication satellite operators by paving the way to integrated multi-layer (satellite, HAPS and ground) operations. The immediate project objective is to define and demonstrate the adaptations needed to their “existing satellite control systems” to operate HAPS in an integrated way. The project objective is understood to be, rather than the development of a specific solution, the establishment of the basis for any future Mission Control Centre development that can target the satellite telecommunications operators market.
In new communications services where satellites and HAPS contribute, the control centre and its operations for both platform and payload should be unique and centralized to effectively orchestrate all the components.
The result of this activity could therefore be used by any ground systems provider to some extent. The specification and design will save costs that would otherwise be recurrent. More importantly, the value of this project lies in the identification of key aspects that will make future ground products/services commercially attractive to satellite operators willing to adopt HAPS and to HAPS Platform Service Suppliers selling their services to satellite operators.
The mission planning of a large constellation of satellites and HAPS will pose a challenge that can be reasonably managed with current state-of-the-art planning and automation tools. Beyond the operational impact, from the technology point of view the need to handle hundreds of assets in control centre solutions will be a major challenge in itself. New-generation software-defined payloads enable “dynamic” or “flexible” missions that are defined once the satellite or HAPS is already flying, so that the platform can be employed for different purposes along its lifecycle. As a consequence, challenges appear in the areas of the mission design function and the payload control function.
The French Land Data and Services Center: Theia
BAGHDADI Nicolas
INRAE, UMR TETIS, 500 rue François Breton, 34093 Montpellier cedex 5, France
Abstract:
The Theia Land Data and Services Center is a French national inter-agency organization designed to foster the use of Earth Observation images for documenting changes on land surfaces. It was created in 2012 with the objective of increasing the use of space data in complementarity with in situ and airborne data by the scientific community and public actors. The first few years have made it possible to structure the national scientific and user communities, pool resources, facilitate access to data and processing capacities, federate various previously independent initiatives, and disseminate French achievements on a national and international scale. Dissemination and training activities targeting users in other countries have since been developed. Theia is part of the "DataTerra" Research Infrastructure with ODATIS (Ocean Data and Services), ForM@Ter (Solid Earth Data and Services) and AERIS (Atmospheric Data and Services).
Theia is structuring the French science community through 1) a mutualized Service and Data Infrastructure (SDI) distributed between several centers, allowing access to a variety of products; 2) the setup of Regional Animation Networks (RAN) to federate users (scientists and public / private actors) and 3) Scientific Expertise Centers (SEC) clustering virtual research groups on a thematic domain. A strong relationship between SECs and RANs is being developed to both disseminate the outputs to the user communities and aggregate user needs. The research works carried out in two SECs are presented, and they are organized around the design and development of value-added products and services.
The scientific community and public actors are the main target audience of the action, but the private sector can also benefit from the synergies created by the Theia cluster. Indeed, most of the data is distributed under an open license and the algorithms are open source. The training component, to be consolidated, will contribute to strengthening the capacity of all these users in the longer term.
Index Terms – Theia, France, Land, Spatial Data Infrastructure (SDI), Scientific Expertise Centers (SEC), Regional Animation Networks (RAN), satellite imagery, products, and services
Interferometric SAR observations of surface deformation are a valuable tool for investigating the dynamics of earthquakes, volcanic activity, landslides, glaciers, etc. To evaluate the accuracy of deformation measurements obtained from different existing or potential spaceborne InSAR configurations (different wavelengths, spatial resolutions, look geometries, repeat intervals, etc.), NASA is developing the Science Performance Model (SPM) in the context of the NISAR and follow-on Surface Deformation Continuity missions. The SPM allows for simulating different InSAR configurations and considers the major error sources affecting the accuracy of deformation measurements, such as ionospheric and tropospheric propagation delays or the effects of spatial and temporal decorrelation. In this NASA-funded study, we generated a global temporal coherence and backscatter data set for four seasons with a spatial resolution of 3 arcsec using about 205,000 Sentinel-1 6- and 12-day repeat-pass images to complement the SPM with spatially detailed information on the effect of temporal decorrelation at C-band. Global processing of one year of Sentinel-1 Interferometric Wide Swath (IW) repeat-pass observations acquired between December 2019 and November 2020 to calculate all possible 6-, 12-, 18-, 24-, 36-, and 48-day repeat-pass coherence images (6- and 12-day repeat-pass where available) requires fast data access and sufficient compute resources to complete processing at this scale. We implemented a global S1 coherence processor using established solutions for processing Sentinel-1 SLC data. Input data were streamed from the Sentinel-1 SLC archive of the Alaska Satellite Facility and processed with the InSAR processing software developed by GAMMA Remote Sensing (www.gamma-rs.ch) coupled with cloud-scaling processing software employing Amazon Web Services developed by Earth Big Data LLC (earthbigdata.com). The processing was done on a per relative orbit basis and includes co-registration of SLCs to a common reference SLC, calculation of differential interferograms including slope-adaptive range common band filtering, and coherence estimation with adaptive estimation windows, which ensures a low coherence estimation bias of < 0.05. To account for the steep azimuth spectrum ramp in each burst, most of the processing steps are performed in the original burst geometry of the S1 SLCs so that information in the overlap areas of adjacent bursts is processed separately. Terrain-corrected geocoding to the 3x3 arcsec target resolution and simulation of the topographic phase rely on S1 precision orbit information and the GLO-90-F Copernicus DEM. Alongside the coherence imagery, backscatter images are processed to the radiometrically-terrain-corrected (RTC) level. Seasonal composites of the 6-, 12-, and longer-interval coherence imagery as well as of the RTC backscatter are generated. Based on the coherence values, coherence decay rates were determined per season with an exponential decay model. The processing of the individual coherence images, RTC backscatter images, seasonal coherence and backscatter composites as well as the pixel-level coherence decay modeling results could be completed in about a week with a data throughput from SLC to finished tiled products of about 10 TB/hour.
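The per-season coherence decay modelling mentioned above can be illustrated with a small, hedged sketch: assuming per-pixel coherence estimates at several repeat intervals, an exponential decay model of the form gamma(t) = (gamma0 - gamma_inf) * exp(-t / tau) + gamma_inf can be fitted with SciPy. The variable names, model parameterisation and sample values below are illustrative and not the authors' production code.

```python
import numpy as np
from scipy.optimize import curve_fit

def coherence_decay(t, gamma0, gamma_inf, tau):
    """Exponential coherence decay: gamma(t) = (gamma0 - gamma_inf) * exp(-t / tau) + gamma_inf."""
    return (gamma0 - gamma_inf) * np.exp(-t / tau) + gamma_inf

# Illustrative per-pixel coherence estimates at the repeat intervals used in the dataset (days)
t_days = np.array([6, 12, 18, 24, 36, 48], dtype=float)
gamma_obs = np.array([0.72, 0.55, 0.46, 0.40, 0.33, 0.30])

# Fit the decay parameters; bounds keep the solution physically plausible (coherence in [0, 1])
popt, pcov = curve_fit(
    coherence_decay, t_days, gamma_obs,
    p0=(0.8, 0.2, 12.0),
    bounds=([0.0, 0.0, 1.0], [1.0, 1.0, 200.0]),
)
gamma0, gamma_inf, tau = popt
print(f"gamma0={gamma0:.2f}, gamma_inf={gamma_inf:.2f}, tau={tau:.1f} days")
```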
The data set now resides at two openly accessible locations, the NASA DAAC at the Alaska Satellite Facility (https://asf.alaska.edu/datasets/derived/global-seasonal-sentinel-1-interferometric-coherence-and-backscatter-dataset/) and the AWS Registry of Open Data (https://registry.opendata.aws/ebd-sentinel-1-global-coherence-backscatter/). A suite of open-source visualization tools has been generated using the Python ecosystem to access and visualize this global data set efficiently. These tools take advantage of Jupyter-notebook-based implementations and efficient metadata structures on top of the openly available data set on AWS. We will present production steps and visualization examples in this talk.
Within the framework of the SARSAR project, which aims to use the Sentinel satellite data of the European Copernicus programme for the monitoring of redevelopment sites, a processing chain has been developed for change detection and classification. The need for such a methodology arises from the fact that the Walloon region, the southern part of Belgium, has to manage an inventory of more than 2220 “Redevelopment Sites” (RDS), mainly former abandoned industrial sites, which represent a deconstruction of the urban canvas but also offer an opportunity for sustainable urban planning thanks to their potential for redevelopment. The management of the inventory, which is mostly done by field visits, is costly in terms of both time and resources, and using Earth Observation data is a real opportunity to develop an operational tool for prioritizing the sites to be further investigated manually. It allows selecting only the sites presenting signs of change and already provides an indication of what type of change to expect.
The general processing chain we have developed enables us to process the images in order to detect and classify changes and therefore provide a final report with results directly usable by public authorities. More precisely, in SARSAR it consists of the three following successive blocks. The first block includes the following steps: selection of the relevant Sentinel data (selection of images based on the percentage of clouds for Sentinel-2, ...), clipping based on the RDS polygons coming from the inventory vector file, extraction of sigma0VH from Sentinel-1 and of indices from Sentinel-2, linear interpolation to fill in the gaps, and smoothing of the data using a Gaussian kernel with a standard deviation of 61. These steps lead to the creation of a temporal profile per feature and per RDS. The second block consists first in applying the PELT (Pruned Exact Linear Time) change detection method, which is based on the solution of a minimization problem and is able to provide an exact segmentation of the temporal profiles. This makes it possible to determine whether a change has occurred and, if so, to estimate its date. Secondly, various Sentinel-2 indices and the Sentinel-1 sigma0VH are used to determine the type of change (vegetation, building or soil), the direction of the change if any and its amplitude. Finally, the third block is the automatic production of reports, directly usable by the field operators, presenting the results by RDS and providing a priority order of the RDS to be investigated.
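As a hedged illustration of the second block, a PELT segmentation of a per-RDS temporal profile can be reproduced with the open-source ruptures package; the smoothing parameters, penalty value and synthetic profile below are placeholders, not the project's operational settings.

```python
import numpy as np
import ruptures as rpt
from scipy.ndimage import gaussian_filter1d

# Illustrative temporal profile of a Sentinel-2 index for one RDS, with a change half-way through
rng = np.random.default_rng(42)
profile = np.concatenate([0.6 + 0.05 * rng.standard_normal(120),
                          0.3 + 0.05 * rng.standard_normal(120)])

# Gap-free, smoothed profile (the project first interpolates gaps; here the series is already regular)
smoothed = gaussian_filter1d(profile, sigma=5)

# PELT change detection: exact segmentation under a penalised cost function
algo = rpt.Pelt(model="rbf", min_size=10).fit(smoothed)
breakpoints = algo.predict(pen=5)  # indices of segment ends; the last entry is len(smoothed)
print("Detected breakpoints (time-step indices):", breakpoints[:-1])
```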
The processing chain has been implemented in the Belgian Copernicus Collaborative Ground Segment, TERRASCOPE (managed by VITO), which offers, via virtual machines and Jupyter notebooks, pre-processed Sentinel data (L2A Sentinel-2) and computing capacity. This allows the whole workflow to be automated while processing a large amount of data and providing near-real-time results.
The TERRA2SAR project presents the improvements made to the code of the original processing chain in order to share operational Python Jupyter Notebooks that can be reproduced in various scientific domains. The same type of processing chain could be useful to a larger scientific community and for other types of applications, specifically the monitoring of mid- and long-term land-cover changes at a selection of sites of different sizes spread over large areas. For example, it could be used to monitor the same type of brownfields in other countries, as a decision-support tool to distinguish between different types of grasslands (temporary or permanent), or to detect changes at specific sites (airports, ports, railroads, ...).
The project is divided into two parts. The first provides Notebooks compatible with the standard TERRASCOPE virtual machine configuration. This methodology uses the common GDAL library and an SQL database engine, SQLite. It uses 8 GB of RAM, is single-threaded due to SQLite limitations, and is accessible to one user at a time. In the end, this methodology is suitable for small or limited data sets, either in terms of geographic or temporal footprint. It is easier to read and modify and lends itself better to experimentation. The second part provides Python Jupyter Notebooks based on an upgraded TERRASCOPE configuration. This upgrade consists of moving to a dedicated machine with 24 GB of RAM, 12 CPU cores, and a personalized PostgreSQL/PostGIS installation. This methodology is more stable and more efficient than the ‘SQLite methodology’, as it allows faster computation and multi-threading. Moreover, it is accessible to several users/software clients at a time. As disadvantages, this methodology requires resources that are not part of the standard package for TERRASCOPE users, and more qualified personnel for implementation and maintenance. In the end, it is suitable for the production phase of applications that require the manipulation of big data sets. It should be noted that this upgraded version of the TERRASCOPE configuration is provided by VITO only on demand for other projects that might be interested in it.
The Instrument Data Evaluation and Analysis Service for Quality Assurance for Earth Observation (IDEAS-QA4EO) provides an operational solution to monitor the quality of Earth Observation (EO) instrument data from a wide range of ESA satellite missions currently in operation. Within the IDEAS-QA4EO service activities, the need has emerged to promote better interoperability among the different domains and to ease the access and exploitation of EO data, notably for Cal/Val activities.
To this end, a demonstrator pilot started in November 2020 with the main objective of implementing a new working environment in which to effectively access the data archive, develop new algorithms, and integrate them into a performant processing environment, with the additional possibility of uploading ancillary and fiducial reference data and of sharing code and results in a collaborative environment.
The Earth Console platform, operated by Progressive Systems, is a scalable cloud-based platform encompassing a set of services to support and optimize the use and analysis of EO data. The Earth Console services are available via the ESA Network of Resources (NoR) and interface the CREODIAS platform containing most of the Copernicus Sentinel satellite data and services, as well as Envisat, Landsat, and other EO data. During the user and system requirements analysis for the pilot project, the Earth Console platform has proved to be a very promising infrastructure solution, and the subsequent development and data analysis activities performed on this environment, focused on ad-hoc Cal/Val use cases, have shown interesting results.
This paper presents the main functionalities and data exploitation possibilities of the implemented solution, by illustrating some sample use cases and demonstrating the advantages of such a platform for data validation purposes.
In detail, a statistical analysis of Sentinel-2 Bottom-Of-Atmosphere (BOA) reflectances over a subset of globally spread and spatially homogeneous land sites was performed to investigate the spatial-temporal consistency of these operational products and detect any potential land-cover dependent biases. Furthermore, a validation procedure of S2 BOA products has been implemented: the approach, already used in the Atmospheric Correction Intercomparison Exercise (ACIX), consists in building a synthetic surface reflectance dataset around the AERONET ground-based stations; this dataset is computed by correcting satellite Top-Of-Atmosphere (TOA) reflectances using the AERONET atmospheric state variables and an accurate Radiative Transfer Model (RTM).
As part of Sentinel-3/OLCI validation activities, an assessment of the Bright Pixel Correction algorithm has been performed: OLCI Level 1 products have been extracted over specific coastal areas and processed with the BPC processor to produce marine reflectance. The related turbidity maps were then compared with those obtained from operational Level-2 products. Within the same activity, a validation procedure of marine reflectances has been analyzed and its implementation has already started: given a list of in situ radiometric data, the matchups with Sentinel-3/OLCI data are identified and the related L1 products processed with the BPC algorithm, then the obtained marine reflectances are validated with the in-situ measurements.
A similar approach has been followed for the Sentinel-5p products validation activity: the objective is to implement a procedure to validate the operational products with ground-truth datasets. To this end, a subset of in-situ measurements (e.g. AERONET, BAQUNIN) has been selected and the matchups with Sentinel-5p identified. Then, the aerosol and trace gas TROPOMI products have been validated against in-situ data extracted over a temporal window centered at the Sentinel-5p overpass time.
The use of the Earth Console platform for these exercises allowed accessing the full S2, S3 and S5p archives together with in-situ measurements uploaded to the platform for this purpose. In addition, the Jupyter Notebooks developed within these activities have been made available in a public knowledge library, with the main purpose of building a collaborative environment for sharing code and results among different users, enriching the collection of available software, tools and ready-to-use notebooks, promoting algorithm development and fostering interoperability among QA4EO service domains.
Sen2Cor (latest version 2.10) is the official ESA Sentinel-2 processor for the generation of the Level-2A Bottom-Of-Atmosphere reflectance products starting from Level-1C Top-Of-Atmosphere reflectance. In this work, we introduce Sen2Cor 3.0, an evolution of Sen2Cor 2.10 able to perform the processing of Landsat-8 Level-1 products in addition to Sentinel-2 Level-1C products.
In this study, we test the resulting capability of the Sen2Cor 3.0 algorithms (also updated to work in a Python 3 environment), such as the scene classification and the atmospheric correction, to process Landsat-8 Level-1 input data. This work is part of the Sen2Like framework, which aims to support Landsat-8/9 observations and to prepare the basis for future processing of large sets of data from other satellites and missions. Testing and measuring the capacity of Sen2Cor 3.0 to adapt to different inputs and reliably produce the expected results is, thus, crucial.
Sentinel-2 and Landsat-8 have seven overlapping spectral bands and their measurements are often used in a complementary way for studying and monitoring, for example, the status and variability of the Earth’s vegetation and land conditions. However, there are also important differences between these two sensors, such as the spectral-band responses, spatial resolution, viewing geometries and calibrations. These differences are all reflected in the resulting L1 products; a dedicated process to handle them is thus needed. Moreover, contrary to Sentinel-2, Landsat-8 does not have the water-vapour band that is used by Sen2Cor to perform the atmospheric correction of Sentinel-2 products. Important information is therefore missing, and further implementation is required to retrieve the necessary data from external sources to prepare the scene for Landsat-8 processing. Moreover, a new set of Look-Up Tables had to be prepared.
In this work, we address the modifications applied to Sen2Cor and the uncertainty due to the Level 1 to Level 2 processing methodology. Further, we present a qualitative comparison between Sen2Cor 3.0 generated Sentinel-2 and Landsat-8 L2 products and Sen2Cor 2.10 generated Sentinel-2 L2A products. Finally, we list foreseen optimizations for future development.
Sen2Cor is a Level-2A processor whose main purpose is to correct single-date Sentinel-2 Level-1C products for the effects of the atmosphere in order to deliver a Level-2A surface reflectance product. Side products are Cloud Screening and Scene Classification (SCL), Aerosol Optical Thickness (AOT) and Water Vapour (WV) maps.
The Sen2Cor version 2.10 has been developed with the aim to improve the quality of both the surface reflectance products and the Cloud Screening and Scene Classification (SCL) maps in order to facilitate their use in downstream applications like the Sentinel-2 Global Mosaic (S2GM) service. This version is planned to be used operationally within Sentinel-2 Ground Segment and for the Sentinel-2 Collection 1 reprocessing.
The Cloud Screening and Scene Classification module is run prior to the atmospheric correction and provides a Scene Classification map divided into 11 classes. This map does not constitute a land-cover classification map in a strict sense. Its main purpose is to be used internally in Sen2Cor’s atmospheric correction module to distinguish between cloudy, clear and water pixels. Two quality indicators are also provided: a cloud confidence map and a snow confidence map with values ranging from 0 to 100 (%).
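To illustrate how downstream users typically exploit the SCL map, the sketch below masks cloud-affected pixels from a Sentinel-2 L2A band using the commonly documented SCL class codes (3: cloud shadow, 8/9: cloud medium/high probability, 10: thin cirrus); the file paths are placeholders and this snippet is not part of Sen2Cor itself.

```python
import numpy as np
import rasterio

# Placeholder paths to a 20 m L2A band and the corresponding SCL band of the same granule
with rasterio.open("T32TQM_20220612T101031_B04_20m.jp2") as src:
    red = src.read(1).astype(float)
with rasterio.open("T32TQM_20220612T101031_SCL_20m.jp2") as src:
    scl = src.read(1)

# Commonly documented SCL codes: 3 cloud shadow, 8/9 cloud medium/high probability, 10 thin cirrus
cloud_classes = (3, 8, 9, 10)
mask = np.isin(scl, cloud_classes)

red_masked = np.where(mask, np.nan, red)
print(f"Masked {mask.mean() * 100:.1f}% of pixels as cloud or cloud shadow")
```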
The presentation provides an overview of the latest evolutions of Sen2Cor, including the support of new L1C products with processing baseline >= 04.00 and the provision of additional L2A quality indicators. The different steps of the Cloud Screening and Scene Classification algorithm are recalled: cloud/snow detection, cirrus detection, cloud shadow detection, pixel recovery and post-processing with DEM information. It also details the latest updates of version 2.10, which make use of the parallax properties of the Sentinel-2 MSI instrument to limit the false detection of clouds above urban and bright targets. Finally, SCL validation results with Sen2Cor 2.10 are included in the presentation.
The recent improvements as well as the current limitations of the SCL algorithm are presented. Some advice is given on configuration choices and on the use of external auxiliary data files.
Bayesian cloud detection is used operationally for the Sea and Land Surface Temperature Radiometer (SLSTR) in the generation of sea surface temperature (SST) products. Daytime cloud detection uses observations at both infrared and reflectance wavelengths. Infrared data have a spatial resolution of 1 km at nadir, whilst the nominal resolution of the reflectance channel data is 500 m. For some reflectance channels, observations are made by a single sensor (Stripe A), whilst others in the near infrared include a second sensor (Stripe B).
Operationally, data at reflectance and infrared wavelengths are transferred independently onto image rasters using nearest neighbour mapping. The reflectance channel observations are then mapped to the infrared image grid by averaging the 2x2 corresponding pixels. This methodology does not achieve optimal collocation of the infrared and visible pixels as it neglects the actual location of the observations, and neglects orphan and duplicate observations.
A new SLSTR pre-processor has been developed that increases the field-of-view correspondence between the infrared and reflectance channel observations. This is beneficial for any application using reflectance and infrared wavelengths together, including for cloud detection.
The pre-processor establishes a neighbourhood map of reflectance channel observations for each infrared pixel. It takes into account orphan pixels excluded when compiling the image raster and ensures that duplicate pixels are not double-counted. It calculates the mean reflectance for a corresponding infrared pixel, using a configurable value of ‘n’ nearest neighbours. The standard deviation of the ‘n’ nearest observations can be calculated in this step, providing an additional ‘textural’ metric that has proved to be of value in the Bayesian cloud detection calculation. The pre-processor can also include data from the Stripe B sensor on request, where these data are available.
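The neighbourhood-mapping step can be sketched as follows: for each infrared pixel location, the n nearest reflectance-channel observations (orphans included) are found with a k-d tree, and their mean and standard deviation are computed. The array names, the Euclidean treatment of coordinates and the choice of n are illustrative; the operational pre-processor is more involved (duplicate handling, Stripe B selection).

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbourhood_stats(ir_lonlat, vis_lonlat, vis_reflectance, n=4):
    """For each infrared pixel location, return the mean and standard deviation of the
    n nearest reflectance-channel observations (orphan observations can simply be
    included in vis_lonlat / vis_reflectance so they are no longer lost)."""
    tree = cKDTree(vis_lonlat)              # build a k-d tree on the 500 m observations
    _, idx = tree.query(ir_lonlat, k=n)     # indices of the n nearest observations per IR pixel
    neighbours = vis_reflectance[idx]       # shape: (n_ir_pixels, n)
    return neighbours.mean(axis=1), neighbours.std(axis=1)

# Illustrative synthetic geometry: 1000 IR pixels, 4000 reflectance observations
rng = np.random.default_rng(0)
ir_lonlat = rng.uniform(0, 1, size=(1000, 2))
vis_lonlat = rng.uniform(0, 1, size=(4000, 2))
vis_refl = rng.uniform(0, 1, size=4000)

mean_refl, texture = neighbourhood_stats(ir_lonlat, vis_lonlat, vis_refl, n=4)
print(mean_refl.shape, texture.shape)
```

The standard deviation returned alongside the mean corresponds to the additional "textural" metric described above.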
We demonstrate the improved collocation of infrared and reflectance channel observations using coastal zone imagery, where steep gradients in temperature and reflectance make it easier to visualise the improved collocation of the observations. We also demonstrate the positive impact that this new pre-processor has on the Bayesian cloud detection algorithm, demonstrating that cloud feature representation is improved.
In the last decade, advances in CPU and GPU performance, the availability of large datasets and the proliferation of machine learning (ML) algorithms and software libraries have made daily use of ML as a tool not only a possibility, but a routine task in many areas.
Unsupervised and supervised classification, precursors to more sophisticated ML algorithms, have been used extensively in many scientific areas and have allowed researchers to recognize patterns, reduce subjective bias in categorization and deal with large datasets. Classification algorithms have been widely used in remote sensing to efficiently identify areas with similar surface cover and scattering characteristics (urban, agricultural, forest, flooded areas, etc.). Indeed, remote sensing is a prime target for developing ML algorithms, as the volume, diversity (more frequency channels, multiple satellites) and availability of freely accessible datasets increase year by year.
The advent of the Copernicus Earth observation programme's Sentinel satellites started a new era in satellite remote sensing. The dataset produced by the Sentinel satellites, a vast archive of remotely sensed images surpassing in volume any previous satellite image database, is freely available to the public. This has allowed remote sensing specialists and geoscientists to train and apply ML models on the data provided by Copernicus to solve a wide range of processing challenges and classification problems that arise when dealing with such volumes of data.
Synthetic Aperture Radar (SAR) is a relatively novel remote sensing technology that allows observation of the Earth's surface in the microwave spectrum. ESA has been a pioneer in using satellite-mounted SAR antennas for microwave Earth observation (ERS-1 and -2, Envisat), and the twin Sentinel-1 A and B satellites continue that tradition as dedicated SAR satellites in the Copernicus fleet.
SAR remote sensing has many advantages over "classical" remote sensing, which operates in and around the visible range of the electromagnetic (EM) spectrum. It is an active remote sensing technique and as such does not depend on external EM wave sources (e.g. the Sun), and the emitted microwaves are not absorbed by cloud cover and other atmospheric phenomena. Furthermore, it is a coherent sensing technique, meaning that both the amplitude and phase of the reflected EM wave are captured. Phase information can be used to create so-called interferograms by subtracting the phase values of a primary SAR image from a secondary one.
The phase difference stored in an interferogram, the interferometric phase, depends on many components, such as the difference in satellite positions when the two images were taken, surface topography, changes in atmospheric and ionospheric conditions, the satellite line-of-sight (LOS) component of surface deformation and other factors. By subtracting the components other than the deformation component it is possible to estimate the surface deformation map of the imaged area. A critical step in processing the interferogram is the so-called phase unwrapping, which restores the 2π phase jumps in the temporal and spatial phase variations, since the phase itself is periodic (wrapped phase).
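The role of phase unwrapping can be illustrated in one dimension: the interferogram only carries the phase modulo 2π, and unwrapping restores the continuous phase by removing the jumps. The sketch below uses NumPy's one-dimensional np.unwrap on a synthetic profile; real interferograms require dedicated two-dimensional algorithms (e.g. branch-cut or minimum-cost-flow methods).

```python
import numpy as np

# A synthetic, smoothly varying "true" phase along one line of an interferogram (radians)
x = np.linspace(0, 1, 500)
true_phase = 14 * np.pi * x**2

# What the interferogram actually stores: the phase wrapped into (-pi, pi]
wrapped = np.angle(np.exp(1j * true_phase))

# 1D unwrapping removes the 2*pi jumps; 2D interferograms need dedicated unwrapping algorithms
unwrapped = np.unwrap(wrapped)

# Up to an integer multiple of 2*pi, the unwrapped phase matches the true phase
offset = np.round((true_phase[0] - unwrapped[0]) / (2 * np.pi)) * 2 * np.pi
print("max error [rad]:", np.max(np.abs(unwrapped + offset - true_phase)))
```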
Phase unwrapping is a non-linear and non-trivial problem. Its success depends on the quality of input interferograms and selected preprocessing step configuration (filters, masking out of incoherent areas, leaving out interferograms from processing).
Many software packages exist that implement some form of phase unwrapping algorithm that have been used successfully in many surface deformation studies (volcano deformation monitoring, detection of surface deformation caused by earthquakes, displacements caused by mining activities, etc.). Despite these successes, phase unwrapping remains a challenge in the field of SAR interferometry (InSAR).
In order to train a ML algorithm a training dataset is necessary, which provides expected outputs to selected inputs. During training a subset of the training database is selected for the actual training of deep neural networks and the rest is used for the validation of that trained algorithm.
ML can be a powerful tool, and many interferogram processing steps (removal of atmospheric phase, phase unwrapping, detection of deformation) could benefit from incorporating it in some form. However, modern ML algorithms require a vast amount of data, and the manual acquisition and labeling of datasets is a cumbersome and tedious task.
Although a substantial amount of interferometric data can be derived from Sentinel-1 A and B SAR images, the (pre)processing and creation of interferograms remains a computationally costly operation. The issue of creating a training dataset of interferograms that can be utilized in various ML frameworks is still unresolved. A perhaps bigger problem is the lack of expected “output” values that are paired with input interferograms (e.g. atmospheric phase delay, unwrapped phase values).
Training on synthetic data is a current trend in ML and applied along with transfer learning and domain adaptation this approach has achieved breakthroughs in various applications. The authors set out to create a software package / library that can be reliably used to generate synthetic interferograms. The package is written in the Python programming language, utilizing its vast ecosystem of scientific libraries. The choice of programming language also allows easy integration with existing ML frameworks available in Python. Different parts of the interferogram generation, such as atmospheric delay and noise generation, as well as the deformation model and its parameters, can be individually configured and replaced by end user defined algorithms, making the code open for extensions.
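A heavily simplified sketch of what such a generator produces is given below: a Gaussian subsidence bowl is converted to line-of-sight phase, combined with a smooth pseudo-atmospheric screen and noise, and wrapped. The parameter values and noise model are placeholders; the actual library exposes these as configurable, replaceable components.

```python
import numpy as np

def synthetic_interferogram(size=256, wavelength=0.0556, max_subsidence=0.05, seed=0):
    """Generate a wrapped synthetic interferogram (radians) from a Gaussian deformation bowl,
    a smooth pseudo-atmospheric phase screen and additive noise. All values are illustrative."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size]

    # Line-of-sight deformation [m]: Gaussian subsidence bowl in the scene centre
    r2 = (x - size / 2) ** 2 + (y - size / 2) ** 2
    los_deformation = -max_subsidence * np.exp(-r2 / (2 * (size / 8) ** 2))

    # Deformation phase: 4*pi/lambda times the LOS displacement (repeat-pass convention)
    defo_phase = 4 * np.pi / wavelength * los_deformation

    # Smooth pseudo-atmospheric screen (linear ramp) plus decorrelation-like noise
    atmo_phase = 2 * np.pi * (0.3 * x / size - 0.2 * y / size)
    noise = 0.5 * rng.standard_normal((size, size))

    # Wrap the total phase into (-pi, pi], as stored in a real interferogram
    return np.angle(np.exp(1j * (defo_phase + atmo_phase + noise)))

ifg = synthetic_interferogram()
print(ifg.shape, ifg.min(), ifg.max())
```

Paired with the known deformation, atmospheric and noise components, such simulated scenes provide the labeled input/output pairs that real interferograms lack.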
Creation of synthetic interferograms can also be utilized in the education and training of future InSAR specialists. By tweaking the configuration of interferogram generation aspiring specialists are able to estimate how a change in different parameters (e.g. strength of atmospheric noise, satellite geometry) changes the interferometric phase and the outcome of phase unwrapping.
Digital Earth Australia (DEA) is a government program that enables government, industry, and academia to more easily make use of Earth observation data in Australia. DEA does this by producing and disseminating free and open analysis-ready data from the Landsat and Sentinel-2 missions. Data are processed, stored and indexed into an instance of the Open Data Cube, enabling API-based access to more than thirty years of Earth observation (EO) data.
Making EO data simply available is not enough. Users need to be able to investigate specific applications. Barriers to applying satellite imagery include uncertainty in how the data can be applied to the application, difficulties in accessing the data, and challenges in analysing the petabytes of available data. The DEA notebooks and tools repository on GitHub ("DEA Notebooks") hosts Jupyter notebooks, Python scripts and workflows for analysing DEA satellite data and its derived products. The repository provides a guide to getting started with DEA and showcases the wide range of geospatial analyses that can be achieved using open-source software including the Open Data Cube and xarray. The DEA Notebooks repository steps users through an introduction to the Python packages needed to analyse data and introduces datasets available through DEA. It provides frequently used code snippets for quick tasks such as creating animations or masking data as well as more complex workflows such as machine learning and production of derived products.
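A minimal sketch of the kind of workflow the notebooks demonstrate is shown below, assuming access to a configured Open Data Cube instance; the product and measurement names are examples and may differ from the currently published DEA product names (they can be checked with dc.list_products() on a live instance).

```python
import datacube

dc = datacube.Datacube(app="dea-notebooks-example")

# Load a year of Landsat surface reflectance over a small area of interest
# (product and measurement names are assumptions, not a definitive DEA configuration)
ds = dc.load(
    product="ga_ls8c_ard_3",
    x=(149.05, 149.17),            # longitude range
    y=(-35.32, -35.24),            # latitude range
    time=("2020-01-01", "2020-12-31"),
    measurements=["nbart_red", "nbart_nir"],
    output_crs="EPSG:3577",
    resolution=(-30, 30),
)

# Simple derived product: median NDVI for the year, as an xarray DataArray
ndvi = (ds.nbart_nir - ds.nbart_red) / (ds.nbart_nir + ds.nbart_red)
print(ndvi.median(dim="time"))
```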
A community of practice has evolved around the DEA Notebooks repository. The repository is regularly maintained and updated and meets clearly defined standards of quality, enabled by templates for contributors. A Wiki and user guide are also provided to assist users with accessing DEA, as well as channels for seeking support. Workflows are built upon and committed back to the repository for other users to benefit from. DEA Notebooks has been utilised to teach multiple University degree-level courses across Australia, underpinned peer-reviewed scientific publications, and facilitated two digital art projects. The DEA Notebooks project evolved to drive documentation and user engagement for the DEA program as a whole and is now a rich resource for new and existing users of Earth observation datasets.
The repository can be accessed at https://github.com/GeoscienceAustralia/dea-notebooks.
The AlpEnDAC (Alpine Environmental Data Analysis Center – www.alpendac.eu) is a platform with the aim of bringing together scientific data measured on high-altitude research stations from the alpine region and beyond. It provides research data management as well as on-demand analysis and simulation services via a modern web-based user interface. Thus, it supports the research activities of the VAO community (Virtual Alpine Observatory, including the major European alpine research stations – www.vao.bayern.de).
Our contribution gives an overview of our (meta-)data management, ingest and retrieval capabilities for Research Data Management (RDM) following the FAIR principles (findable, accessible, interoperable, reusable). Furthermore, we give a technical glimpse of AlpEnDAC’s capabilities regarding “one-click” simulations and the integration of satellite data to allow for side-by-side analysis with in-situ measurements.
We then focus on AlpEnDAC’s on-demand services, which are a principal result of the 2019-2022 development cycle (AlpEnDAC-II). We have implemented Computing-on-Demand (CoD, simulations on a click) and Operating-on-Demand (OoD, remote instrument control, based on measurement events when needed), with more “Service on Demand” (e.g. notifications on measurement events) applications to follow. Data from measurements (or also simulations) are normally ingested via a representational state transfer application programming interface (REST API) into the AlpEnDAC system. This interface is complemented by an asynchronous data-ingest layer, based on a message queue (Apache Kafka) and a series of specialized workers to process the data. For OoD, the data processing path is augmented with an interface to request observations of a FAIM (Fast Airglow IMager) camera, and with an automatic scheduler to optimally execute them. The schedule and the data retrieved according to it remain associated within the AlpEnDAC system, allowing for a complete understanding of the measurement process also in retrospect. All on-demand services are made configurable, as much as possible, via the AlpEnDAC web portal. With these developments, we aim to enable scientists – also the ones with a less computer-centric scope of work – to leverage NRT data collection and processing, as it is already an everyday tool e.g. in the Internet-of-Things sector and in commercial applications.
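As an illustration of the asynchronous ingest layer, the sketch below publishes a measurement record to a Kafka topic from which specialized workers would consume; the broker address, topic name and message schema are assumptions made for illustration only, not the AlpEnDAC interface definition.

```python
import json
from kafka import KafkaProducer

# Hypothetical broker and topic; the real AlpEnDAC deployment details are not reproduced here
producer = KafkaProducer(
    bootstrap_servers="kafka.example.org:9092",
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

measurement = {
    "station": "UFS-Schneefernerhaus",   # illustrative station identifier
    "instrument": "FAIM",
    "timestamp": "2022-03-01T21:15:00Z",
    "variable": "airglow_intensity",
    "value": 512.3,
}

# Workers subscribed to the topic would validate, enrich and store the record asynchronously
producer.send("alpendac.ingest.measurements", measurement)
producer.flush()
```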
The AlpEnDAC platform has been using infrastructure of the German Aerospace Center (DLR) and the Leibniz Supercomputing Centre (LRZ), major players in Europe’s data and computing centre landscape. AlpEnDAC-II is funded by the Bavarian State Ministry of the Environment and Consumer Protection.
The Sentinel-5P/TROPOMI instrument is the first Copernicus sensor that is fully dedicated to measuring atmospheric composition. Since its launch in October 2017, it has provided excellent results that have led to numerous scientific papers on topics related to ozone, air quality and climate. Yet, the potential use of TROPOMI data reaches beyond the direct scientific community.
With support from ESA, Belgium has established the Terrascope platform, hosted by the Flemish Institute for Technological Research (VITO). This so-called Belgian Copernicus Collaborative Ground Segment is a platform that enables easy access to and visualisation of Copernicus data for all societal sectors, the development and implementation of tailored or derived products based on Copernicus measurements and the development of innovative tools.
With support from ESA, BIRA-IASB has cooperated with VITO to implement TROPOMI Level-3 products into Terrascope, with a focus on the generation of global TROPOMI Level-3 NO2 and CO products. Now operational within Terrascope, the system produces Level-3 datasets of daily, monthly, and yearly CO and NO2 columns. Additional features allow for the generation of enhanced statistics (for example the effects of weekends on NO2 levels originating from traffic) and quick generation of dedicated data sets in the case of special events. For both products, the Terrascope platform provides an attractive user experience, with the option to explore areas of interest, compare data for different time frames, and save data and imagery.
In the ESA-supported follow-up project Terrascope-S5P, BIRA-IASB is developing new products for inclusion in Terrascope. After the successful demonstration of the global NO2 and CO products, the service is being extended to global SO2 and CH4 monitoring, an improved NO2 product for Europe (contribution by KNMI) as well as NO2 surface concentrations over the Belgian domain. As such, Terrascope provides an opportunity to develop innovative aspects of the Copernicus products that can be demonstrated on the regional domain before being possibly extended to a larger scale.
This presentation describes the current status of the TROPOMI products in Terrascope, outlines the details of the applied techniques and provides an outlook on future additions.
The Forestry Thematic Exploitation Platform (Forestry TEP) has been developed and made available as an online service to enable researchers, businesses and public entities to efficiently apply satellite data for various forest analysis and monitoring purposes. A key aspect of Forestry TEP is the capability it offers for users to develop and onboard new services and tools on the platform and to share them.
We are building an ecosystem for Earth observation services on Forestry TEP. The core team operating the platform is continuously growing the pool of tools, but even more importantly we want service providers and academia to install their own tools on the platform.
The current offering on the Forestry TEP (https://f-tep.com) includes several open-source processing services created in the original F-TEP development project funded by ESA. These core services enable, e.g., vegetation index calculations, basic forest change monitoring and land cover mapping. The open-source offering also includes the Sen2Cor algorithm (versions 2.8.0 and 2.5.5) for atmospheric corrections, Fmask 4.0 for cloud and cloud shadow detection, pre-processing tools for Sentinel-1 stacking and mosaicking as well as for Sentinel-2 tile combination, and image manipulation and arithmetic services based on GDAL. Additionally, applications with their own graphical user interfaces are available via the browser; this offering includes the SNAP Toolbox, QGIS and Monteverdi, an interface to the Orfeo ToolBox. A highly specialized new offering is ALSMetrics, which derives metrics from airborne laser scanning data in a format that facilitates joint use with Sentinel-2 data.
Several parties have introduced sophisticated tools and services on the platform as proprietary offerings that can currently be accessed via separate licensing agreements. These include the VTT services AutoChange and Probability. AutoChange is a tool for change detection and identification based on hierarchical clustering, while Probability enables estimation of forest characteristics based on local reference data. Some of the proprietary services may later be made more directly available as part of a packaged platform offering.
Forestry TEP is currently being exploited in many significant projects, each of which is producing novel services and tools to be made available on the platform. Services that were largely developed in the EU Horizon 2020 Innovation Action project Forest Flux (https://forestflux.eu/) comprise a seamless processing chain from the estimation of forest structural variables to computing carbon assimilation maps. Key ESA initiatives on the platform include the Forest Digital Twin Earth Precursor (https://www.foresttwin.org/) and the recently launched Forest Carbon Monitoring project (https://www.forestcarbonplatform.org/).
The Developer interface on the platform provides flexible options for creation of new services. Any new service can be utilized by the developer privately or shared to a select group of colleagues or customers. For the widest applicability and benefit, the new services can be made publicly available to all, with a case-by-case agreement concerning licensing. All services on Forestry TEP can be accessed also from outside the platform via the offered REST and Python APIs.
We invite all developers in the forestry domain to participate in the building of a strong ecosystem of services on the Forestry TEP.
Recent years have witnessed a dynamic development of open-source software libraries and tools that deal with the analysis of geospatial data. The European Commission Joint Research Centre (JRC) has released a Python package, pyjeo, as open source under the GNU General Public License (GPLv3). It has been written by and for scientists and builds upon existing open-source software libraries such as the GNU Scientific Library (GSL) and GDAL. Its design allows for easy integration with existing libraries to take full advantage of the plethora of functions these libraries offer. Extra care was taken in selecting the underlying data model to avoid unnecessary copying of data. This minimizes the memory footprint and does not involve time-consuming disk operations. With EO data volumes increasing at an unprecedented pace, this has become particularly important.
A multi-band three-dimensional (3D) data model was selected, where each band represents a 3D contiguous array in C/C++ of a generic data type. The lower-level algorithmic part of the library, where processing performance is important, has been written in C/C++. Parallel computing is introduced using the open-source library OpenMP. Through Simplified Wrapper and Interface Generator (SWIG) modules, the C/C++ functions were ported to Python. Python is an increasingly used programming language within the scientific computing community, with popular libraries dealing with multi-dimensional data processing such as SciPy ndimage and xarray. Important within the context of this work is that Python allows for easy interfacing with C/C++ libraries by providing a C-API to access its NumPy array object. This allows pyjeo to integrate smoothly with packages such as xarray and, by extension, other packages that use the NumPy array object at their core.
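The zero-copy bridging that makes this integration cheap can be sketched with plain NumPy and xarray: wrapping an in-memory band in a DataArray adds labels without duplicating the pixel buffer, which is the same mechanism pyjeo relies on when exposing its bands to the Python scientific stack (the pyjeo call itself is omitted here to avoid guessing its exact signature).

```python
import numpy as np
import xarray as xr

# Stand-in for one band of the multi-band 3D data model: a contiguous (time, y, x) array
band = np.zeros((4, 1024, 1024), dtype=np.float32)

# Wrapping the array in xarray adds dimension labels without copying the pixel buffer
da = xr.DataArray(band, dims=("time", "y", "x"), name="B04")
assert da.values is band or da.values.base is band  # same underlying memory, no duplication

# Modifications through either view are visible in the other, confirming the zero-copy behaviour
da[0, 0, 0] = 1.0
print(band[0, 0, 0])  # -> 1.0
```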
In this talk, we will present the design of pyjeo and focus on how it has been integrated in the JRC Big Data Analytics Platform (BDAP). For instance, we will show how virtual data cubes are created to serve various use cases at the JRC that are based on Sentinel-1 and Sentinel-2 collections. We will also introduce the BDAP as an openEO compatible backend for which pyjeo was used as a basis and where scientists can deploy their EO data analysis workflows without knowing the infrastructure details. Finally, results on optimal parallel processing strategies will be discussed.
Environmental observations from satellites and in-situ measurement networks are core to understanding climate change. Such datasets need to have uncertainty information associated with them to ensure their credible and reliable interpretation. However, this uncertainty information can be rather complex, with many sources of error affecting the final products. Often, multiple measurements are combined throughout the processing chain (e.g. performing temporal or spatial averages). In such cases, it is key to understand error-covariances in the data (e.g., random uncertainties do not combine in the same way as systematic uncertainties). This is where approaches from metrology (the science of measurement) can assist the Earth observation (EO) community to develop quantitative characterisation of uncertainty in EO data. There have been numerous projects aimed at developing (e.g. QA4ECV, FIDUCEO, GAIA-CLIM, QA4EO, MetEOC, EDAP) and applying (e.g. FRM4VEG, FRM4OC, FDR4ALT, FDR4ATMOS) a metrological framework to EO data.
Presented here is the CoMet toolkit ("Community tools for Metrology"), which has been developed to enable easy handling and processing of dataset error-covariance information. This toolkit aims to abstract away some of the complexities in dealing with covariance information. This lowers the barrier for newcomers, and at the same time allows for more efficient analysis by experts (as the core uncertainty propagation does not have to be reimplemented every time). The CoMet toolkit currently consists of a pair of Python modules, which will be described in detail.
The first module, obsarray, provides an extension to the widely used xarray package to interface with measurement error-covariance information encoded in datasets. Although storage of full error-covariance matrices for large observation datasets is not practical, they are often structured to an extent that allows for simple parameterisation. obsarray makes use of a parameterisation method for error-covariance information, first developed in the FIDUCEO project, stored as attributes to uncertainty variables. In this way the datasets can be written/read in a way that this information is preserved.
Once this information is captured, the uncertainties can be propagated from the input quantities to uncertainties on the measurand (the processed data) using standard metrological approaches. The second CoMet Python module, punpy (standing for 'Propagating Uncertainties in Python'), aims to make this simple for users. punpy allows users to propagate obsarray dataset uncertainties through any given measurement function, using either the Monte Carlo (MC) method or the law of propagation of uncertainty, as defined in the Guide to the expression of Uncertainty in Measurement (GUM). In this way, dataset uncertainties can be propagated through any measurement function that can be written as a Python function – including simple analytical measurement functions, as well as full numerical processing chains (which might e.g. include external radiative transfer simulations), as long as these can be wrapped inside a Python function. Both methods have been validated against analytical calculations as well as other tools such as the NIST uncertainty machine.
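A minimal sketch of the Monte Carlo route with punpy follows, using a toy measurement function; the number of MC iterations and the input uncertainties are arbitrary, and the obsarray integration (automatic parsing of the dataset's uncertainty variables) is not shown here.

```python
import numpy as np
import punpy

def measurement_function(gain, counts, dark):
    """Toy measurement function: calibrated radiance from raw counts."""
    return gain * (counts - dark)

# Illustrative input quantities and their (random) uncertainties
gain = np.full(100, 0.12)
counts = np.linspace(200.0, 400.0, 100)
dark = np.full(100, 15.0)

u_gain = 0.01 * gain
u_counts = np.sqrt(counts)
u_dark = np.full(100, 2.0)

# Monte Carlo propagation with 10000 samples (law-of-propagation methods are also available)
prop = punpy.MCPropagation(10000)
u_radiance = prop.propagate_random(
    measurement_function, [gain, counts, dark], [u_gain, u_counts, u_dark]
)
print(u_radiance[:5])
```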
punpy and obsarray have been designed to interface with each other. All the uncertainty information in the obsarray products can be automatically parsed and passed to punpy. A typical approach would be to separately propagate the random uncertainties (potentially multiple components combined), systematic uncertainties and structured uncertainties, and return them as an obsarray dataset that contains the measurand, the uncertainties and the covariance information of the measurand. Jupyter notebooks with tutorials are available. In summary, by combining these tools, handling uncertainties and covariance information has become as straightforward as possible, without losing flexibility.
Underpinning EO-based findings with field-based evidence is often indispensable. However, especially in field work, there are countless situations where access to web-based services like Collect Earth or the Google Earth Engine (GEE) is limited or even impossible, such as in rainforests or deserts across the globe. Being able to visualize Earth observation (EO) time series data offline “in the field” improves the understanding of environmental conditions on the spot, and supports the implementation of field work, e.g., during planning of day trips and communication with local stakeholders. More broadly, there are various cases where EO time series, derived products, and additional geospatial information, like VHR images and cadastral data, exists in local data storages and needs to be visualized. For example, to better understand land cover and timing of land use changes, such as deforestation or agricultural management events, or gradual changes associated with degradation and regrowth.
Several specialized software tools have been developed to support the visualization of EO time series data. However, most of these tools work only on single platforms, with selected input data sources, and with specific response designs. There is a need for flexible tools that can visualize multi-source satellite time series consistently and aid reference data collection, e.g., for training and validation of supervised approaches.
To overcome these limitations, we developed the EO Time Series Viewer, a free and open-source plugin for QGIS (Jakimow et al. 2020). It provides a graphical user interface for an integrated and interactive visualization of the spectral, spatial, and temporal domains of raster time series from multiple sensors. It allows for a very flexible visualization of time series data in multiple image chips relating to (i) different observation dates, (ii) different band combinations, and (iii) across sensors with different spatial and spectral characteristics. This spatial visualization concept is complemented by (iv) spectral- and (v) temporal profiles that can be interactively displayed and compared between different map locations, sensors and spectral bands or derived spectral index formulations.
The EO Time Series Viewer accelerates the collection (“labeling”) of reference information. It provides various short-cuts to focus on areas and observation dates of interest, and to describe them based on common vector data formats. This helps, for example, to create training data for supervised mapping approaches, or to label large numbers of randomly selected points required for accuracy assessments.
Being a QGIS plugin, the EO Time Series Viewer supports a wide range of data formats and can be used across different platforms, offline or in cloud services, in commercial and non-commercial applications, and together with other QGIS plugins, like the GEE Timeseries Explorer, which is specialized in accessing cloud-based GEE datasets.
We will demonstrate the EO Time Series Viewer and its visualization & labeling concepts using a multi-sensor time series of Sentinel-2, Landsat, RapidEye and Pleiades observations for a field study site in the Brazilian Amazon. Furthermore, we will share our experiences in developing within the QGIS ecosystem and give an outlook on future developments of the EO Time Series Viewer.
In the framework of the French research infrastructure Data Terra, the Solid Earth data and services centre named ForM@Ter has developed processing services available on its website.
ForM@Ter aims to facilitate data access and to provide processing tools and value-added products with support for non-expert users. Among the ForM@Ter’s scientific topics, one focuses on surface deformation from SAR and optical data. The associated services are implemented considering the needs expressed by the scientific community to support the use of the massive amount of data provided by satellites missions. This massive influx of data requires new processing schemes, and significant computing and storage facilities not available to every researcher.
The objective of this work is to present the on-demand services GDM-OPT and DSM-OPT, tailored to the exploitation of Sentinel-2 and Pléiades data by researchers.
GDM-OPT (Ground Deformation Monitoring with OPTical image time series) enables the on-demand processing of Sentinel-2 image time series (from PEPS, the French collaborative ground segment for the Copernicus Sentinel programme, operated by CNES). It is offered as three services according to the target scientific application: monitoring of persistent landslide motion; earthquake-triggered crustal deformation; and monitoring of persistent glacier and ice-sheet motion.
DSM-OPT (Digital Surface Models from OPTical stereoscopic very-high resolution imagery) allows the generation of surface models and ortho-images from Pléiades stereo- and tri-stereo acquisitions.
These services are accessible for the French science community on ForM@Ter website and for the international science community and other users on the Geohazards Exploitation Platform (GEP/ESA).
They build on the MicMac (IGN/Matis; Rosu et al., 2015; Rupnik et al., 2016, 2017), GeFolki (ONERA; Brigot et al., 2016), CO-REGIS (CNRS/EOST; Stumpf et al., 2018), MPIC (Stumpf et al., 2014), TIO (CNRS/ISTerre; Bontemps et al., 2017) and FMask (Texas Tech University; Qiu et al., 2019) algorithms. They are deployed on the high-performance infrastructure A2S hosted at the Mesocentre of the University of Strasbourg and were set up with the support of ESA, CNES and CNRS/INSU.
A demonstration of these services will be given as well as a presentation of their architecture, operation and of several use case examples.
OpenAltimetry [1] is an open data discovery, access and visualization platform for spaceborne laser altimetry data, specifically data from the Ice, Cloud and land Elevation Satellite (ICESat) mission and its successor, ICESat-2. Developed under a 2015 NASA ACCESS grant, its intuitive, easy-to-use interface quickly became a favorite method for browsing and accessing ICESat-2 data among both expert and new users. As the popularity of computational notebooks grew, OpenAltimetry began offering APIs for programmatic access to user-requested subsets of ICESat-2 data. NASA’s Distributed Active Archive Center (DAAC) at the National Snow and Ice Data Center (NSIDC) has made the migration of ICESat-2 data into NASA’s Earthdata Cloud [2] a priority, thereby establishing a roadmap for cloud-optimized data services that facilitate wide accessibility and demonstrate value for users in a cloud environment. OpenAltimetry, which was developed independently of NASA’s Earth Observing System Data and Information System (EOSDIS), is likewise being migrated into the Earthdata Cloud, a pathfinding effort in technology infusion for NASA. At the same time, OpenAltimetry continues to add new functionality in response to users’ needs and has prototyped the addition of data from the Global Ecosystem Dynamics Investigation (GEDI) mission. This presentation will highlight the processes employed and the challenges encountered in bringing OpenAltimetry and ICESat-2 data into the cloud.
[1] Khalsa, S.J.S., Borsa, A., Nandigam, V. et al. OpenAltimetry - rapid analysis and visualization of Spaceborne altimeter data. Earth Sci Inform (2020). https://doi.org/10.1007/s12145-020-00520-2
[2] https://earthdata.nasa.gov/eosdis/cloud-evolution
The evaluation of multi-sensor image geolocation and co-registration is a fundamental step of any mission commissioning and in-orbit validation (IOV) phase, allowing possible misalignments between instruments and platform, introduced by vibrations during launch, to be detected. At these stages, information about existing misalignments is crucial to support sensor calibration activities. The image evaluation considers a quantitative analysis of remote sensing data, typically L1b products acquired over different Earth locations, in relation to a reference image composed of reference geographical features (geolocation) or to data produced by another sensor aboard (co-registration).
This work addresses the design, development, verification and operation of the set of tools provided by the Geolocation and Co-registration (GLANCE) toolbox, which will be used during the in-orbit verification activities of the MetOp-SG mission. MetOp-SG is the space segment of EPS-SG (EUMETSAT Polar System Second Generation) and comprises two spacecraft (MetOp-SG A and B). During the MetOp-SG IOV activities, the geolocation and co-registration checks for which GLANCE will be used focus on evaluating the L1b data provided by Sentinel-5, 3MI, METimage, MWS and IASI-NG (MetOp-SG A), and MWI and ICI (MetOp-SG B). GLANCE enables several combinations of multiple sensor images for the purposes of geolocation (3MI, MWS, MWI, ICI) and co-registration (3MI vs Sentinel-5, and METimage vs Sentinel-5). Although originally designed for MetOp-SG, GLANCE leverages a generic component-based architecture to provide a comprehensive toolbox for image processing functionality (i.e. image convolution, edge detection, thresholding) which can easily be extended to support other sensors and/or missions. GLANCE integrates several capabilities available in open-source packages (e.g. OpenCV, GDAL) with specifically designed functionality, such as the generation of reference images based on geographical characterization data.
Considering that each sensor image has specific processing requirements, GLANCE enables the user to compose multiple processing steps into a customizable processing chain; in order to perform batch processing of images, GLANCE autonomously applies the transformations specified by a processing chain, evaluating the existence of misalignments without human intervention. These automatic processing capabilities are necessary, considering the reduced time available for IOV activities. Moreover, GLANCE design and development have taken into consideration the runtime performance needed to process the expected IOV images, and the toolbox takes advantage of parallelism whenever supported by the hardware.
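The composition of generic image-processing steps into a configurable chain can be sketched with OpenCV primitives of the kind the toolbox integrates (convolution/smoothing, thresholding, edge detection); the chain definition below is purely illustrative and is not the GLANCE configuration format.

```python
import cv2
import numpy as np

# Generic processing steps of the kind the toolbox composes (parameters are illustrative)
def smooth(img):
    return cv2.GaussianBlur(img, (5, 5), 0)

def binarize(img):
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def edges(img):
    return cv2.Canny(img, 50, 150)

def run_chain(image, steps):
    """Apply a configurable sequence of processing steps to an image without human intervention."""
    for step in steps:
        image = step(image)
    return image

# Stand-in for an L1b image band rescaled to 8-bit
band = (np.random.default_rng(1).random((512, 512)) * 255).astype(np.uint8)
result = run_chain(band, [smooth, binarize, edges])
print(result.shape, result.dtype)
```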
After describing the context of operations during the IOV phase, including the interaction with other ground segment components, the GLANCE toolbox architecture and design will be introduced, along with a description of the processing steps performed while evaluating the images of the multiple sensors. The catalogue of processing capabilities and algorithms will be presented together with preliminary results.
The production of precise geospatial information has become a major challenge for many applications in various fields such as agriculture, environment, civil or military aviation, geology, cartography, marine services, urban planning, natural disasters, etc.
These applications would greatly benefit from both automation and Big Data scalability to increase work efficiency as well as the throughput, quality, and availability of final products.
Our ambition is to answer these difficulties by developing a Jupyter-based, AI oriented platform for Earth Observation (EO) data processing whose architecture offers a fully automated chain of production of highly detailed images.
At the core of the platform lies the Virtual Research Environment (VRE), a collaborative prototyping environment based on JupyterLab that relies on web technologies and integrates the tools required by scientists and researchers. The VRE allows selecting, querying, and performing in-depth analysis on 2D and 3D geographic data via a simple web interface, with performance and reactivity that make it possible to quickly display large EO products within a web browser. The environment is not solely based on Jupyter: it also offers an IDE (Code Server) and a remote desktop for using specific software such as QGIS.
The users can therefore execute specific software remotely to manipulate remote data without any data transfer from distant repositories to their computers.
The objective is to offer a turnkey service that facilitates access to data and computing resources. All the major required tools and libraries are open source and available for scientific analysis (e.g. sklearn), geographic data processing (e.g. Orfeo ToolBox, OTB), deep learning (e.g. Pytorch), 2D and 3D plotting, etc. The installation, configuration, and compatibility of this palette of tools are ensured at the platform level, which removes hardware and software constraints for end users, who can concentrate on their scientific work instead of resolving dependency conflicts.
To ease access to input products, EODAG, an open-source Python SDK for searching, aggregating and downloading remote images, has been integrated into the JupyterLab environment via a plugin that allows searching for products by drawing an ROI on an interactive map with specific search criteria. With EODAG, the user can also directly access the pixels: for example, a specific band of a product at a given resolution and geographic projection. This feature improves productivity and lowers infrastructure costs by drastically reducing download time, bandwidth usage, and the user's disk space.
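As a rough illustration of this workflow, the following minimal sketch shows an EODAG search and download from a notebook; it assumes a provider has been configured with credentials, the product type, dates and bounding box are placeholders, and the exact return type of search() varies between eodag versions.

from eodag import EODataAccessGateway

dag = EODataAccessGateway()

# Search Sentinel-2 L1C products over a placeholder bounding box and period.
results = dag.search(
    productType="S2_MSI_L1C",
    geom={"lonmin": 1.0, "latmin": 43.0, "lonmax": 2.0, "latmax": 44.0},
    start="2021-06-01",
    end="2021-06-30",
)

# Older eodag versions return a (results, estimated_count) tuple.
if isinstance(results, tuple):
    results = results[0]

# Download the first matching product into the user's online home folder.
if len(results) > 0:
    product_path = dag.download(results[0])
    print("Product stored at", product_path)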
Once the products have been selected and downloaded into their online home folder, the user can rapidly prototype and execute scientific analyses and computations using JupyterLab: from simple statistics to complex deep-learning modeling and inference with libraries such as Pytorch or Tensorflow and associated tools such as tensorboard to help measure and visualize the machine learning workflow directly from the web interface.
Our platform is not only a prototyping tool: processing or transforming EO products often rely on complex algorithms that require heavy computation resources. To improve their efficiency, we offer computation parallelism or distribution (on Cloud, or on premise even without Kubernetes) using technologies such as Dask for computation parallelism or Dask Distributed and Ray for distributed computing. The main advantage of Dask is that it is a Python framework that relies mainly on the most widely used data analysis tools and technologies (e.g. pandas, NumPy). Therefore, it allows researchers to reuse existing code and benefit from multiple nodes computing with very little programming effort. The Dask dashboard is available within the web browser, or as a frame into a Jupyter Notebook, to monitor the status of workers (CPU, memory, ...) or tasks and to check Dask’s graphs execution in real-time.
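For instance, a minimal Dask sketch of this pattern might look as follows; the local cluster and the synthetic array are placeholders standing in for the platform's scheduler and for a real EO raster stack.

import dask.array as da
from dask.distributed import Client

# Start a local cluster; on the platform this could instead connect to a
# Dask Distributed scheduler running on the cloud or on an on-premise cluster.
client = Client(processes=False)

# A large array split into chunks, standing in for a stack of EO rasters.
x = da.random.random((20000, 20000), chunks=(1000, 1000))

# Lazily defined computation, executed in parallel only on compute();
# progress can be followed in the Dask dashboard.
standardized = (x - x.mean()) / x.std()
print(standardized.mean().compute())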
When their analyses are completed, the users can explore and visualize data in several ways. From a Jupyter Notebook with standard visualization libraries for regular 2D or 3D products or from the remote desktop using e.g., QGIS for geographic data.
For larger products that cannot be properly handled by these libraries (e.g., matplotlib, Bokeh), we have developed and integrated into the platform specific libraries that allow displaying both 2D (QGISlab) and 3D (view3Dlab) products in Jupyter Notebooks in a smooth and reactive way.
Finally, the users can share their developments and communicate their analysis to third parties by transforming Jupyter Notebooks into operational services with “Voilà”. “Voilà” converts notebooks into interactive dashboards including HTML widgets, served from a webpage that can be interacted with using a simple browser.
The platform targets both cloud and high-performance computing centers deployments. It is used today in production mode for example in the AI4GEO project and at the French space agency (CNES).
The Cloud deployment of the VRE has also been done for the EO Africa project, which fosters an African-European R&D partnership, facilitating the sustainable adoption of Earth Observation and related space technology in Africa, following an African user-driven approach with a long-term (>10 years) vision for the digital era in Africa. Thanks to the use of containerization technologies, our VRE can be deployed easily on any DIAS and benefit from its infrastructure and data access. For EO Africa, Creodias has been selected: it provides direct access to a large amount of Earth Observation (EO) data and meets all the requirements to deploy our platform. Throughout the project life cycle, multiple versions of the VRE will be created to fulfill the needs of various events.
The platform was used during a hackathon in November 2021 with up to 40 participants, each of them with access to their own instance of the VRE, ready to visualize and transform EO data using Jupyter-based tools. Each participant could work independently or collaborate by sharing their work with their own team directly within the VRE thanks to a shared directory. On top of that, the VRE provides another tool to share, save and keep the history of all the work done by people involved in the EO Africa project.
For more than 20 years, the Centre Spatial de Liège (CSL) has developed Synthetic Aperture Radar software solutions in a suite called CIS (CSL InSAR Suite). CIS is command-line software written in C, dedicated to the processing of synthetic aperture radar data and allowing the production of analysis-ready outputs such as displacement maps, flood extent, or fire monitoring. Advanced methods are also included, making CIS distinct from other competing SAR suites.
With more than 500,000 registered users, the open-access SentiNel Application Platform (SNAP), developed since 2015 by Brockmann Consult (Hamburg, Germany), has become the standard tool for processing remote sensing data. It was originally tailored to Sentinel 1-3 images, but now accommodates data from most common satellite missions, including non-ESA missions (e.g., ICEYE, NOVASAR). The largest part of the SNAP user base belongs to the radar (Sentinel-1) community. SNAP integrates classical remote sensing operators, including data reading, co-registration, calibration, raster algebra, and so on. A particularity of major practical and strategic interest is that SNAP is fully open access, allowing direct access to the core code and its modification. Moreover, SNAP supports the inclusion of plugins, with a cookbook available for developers.
This abstract reports on the progressive inclusion of the CIS software modules into the SNAP open-source software as plugins. To fulfill this objective, we are using the Standalone Tool Adapter of SNAP to include external command-line functionalities. The role of the tool adapter is to create the paths that link the external application to the SNAP software. We started the migration using a series of simple to complex tasks in different programming languages (C/C++, Python, and Matlab).
CIS plugins in SNAP will be accessible from a new dedicated menu in the user interface. So far, we have integrated coherence tracking and multiple-aperture interferometry into SNAP. Additional tools will be included in future developments.
During the event, the different tools will be presented. Interested scientists are invited to contact the authors directly to request help with installing the plugins at the session.
As the size and complexity of the Earth observation data catalogue grows, the ways in which we interface with it must adapt to accommodate the needs of end users, both research-focussed and operational. Consequently, since 2020 EUMETSAT have introduced a suite of new data services to improve the ability of users to view, access and customise the Earth observation data catalogue they provide. These services, which are now operational, offer both GUI- and API-based interfaces and allow fine-grained control over how users interact both with products and with the collections they reside in. They include: i) the new implementation of the EUMETView online mapping service (OMS), ii) the EUMETSAT Data Store for data browsing, searching, downloading and subscription, and iii) the Data Tailor Web Service and standalone tool for online and local customisation of products.
From early 2022, these services will also support the dissemination of the EUMETSAT Copernicus Marine Data Stream, including the Level-1 and Level-2 marine products from both the Sentinel-3 and Sentinel-6 missions at both near real-time and non-time-critical latency.
Here, we give an overview of the capability of these data services, with examples of how to use them via web interfaces and, in an automated fashion via APIs. These examples will focus on interaction with the Copernicus marine products provided by EUMETSAT. In addition, we will outline the tools and resources that are available to assist users in incorporating these services into their workflows and applications. These include online user guides, python libraries and command line approaches to facilitate data access, and a suite of self-paced training resources and courses. This poster presentation will include demonstrations of the services, information on plans and schedules for the inclusion of future data streams, and the opportunity for new and experienced users to ask questions and give feedback.
Pangeo is first and foremost an inclusive community promoting open, reproducible and scalable science. This community provides documentation, develops and maintains software, and deploys computing infrastructure to make scientific research and programming easier.
There is no single software package called “Pangeo”; rather, the Pangeo project serves as a coordination point between scientists, software, and computing infrastructure.
Pangeo is based around the Python programming language and the scientific Python software ecosystem. The Pangeo stack is an agile collection of open-source Python tools which, when combined, enable efficient and flexible distributed processing of large geospatial datasets, so far primarily used in the ocean, weather, climate, and remote sensing domains but equally relevant throughout the whole geospatial field.
The Pangeo software ecosystem involves open source tools such as xarray, an analysis toolkit built on the NetCDF data model; Zarr for cloud-optimised data storage; Dask, a framework for parallel computing; and Jupyter for user interaction with remote computing systems.
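A minimal sketch of how these pieces typically fit together is shown below; the Zarr store path and variable name are placeholders, and reading from object storage additionally assumes a filesystem backend such as s3fs.

import xarray as xr

# Open a (hypothetical) Zarr store as a Dataset backed by Dask arrays;
# chunks={} reuses the chunking stored on disk.
ds = xr.open_zarr("s3://example-bucket/sst-cube.zarr", chunks={})

# Lazily defined reduction over the time dimension; nothing is loaded
# until compute() is called, and the work is parallelised by Dask.
monthly_mean = ds["sst"].groupby("time.month").mean("time")
print(monthly_mean.compute())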
The Pangeo tools can be adapted to meet a wide range of different usage scenarios and be deployed on many different architectures. The community is focused on acting as a coordinating point between scientists and engineers, software and computing infrastructure.
In this presentation we would like to showcase real-world applications of the Pangeo stack and discuss with all stakeholders how Pangeo can be a part of the European approach to geospatial “Big Data” processing that is sustainable in the long term, inclusive in that it is open to everyone, flexible and open enough to allow us to smoothly move from one platform to another.
Come and learn about a pace-setting, fully open source initiative that is already at the core of many data cube implementations and is gathering the European community to participate in this global initiative. Pangeo (https://pangeo.io/) has a huge potential to become a common gateway able to leverage a wide variety of infrastructures and data providers.
Satellite SAR interferometry (InSAR) is a well-established technique in Earth Observation that is able to monitor ground displacement with a high precision (up to mm/year), combining high spatial resolution (up to a few m) and large coverage capabilities (up to continental scale) with a temporal resolution from a few days to a few weeks. It is used to study a wide range of phenomena (e.g. earthquakes, landslides, permafrost, volcanoes, glaciers dynamics, subsidence, building and infrastructure deformation, etc.).
For several reasons (data availability, non-intuitive radar image geometry, complexity of the processing, etc.), InSAR has long remained a niche technology and few free open-source tools have been dedicated to it compared to the widely-used, multi-purpose optical imagery. Most tools are focused on data processing (e.g. ROI_PAC, DORIS, GMTSAR, StaMPS, ISCE, NSBAS, OTB, SNAP, LICSBAS), but very few are tailored to the specific visualization needs of the different InSAR products (interferograms, networks of interferograms, data cubes of InSAR time series). Similarly, generic remote-sensing or GIS software like QGIS is also limited when used with InSAR data. Some visualization tools with dedicated InSAR functionality, like the pioneering MDX software (provided by the Jet Propulsion Lab, https://software.nasa.gov/software/NPO-35238-1), were designed to visualize a single radar image or interferogram, but not large datasets. The ESA SNAP toolbox also offers nice additional features to switch from radar to ground geometry.
However, new space missions, like the Sentinel-1 mission of the European programme COPERNICUS with a systematic background acquisition strategy and an open data policy, provide unprecedented access to massive SAR datasets. These new datasets make it possible to generate networks of thousands of interferograms over the same area, from which time-series analysis results in a spatio-temporal data cube: a layer of this data cube is a 2D map that contains the displacement of each pixel of an image relative to the same pixel in the reference-date image. A typical data cube size is 4000x6000x200, where 4000x6000 are the spatial dimensions (pixels) and 200 is a typical number of images taken since the beginning of the mission (2014). The aforementioned tools are not suited to managing such large and multifaceted datasets.
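A back-of-the-envelope calculation in Python illustrates why such cubes are hard to handle interactively; the single-precision storage assumption is ours.

# Memory footprint of one displacement cube stored as float32.
nx, ny, nt = 4000, 6000, 200
bytes_per_value = 4  # float32
size_gib = nx * ny * nt * bytes_per_value / 1024**3
print(f"{size_gib:.1f} GiB")  # roughly 17.9 GiB, more than a typical workstation can hold in memory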
In particular, fluid and interactive data visualization of large, multidimensional datasets is non-trivial. While data cube visualization is a more generic problem and an active research topic in EO and beyond, some specifics of InSAR (radar geometry, wrapped phase, relative measurement in space and in time, multiple types of products useful for interpretation, etc.) call for a new, dedicated visualization tool.
We started the InSARviz project with a survey of expert users in the French InSAR community covering different application domains (earthquake, volcano, landslides), and we identified a strong need for an application that allows to navigate interactively in spatio-temporal data cubes.
Some of the requirements for the tool are generic (e.g., handling of big datasets, flexibility with respect to the input formats, smooth and user-driven navigation along the cube dimensions) and others are more specific (relative comparison between points at different locations, selection of a set of pixels and the simultaneous visualization of their behavior in both time and space, visualization of the data in radar and ground geometries, etc.).
To meet those needs we designed the InSARViz application with the following characteristics:
- A standalone application that takes advantage of the hardware (e.g. GPU, SSD storage, the capability to run on a cluster). We chose the Python language for its well-known advantages (interpreted, readable, large community) and we use Qt for the graphical user interface and OpenGL for hardware graphics acceleration.
- Using the GDAL library to load the data, which allows handling all the input formats managed by GDAL (e.g. GeoTIFF); see the sketch after this list. Moreover, we designed a plug-in strategy that allows users to easily manage their own custom data formats.
- Taking advantage of the Python/Qt/OpenGL stack to ensure efficient user interaction with the data. For example, the temporal displacement profile of a point is drawn on the fly while the mouse hovers over the corresponding pixel. The "on the fly" feature allows the user to identify points of interest. The user can then enter another mode in which they can select a set of points. The application will then draw the temporal profiles of the selected points, allowing a comparison of their behavior in time. This feature can be used when studying earthquakes, as users can select points across a fault, giving a general view of the behavior of the phenomenon at different places and times.
- A multiple-window design allows the user to visualize data in radar geometry and in standard map projection at the same time, and also to localize a zoomed-in area on the global map. A layer management system is provided to quickly access files and their metadata.
- Visualization tools commonly use aggregation methods (e.g. smoothing, averaging, clustering) to drastically accelerate image display, but these induce observation and interpretation biases that are detrimental to the user. To avoid those biases, the tool focuses on staying true to the original data and on allowing the user to customize the rendering manually (colorscale, outlier selection, level of detail).
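The sketch below illustrates the GDAL-based loading step mentioned in the list above; it assumes the displacement cube is stored as a multi-band GeoTIFF with one band per date, and the file name and pixel coordinates are placeholders rather than the actual InSARviz plug-in interface.

from osgeo import gdal
import numpy as np

# Open the (hypothetical) displacement cube: one raster band per acquisition date.
ds = gdal.Open("displacement_cube.tif")
n_dates = ds.RasterCount

# Read the time series of a single pixel (row, col) band by band, which is
# what an on-the-fly temporal profile plot needs.
row, col = 1200, 3400
profile = np.array([
    ds.GetRasterBand(b + 1).ReadAsArray(col, row, 1, 1)[0, 0]
    for b in range(n_dates)
])
print(profile.shape)  # (n_dates,)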
In our road map, we also plan to develop a new functionality to visualize interactively a network of interferograms.
We plan to demonstrate the capabilities of the InSARviz tool during the symposium.
The InSARviz project was supported by CNES (with a focus on Sentinel-1) and by CNRS.
In forest monitoring, multispectral optical satellite data have proven to be a very effective data source in combination with time-series analyses. In many cases, however, optical data have certain shortcomings, especially regarding the presence of clouds. Electromagnetic waves in the microwave spectrum can penetrate clouds, fog and light rain and are not dependent on sunlight. Since the launch of the Sentinel-1 satellite in 2014, providing freely available synthetic aperture radar (SAR) data in C-band, interest in SAR data has started to grow, and new methods have begun to be developed. After the launch of the second satellite, Sentinel-1B, in 2016, a six-day repeat cycle at the equator was achieved, while in temperate regions the temporal resolution can be 2-4 days thanks to orbit overlap. On the other hand, when processing a large amount of data in time-series analyses, it is necessary to use tools that can process them effectively and quickly enough, e.g., cloud-based platforms like Google Earth Engine (GEE). However, when analyzing forests over mountainous terrain, we can encounter a problem caused by the side-looking geometry of SAR sensors combined with the effects of terrain. To correct or normalize the effect of terrain, we can use, for example, the best known and most used method for this purpose, the Radiometric Terrain Correction developed by David Small. However, neither this method nor any other terrain correction method was available in GEE, so we set out to create an alternative method for this platform. Based on the findings that there is a linear relationship between the local incidence angle (LIA) and backscatter and that different land cover types exhibit different relationships, we developed an algorithm called Land cover-specific local incidence angle correction (LC-SLIAC) for the GEE platform. Using the combination of the CORINE Land Cover and Hansen et al.'s Global Forest Change databases, a wide range of different LIAs for a specific forest type can be generated for each scene. The algorithm was developed and tested using Sentinel-1 open access data, the Shuttle Radar Topography Mission (SRTM) digital elevation model, and the CORINE Land Cover and Hansen et al.'s Global Forest Change databases. The developed method was created primarily for time-series analyses of forests in mountainous areas. LC-SLIAC was tested in 16 study areas over several protected areas in Central Europe. The results after correction by LC-SLIAC showed a reduction of the variance and range of backscatter values. A statistically significant reduction in variance (of more than 40%) was achieved in areas with an LIA range >50° and an LIA interquartile range (IQR) >12°, while in areas with a low LIA range and LIA IQR, the decrease in variance was very low and statistically not significant. Six case studies with different LIA ranges were further analyzed in pre- and post-correction time series. Time series after the correction showed a reduced fluctuation of backscatter values caused by different LIAs in each acquisition path. This reduction was statistically significant (with up to 95% reduction of variance) in areas with a difference in LIA greater than or equal to 27°. LC-SLIAC is freely available on GitHub and GEE, making the method accessible to the wide remote sensing community.
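The core idea of a land-cover-specific linear LIA correction can be sketched in a few lines of Python; this is an illustrative NumPy version under our own simplifying assumptions, not the actual GEE implementation of LC-SLIAC.

import numpy as np

def lia_correction(sigma0_db, lia_deg, reference_lia_deg=40.0):
    # Least-squares fit of the linear relationship sigma0 = a + b * LIA for
    # pixels of one land-cover class, then normalisation of all pixels of
    # that class to a common reference angle.
    b, a = np.polyfit(lia_deg.ravel(), sigma0_db.ravel(), 1)
    return sigma0_db - b * (lia_deg - reference_lia_deg)

# Synthetic forest pixels: backscatter decreasing with LIA plus noise.
rng = np.random.default_rng(0)
lia = rng.uniform(20, 70, 10_000)
sigma0 = -10.0 - 0.1 * lia + rng.normal(0, 0.5, lia.size)

corrected = lia_correction(sigma0, lia)
print(np.var(sigma0), np.var(corrected))  # the variance should drop after correction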
After requests from the GEE community, a new version of the algorithm, LC-SLIAC_global, was developed and uploaded to the GitHub repository; it can be used globally, not only for countries in the European Union, by relying on the Copernicus Global Land Cover Layers. Currently we are testing the LC-SLIAC algorithm in forests in tropical areas (in Vietnam), and the next plans are to compare the results achieved in temperate and tropical forests, to compare the results of LC-SLIAC with similarly oriented methods, and to apply it to long-term time-series analysis of forest disturbances and subsequent recovery phases. We then plan to explain the reasons for the short-term fluctuations of backscatter in the time series, i.e. to test the influence of external and internal factors, and to test radar polarimetric indices for change detection in long-term time-series analyses.
Note: the original study based on the LC-SLIAC algorithm (except for the global version) was published in Remote Sensing journal (DOI: https://doi.org/10.3390/rs13091743).
Sentinel-1, the SAR satellite family of the Copernicus program, provides the scientific community with global and recurring Earth Observation data for free. However, SAR images are subject to speckle, a form of noise that makes visual interpretation difficult.
By compensating for this drawback and leveraging the strengths of SAR imaging, it is possible to detect structures hidden by a forest canopy, even when optical imagery yields no results.
Speckle is generally reduced using spatial techniques, like multi-looking or spatial filtering. However, these decrease the (already poor) spatial resolution of the image. Temporal speckle filtering is an alternative: a temporal mean over a (small) stack of images of the same scene will drastically reduce speckle without any degradation of the spatial resolution. Large enough structures should then be visible even when under a forest canopy.
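A minimal sketch of this temporal filtering step is given below, assuming the co-registered backscatter images have already been loaded into a (time, rows, cols) NumPy stack in linear intensity; the synthetic data only stand in for real Sentinel-1 acquisitions.

import numpy as np

def temporal_mean_filter(stack_linear):
    # Averaging N looks in time reduces speckle variance roughly by 1/N
    # without degrading the spatial resolution of each image.
    return stack_linear.mean(axis=0)

# Example with a synthetic 12-image stack of speckle-like intensities.
rng = np.random.default_rng(1)
stack = rng.gamma(shape=1.0, scale=1.0, size=(12, 256, 256))
filtered = temporal_mean_filter(stack)
print(stack.std(), filtered.std())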
Additionally, when trying to detect buildings, the contrast between even small static structures and variable targets (like the forest canopy) is increased. This further demonstrates that in this context, temporal speckle filtering is an improvement on spatial filtering.
By then computing the difference between the ascending and descending points of view of Sentinel-1, it is possible to further highlight hidden buildings. The technique colours western- and eastern-facing parts of structures and terrain (i.e., positive and negative differences) with different colours, while flat horizontal surfaces (i.e., near-zero difference) appear in yet another colour.
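The comparison can be sketched as follows, assuming two temporally averaged, co-registered backscatter images in dB; the diverging colour map and the value range are our own illustrative choices.

import numpy as np
import matplotlib.pyplot as plt

def asc_desc_composite(asc_db, desc_db):
    # Positive values: facets better seen from the ascending pass (and vice versa);
    # values near zero correspond to flat horizontal surfaces.
    diff = asc_db - desc_db
    plt.imshow(diff, cmap="RdBu", vmin=-5, vmax=5)
    plt.colorbar(label="ascending - descending backscatter [dB]")
    plt.show()
    return diff

# Synthetic example standing in for two temporally filtered Sentinel-1 means.
rng = np.random.default_rng(2)
asc = rng.normal(-12.0, 1.0, (256, 256))
desc = rng.normal(-12.0, 1.0, (256, 256))
asc_desc_composite(asc, desc)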
The technique was used over several known archaeological sites in the Guatemalan jungle. Nakbe, in particular, illustrates well the value of our method. Optical images show little indication of the presence of structures, except possibly the top of one of the pyramids and what might be a clearing. Once processed, the SAR image reveals quite clearly two large buildings and several small ones.
A map of the site found in an archaeological paper confirms the presence and positions of the structures, but also that not all are detected. This may be due to the state of conservation of the different buildings: the map might be representing the site as it was when it was built instead of as it is now.
The impact of anthropogenic climate change and pressures on water resources will be significant in the oases of the Northern Sahara but there is a paucity of detailed records and a lack of knowledge of traditional water management approaches in the long term. Landscapes emerge through complex, interrelated natural and cultural processes and consequently encompass rich data pertaining to the long-term interactions between humans and their environments. Landscape heritage plays a crucial role in developing local identities and strengthening regional economic growth.
Remote sensing technologies are increasingly being recognised as effective tools for documenting and managing landscape heritage, especially when used in conjunction with archaeological data. However, proprietary software licences limit broader community uptake and implementation. Conversely, FOSS (free and open-source software) geospatial data and tools represent an invaluable alternative, mitigating the need for software licensing and data acquisition, a critical barrier to broader participation. Freeware cloud computing services (e.g. Google Earth Engine, GEE) enable users to process data and create outputs without significant investment in hardware infrastructure. The GEE platform combines a multi-petabyte catalogue of geospatial datasets and provides a library of algorithms and a powerful application programming interface (API). The highest resolution available in GEE (up to 10 m/pixel) is offered by the Copernicus Sentinel-2 satellite constellation, which represents an invaluable free and open data source to support sustainable and cost-effective landscape monitoring. In this research, GEE has been employed via the Python API in Google Colaboratory (commonly referred to as "Colab"), a Python development environment that runs in the browser using Google Cloud. Python has proven to be the most compatible and versatile programming language as it supports multi-platform application development, and it is continuously improved thanks to the implementation of new libraries and modules.
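A minimal sketch of this GEE-in-Colab workflow is given below; the region, date range and the simple NDVI index are placeholders standing in for the full Desertification Degree Index computation described later, and an Earth Engine account is assumed.

import ee

ee.Authenticate()   # interactive sign-in when running in Colab
ee.Initialize()

# Placeholder bounding box over the study region.
region = ee.Geometry.Rectangle([-6.5, 29.5, -4.0, 31.5])

# Cloud-filtered Sentinel-2 surface reflectance composite for one season.
composite = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterBounds(region)
    .filterDate("2021-06-01", "2021-09-30")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .median()
)

# Simple spectral index (NDVI) as a stand-in for the DDI calculation.
ndvi = composite.normalizedDifference(["B8", "B4"]).rename("NDVI")
print(ndvi.reduceRegion(ee.Reducer.mean(), region, 100).getInfo())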
The GEE-enabled Python approach used in this research aims to assess the desertification rate in the oasis-dominated area of the Ouarzazate-Drâa-Tafilalet region of Morocco. Desertification is a worldwide environmental problem and is one of the most decisive changing factors in the Moroccan landscape, especially in the oases in the south-eastern part of the country. This region is well known for its oasis agroecosystems and the earthen architecture of its Ksour and Kasbahs, where the oases have been supplied by a combination of traditional water management systems including 'seguia' canals and 'khattara' (groundwater collecting tunnels). The survival of the unique and invaluable landscape heritage of the region is threatened by several factors such as the abandonment of traditional cultivation and farming systems, overgrazing and increased human pressure on land and water resources. In addition, the Sahara's intense natural expansion is rapidly changing the landscape heritage of the region.
The free and open-source Copernicus Sentinel-2 dataset and freeware cloud computing offer considerable opportunities for landscape heritage stakeholders to monitor changes. In this paper, a complete FOSS cloud procedure was developed to map the degree of desertification in the Drâa-Tafilalet region between 2015 and 2021. The Python protocol applies spectral index and spectral decomposition techniques to determine the Desertification Degree Index (DDI) and to visually assess the effect of climate change on the landscape heritage features in the area. This has been investigated and validated through field visits, most recently in November 2021.
The development of FOSS-cloud procedures such as those described in this study could support the conservation and management of landscape heritage worldwide. In remote areas or where local heritage is threatened due to climate change or other factors, FOSS-cloud protocols could facilitate access to new data relating to landscape archaeology and heritage.
Remote sensing technologies and data products play a central role in the assessment, monitoring and protection of archaeological sites and monuments. Their importance will only increase, as the correlated effects of climate change, socioeconomic conflicts and unmitigated land use are set to increase pressure on much of the world's known and buried archaeological heritage.
In this context, the declassified satellite imagery produced by the U.S. CORONA missions (late 1950s to early 1970s) is of particular value. Not only is it in the public domain and obtainable at very low cost (both key factors for disciplines as starved for resources as archaeology and cultural heritage management), but it also represents photographic memories of some of mankind's oldest centres of civilization, prior to the full impact of industrial agriculture and modern infrastructural developments. In some cases, these images are of spectacular quality, portraying ancient sites and monuments of the Near and Middle East, Central Asia and North Africa before the advent of modern irrigation, the construction of hydro dams, urban sprawl and other processes that would inevitably damage or destroy much of the global archaeological record.
While the value of historical satellite imagery has been recognized for a long time, processing and providing these precious sources of information at a ready-to-use level (i.e. as georeferenced and orthorectified data products) has long been confined to local and regional case studies. After all, there is little commercial value in the images themselves, and customized solutions are required to compensate for the extreme geometric distortions produced by the panoramic cameras of the CORONA missions.
More recently, however, open source GIS solutions have been developed that allow efficient processing and publication of CORONA scene images. These developments were made possible by a cooperation between the German Archaeological Institute and the German GIS company mundialis GmbH, with generous funding by the Federal Foreign Office of Germany, resulting in an implementation of the efficient orthorectification of declassified CORONA satellite scenes in open source GRASS GIS that has been thoroughly tested and is now used for mass analysis of declassified CORONA satellite scenes. The long-term aim of these investments is to provide open methods, tools and data products that will establish CORONA and other sources of declassified imagery as convenient baseline products in the domains of archaeology and cultural heritage management.
Considered by the United Nations Educational, Scientific and Cultural Organization (UNESCO) as being "irreplaceable sources of life and inspiration", cultural and natural heritage sites are essential for the local communities and worldwide, hence their safeguarding has a strategic importance for encouraging a sustainable exploitation of cultural properties and creating new social opportunities. Considering the large spectrum of threats (for example, climate change, natural and anthropogenic hazards, air pollution, urban development), cultural heritage requires uninterrupted monitoring based on a combination of satellite images having adequate spatial, spectral and temporal resolution, in-situ data and a broad-spectrum of ancillary data such as historical maps, digital elevation models and local knowledge. To date, Earth Observation (EO) data proved to be essential for the discovery, documentation, mapping, monitoring, management, risk estimation, preservation, visualization and promotion of cultural heritage. In-situ data are valuable for assessing the local conditions affecting the physical fabric (for example, wind, humidity, temperature, radiation, dust, micro-organisms), while ancillary data contribute to thorough analyses and support the correct interpretation of the results. Therefore, a reliable systematic monitoring system incorporates multiple types of data to generate exhaustive information about the cultural heritage sites.
EO also enables the unique analysis of cultural heritage from the past (for example, by exploiting the declassified satellite imagery acquired in the 1960s) until the present, in order to observe its evolution and explore past and current human-environment interaction. Most of the scientific studies published on the topic of EO for cultural heritage are centered around the use of some remote sensing techniques for one or more similar cultural heritage sites. But considering the wealth of satellite data that is currently available, new research opportunities emerge in the areas of advanced data fusion, big data analysis techniques based on Artificial Intelligence (AI) / Machine Learning (ML), and open collaborative platforms that are easy to use by the cultural heritage authorities. The current study showcases the integration of conventional methodologies such as automatic classification, change detection or multi-temporal interferometry with AI/ML algorithms for the provision of services for cultural heritage monitoring, to support the effective resilience of cultural heritage sites against natural or anthropogenic risks.
The complex characterization of cultural heritage sites provided by these services is essential for the local and national cultural heritage management authorities due to the unparalleled knowledge provided, namely repeated, accurate and manifold information regarding, amongst others, the time evolution and the conservation state of the cultural heritage, along with the early identification of potential threats and degradation risks. The proposed cultural heritage monitoring services will also facilitate the formulation and implementation of appropriate protection and conservation policies and strategies.
This work was supported by a grant of the Romanian Ministry of Education and Research, CCCDI – UEFISCDI, project number PN-III-P2-2.1-PTE-2019-0579, within PNCDI III (AIRFARE project).
During World War II, over 10,000 buildings across North Norway were burnt to the ground by the German military in a scorched earth policy. In the aftermath of the war, over 20,000 reconstruction houses, 'gjenreisningshus', were built to rehouse the population. Based on a set of standard designs, with some variations, these homes were a new architectural style for the north, and were usually placed and aligned in a standardised way in accordance with contemporary ideas on urban design.
The University of Tromsø's Northern Homes 21 research programme is considering these homes in a range of ways: historical and cultural, and in terms of their potential for being incorporated into the green shift through new technologies. One of the options being examined is the potential to integrate photovoltaic panels into the roofs and other locations. To provide information on the potential for solar availability, a methodology and methods are being developed to produce a database that will include roof alignment, biological conditions, and localised elevation data to ascertain any obstructions by landforms or other structures.
Central to the research is respectful engagement with the Peoples of the North to coproduce knowledge requested by them. There will be a strong emphasis on providing communities with opportunities and support for knowledge and transfer of skills in utilising remote sensing resources.
Remote sensing data from spaceborne platforms are expected to provide significant input to the programme. Roof alignments will be extracted from sub-metre resolution imagery using machine learning methods, and Digital Elevation Models and estimates of building density will be used to estimate seasonal insolation factors. Remote sensing data at coarser resolution will also be used to model climate interactions with the urban fabric and to characterise the urban-rural setting. Remote sensing has several potential applications to the safety assurance aspects of the Northern Homes 21 project. Regeneration of the historic built environment needs to occur within the modern legal context and the associated safety expectations. To meet these, it is hoped to utilise remote imagery of northern Norway to complement other techniques in creating a safety assurance justification for the regeneration. Remote sensing imagery would prime a comprehensive map of the NH21 properties, identifying their orientation relative to anticipated future wind directions, their separation from adjacent properties and the potential combustibility of surface material on the surrounding terrain. The product of this analysis would have a number of components. First, a catalogue of properties by risk level for local fire services, to assist in the planning of fire prevention and response. Second, data to prime models for virtual firefighter training. In addition, the data would be used in the planning of new infrastructure such as external batteries and other energy storage facilities. This would identify minimum safe distances to ensure that, in the case of fire, the incident heat flux on surrounding structures – particularly the wooden buildings that are the focus of this project – remains lower than the 12.6 kW/m² value adopted in many building codes (Pesic, et al. 2018. Simulation of fire spread… Tehnicki vjesnik/Technical Gazette, 24(4)).
Scientists, engineers, polar historians, heritage scholars and other social scientists are encouraged to attend this session to gain information, establish and enhance their networks, and explore future opportunities for research.
Indigenous and peasant communities in the Andes have shaped their landscapes over millennia. In the south-central Andes' high-altitude valleys of NW Argentina, the enduring legacy of these activities can be seen today, despite more recent landscape changes and, indeed, the visible damage to local cultural heritage created, among other things, by systematic industrial activity. Predominant development and planning strategies often undermine local, indigenous and peasant priorities and perspectives on land, resources and lifeways, and ignore the long socio-environmental and cultural histories of their territories.
The 'Living Territories' research programme makes extensive and detailed use of high-resolution multispectral and topographic satellite remote sensing products in order to characterise the extent and nature of past local human agency, and to generate systems of data about the ancient relations between people and landscapes; from agricultural and water resources to communication and interactions, these relationships are still relevant for local contemporary indigenous and rural populations. The data collated in this way are then used in conjunction with a range of bespoke intercultural communicative and collaborative community activities in order to explore the diverse experience of the landscape as a living entity, within complex social collectives.
Our paper will focus on the methodological approach and the preliminary results of the exploratory mass-mapping exercise undertaken as part of a first, proof-of-concept phase of this research programme. The resulting information will help structure our generation of complex datasets about the ancient relations between indigenous people and landscapes, and will allow for the exploration of methods and concepts that integrate diverse forms of encoding space that prioritise local communities and their lived landscapes. Through this programme we seek to create bridges that fill the gaps between alternative experiences, perspectives, approaches, and perceptions of the landscape, in order to promote a range of inclusive public policies on cultural heritage.
In the current arena of satellite Synthetic Aperture Radar (SAR) missions, the COnstellation of small Satellites for Mediterranean basin Observation (COSMO-SkyMed) end-to-end Earth observation (EO) system of the Italian Space Agency (ASI), fully deployed and operational since 2011, represents the national excellence in space technology, not to forget its role as a Copernicus Contributing Mission. Four identical spacecraft, each equipped with a multimode X-band SAR sensor, provide imagery at high spatial resolution (up to 1 m) and short revisit time (up to 1 day in tandem configuration), for different operational scenarios (e.g. regular acquisition of time series, on demand, emergency).
These characteristics and the consistency of interferometric acquisition parameters over long periods of time, alongside easier accessibility owing to dedicated initiatives carried out by ASI to promote exploitation by a wider spectrum of users [1], contributed to a significant increase in the use of COSMO-SkyMed data, also in the field of documentation, study, monitoring and preservation of cultural and archaeological heritage. While interferometric applications more rapidly attracted interest in the geoscientific and heritage community for purposes of structural health monitoring, periodic monitoring and early warning, more effort was required to disseminate the potential of COSMO-SkyMed for more traditional archaeological applications, e.g. site detection and mapping.
To this purpose, a portfolio of use-cases has been developed by ASI on sites across the Mediterranean and Middle East regions, to demonstrate the usefulness of COSMO-SkyMed data in four main domains, i.e.: archaeological prospection, topographic surveying, condition (damage) assessment, and environmental monitoring [2].
Among the main lessons learnt, it is worth highlighting that:
- COSMO-SkyMed Enhanced Spotlight data are most suited for local/site-scale investigations and fine archaeological mapping, while StripMap HIMAGE mode provides the best trade-off between high spatial resolution (less than 5 m) and areal coverage (40 km swath width);
- Regular, frequent, and consistent time series, being acquired according to a predefined acquisition plan (e.g. the Background Mission) provide an extraordinary resource for documentation of unexpected events, either of damage or related to conservation activities, that discontinuous observations definitely fail to capture, or lower spatial resolution global ones may not be able to depict with sufficient detail and scale of observation;
- Depending on the type and kinematic of the process(es) to investigate, and equally the land cover and physical properties of the targets to detect, coherence-based approaches may be more effective to delineate occurred changes, such as landscape disturbance.
These experiences not only showcase how COSMO-SkyMed can complement established archaeological research methods, but also allow the better envisioning of where the new functions (e.g. increased spatial resolution, more flexibility, enhanced polarimetric properties) now provided by COSMO-SkyMed Second Generation (CSG) can further innovate.
To expand the discussion, the present paper will also focus on two aspects (and associated applications) that have not been fully explored yet by the user community:
1. The exploitation of COSMO-SkyMed in combination with other sensors, according to the CEOS concept of “virtual constellation”, for site detection, multi-temporal monitoring and back-analysis of recent hazard events of potential concern for conservation;
2. The benefits that less used, higher-level COSMO-SkyMed products, such as digital elevation models (DEMs), can bring to support specific tasks of interest for archaeologists, in integration with, or as an upgrade of, more established (mostly free) EO-derived DEM products.
The first topic will be demonstrated through the combination of COSMO-SkyMed images either from the Background Mission or bespoke acquisitions and Copernicus Sentinel-1 and Sentinel-2 time series, over three archaeological sites in Syria, to document otherwise unknown flooding events [3] and fires. The objective is to show how SAR and optical multispectral data from missions operating following different acquisition paradigms can be effectively exploited together, as if they were collected according to a coordinated observation scheme. Furthermore, the case studies highlight, on one side, the incredible wealth of information that is yet to be extracted from continuously growing image archives to document heritage and their conservation history; on the other, the role that thematic platforms, cloud computing resources and infrastructure can play to facilitate users to generate more advanced mapping products, regardless of their specialist expertise in SAR.
The second topic will be discussed in relation to two very recent experiences of regional-scale systematic mapping of archaeological mounds and detection of looting in Iraq. In the first case [4], the activity was carried out based on StripMap COSMO-SkyMed DEMs in comparison with the Shuttle Radar Topography Mission (SRTM) and Advanced Land Observing Satellite World 3D–30 m (ALOS World 3D) DEMs. The latter were purposely selected, given that they are the most common DEM sources used by archaeologists. In the second case, the comparison was made with the Cartosat-1 Euro-Maps 3D Digital Surface Model made available by ESA through its Earthnet Third Party Missions (TPM) programme and the ad-hoc call for R&D applications. The demonstration highlights that, thanks to the 10 m posting and the consequent enhanced observation capability, the COSMO-SkyMed DEM is advantageous for detecting both well preserved and levelled or disturbed tells, standing out by more than 4 m from the surrounding landscape. Through the integration with other optical products and historical maps, the COSMO-SkyMed DEM not only provides confirmation of the spatial location of sites known from the literature, but also allows for an accurate localization of sites that had not been previously mapped.
References:
[1] BATTAGLIERE M.L., CIGNA F., MONTUORI A., TAPETE D., COLETTA A. (2021) Satellite X-band SAR data exploitation trends in the framework of ASI’s COSMO-SkyMed Open Call initiative, Procedia Computer Science, 181, 1041-1048, doi:10.1016/j.procs.2021.01.299
[2] TAPETE D. & CIGNA F. (2019) COSMO-SkyMed SAR for Detection and Monitoring of Archaeological and Cultural Heritage Sites. Remote Sensing, 11 (11), 1326, 25 pp. doi:10.3390/rs11111326
[3] TAPETE D. & CIGNA F. (2020) Poorly known 2018 floods in Bosra UNESCO site and Sergiopolis in Syria unveiled from space using Sentinel-1/2 and COSMO-SkyMed. Scientific Reports, 10, article number 12307, 16 pp. doi:10.1038/s41598-020-69181-x
[4] TAPETE D., TRAVIGLIA A., DELPOZZO E., CIGNA F. (2021) Regional-scale systematic mapping of archaeological mounds and detection of looting using COSMO-SkyMed high resolution DEM and satellite imagery. Remote Sensing, 13 (16), 3106, 29 pp. doi:10.3390/rs13163106
The High City of Antananarivo (Madagascar), part of the UNESCO Tentative List since 2016, represents the urban historical centre and hosts one of the most important built cultural heritage sites of Madagascar: the Rova royal complex, as well as baroque and gothic-style palaces, cathedrals and churches dating back to the 19th century. The site is built on a hilltop (Analamanga hill) rising above the Ikopa river alluvial plain and rice fields, and is often affected by geohazards: during the winter of 2015, the twin cyclones Bansi and Chedza hit the urban area of Antananarivo, triggering floods and shallow landslides, while between 2018 and 2019 several rockfalls occurred from the hill's granite cliffs; all of these phenomena caused evacuations, damage to housing and infrastructure, as well as several casualties. In this complex geomorphological setting, rapid and often uncontrolled urbanization (often consisting of shacks and hovels) and improper land-use planning (illegal quarrying, dumping and slope terracing, slash-and-burn deforestation, lack of a proper drainage and sewer system) can seriously exacerbate slope instability and soil erosion, posing a high risk to the High City cultural heritage and to the infrastructure connected to the natural landscape (roads and pathways in particular).
In recent years, thanks to the availability of the Copernicus products and new satellite missions (such as ASI PRISMA), the integration of multi- and hyperspectral data has seen increasing use in the field of EO for land use and land cover mapping applications, for the evaluation of climate change impacts and for the monitoring of geohazards. The UNESCO Chair on Prevention and Sustainable Management of Geo-Hydrological Hazards has been collaborating since 2017 with Paris Region Expertise (PRX), the municipality of Antananarivo and the BNGRC (Bureau National de Gestion des Risques et des Catastrophes) to assess geohazards in the High City and thereby support the nomination of the site for the UNESCO World Heritage List. In this context, the use of EO data can give an important contribution to facing the challenges posed in the near future to this complex and fragile cultural heritage by growing urban pressure (a trend that has generally been increasing in developing African countries over the last few decades) and by environmental modifications in a context of climate change.
The aim of this work is to test the potential of Sentinel and PRISMA data for the monitoring of the High City of Antananarivo UNESCO zone and of the surrounding urban area and natural landscape. In particular, satellite multi- and hyperspectral data will be applied in a multi-scale methodology for an updated assessment of land cover-use, for highlighting areas frequently affected by flooding and prone to erosion/landsliding (e.g., bare residual and clay-rich soils, granite outcrops and abandoned quarries), for the evaluation of the urban sprawl in the Antananarivo urban area, as well as for the remote classification of the building vulnerability in the UNESCO core zone. The final goal is to implement a tailored, innovative and sustainable strategy to be shared with the institutions and actors involved in the protection of the High City of Antananarivo and used as a tool for land-use planning and management, for the detection of conservation criticalities, as well as for improving the site’s resilience to geohazards. The use of open-source data, platforms and tools can promote capacity building of local practitioners and end users (to be trained as local experts), and can facilitate the reproducibility of the methodology in other sites characterized by similar geomorphological and urban scenarios. Expected outcomes are also the improvement of the site’s touristic fruition in order to support the local economy and stimulate a community empowerment approach to sustainable heritage management.
Innovative UAV application of LIDAR for Cultural and Natural Heritage in Guatemala
The research aims to document the utility of lidar technology installed on UAV beyond-visual-line-of-sight (BVLOS) systems for mapping and conserving vast cultural landscapes in an archaeological context. The case study illustrated is the Petén tropical forest in the so-called Maya lowlands, containing, in addition to a significant ecological and biodiversity heritage, one of the most important archaeological testimonies of the ancient Maya civilization, spread throughout the tropical forest. The use of increasingly sophisticated sensors makes it possible to obtain a large amount of high-resolution, accurate data, which allows the post-processing of DEMs that are very useful for archaeological and geographical investigation. With this work, we want to involve the collaborating universities in proposing the research results to wider projects concerning the empowerment of local organizations. These organizations take care of the sites' maintenance or hold them in concession. The research project will help them in decisions concerning the detection of potential new sites and the preservation of those already excavated from a series of environmental and anthropogenic threats, which archaeologists have repeatedly denounced in their excavation campaigns. This would also greatly help increase the knowledge, use, and safety of the sites, some of which are impenetrable due to the dense vegetation that hides the archaeological remains. The lidar, however, penetrates the vegetation with its lasers, in our case with three pulses and a FOV of 70°. It is thus possible to obtain a DEM that gives the topography of the places by separating the ground surface from the height of the canopy and shrubs. The most complex process is interpreting these data, which can yield indications on the presence or absence of archaeological remains that are as concrete as they may be misleading. Therefore, it is essential to use not only the lidar-derived parameters for the different heights of the overflown sites but also a whole series of parameters that allow us to differentiate reflectance values and therefore hypothesize the presence or absence of an archaeological vestige. In the research, we document other possible applications useful for the geographic context investigated. Thick layers of earth and vegetation cover the pyramids, and this cover continuously decays and grows back. This vegetation is, in fact, a protection for the pyramids: it protects them from rain erosion but at the same time becomes a factor of biological and mechanical degradation. Many local scholars have raised the problem of vegetation management, also recognizing that removing the tons of earth that cover some pyramids would involve enormous expenditure on the part of the government. With lidar, we can calculate the volume of vegetation that covers the pyramids, thus giving indications on where and when to intervene. Continuous flights could monitor the environmental conditions in which the archaeological remains exist, preserving these places, so fragile and strong at the same time, from erosion and other ecological and anthropogenic threats. The research is conducted by two universities, as part of a Ph.D. in Spatial Archaeology, and a German agency that will provide the drone, which we will describe in the presentation, and the expertise to pilot it.
The poster will present the main technical lidar parameters that distinguish the photogrammetric mission planned in Guatemala, together with the expected results.
Carolina Collaro
Nowadays, Cultural Heritage is more and more endangered by a wide range of factors. The consequences of climate change, such as sudden and heavy rains and floods, together with ground deformation and building deterioration, are increasingly frequent worldwide. Monitoring the consequences of climate change is crucial since they constitute a new and growing threat, especially in areas not used to such destructive phenomena. However, the daily monitoring of cultural landscapes is also essential, not only for the detection of underground features but also for the understanding of natural and human-induced changes over the centuries.
The present work focuses on a series of SAR multi-frequency and multi-incidence angle analyses integrated with optical change detection techniques for multi-temporal monitoring of the land cover of archaeological sites and for the detection of archaeological features according to the stratigraphic patterns of the selected cultural heritage sites.
Sentinel-1 (C-band), ALOS PALSAR (L-band) and RADARSAT-2 (C-band) sensors constitute the starting set of SAR data and will be used especially for the monitoring and identification of surface and subsurface archaeological structures. While some of these data (ALOS PALSAR) offer a good historical reference (2005 to 2010), the Sentinel-1 time series provides recent and systematic monitoring opportunities. Copernicus Sentinel-2 and additional high-resolution optical EO data from ESA contributing missions will be used to characterize the effects caused by the different types of hazards affecting the cultural areas of interest. By detecting land use change over time and performing unsupervised classification, spectral index analysis and visual inspection, the analysis will focus on: i) structure erosion due to sandstorms, ii) flood mapping, iii) structure collapse due to extreme precipitation. The derived information will then be integrated in a dedicated GIS together with ancillary data such as historical aerial photographs, cartography, and geological and archaeological maps. Three cultural heritage sites have been selected: Gebel Barkal and the sites of the Napatan region (Sudan; site property: 1.8 km²; buffer zone: 4.5 km²) and Villa Adriana (Italy; site property: 0.8 km²; buffer zone: 5 km²), inscribed in the UNESCO World Heritage List in 2003 and 1999 respectively, and the archaeological area of Pompeii (Italy; site property: 1 km²; buffer zone: 0.25 km²), inscribed in the UNESCO World Heritage List since 1997.
The purpose of the work is to demonstrate how a multi-disciplinary approach can contribute to the identification of a scalable methodology that can be applied worldwide, at a time when the exploitation of satellite data alone does not appear to be an exhaustive tool for the preservation of cultural landscapes.
Remote sensing for Cultural Heritage is not a novel research field, and an unequivocal method capable of automatically detecting archaeological features still does not exist. The potential of such a complex, multidisciplinary study for monitoring and safeguarding purposes can support local governments in delivering better solutions for the management of cultural landscapes, resulting in savings on maintenance activities and in better planning and allocation of economic resources to the appropriate mitigation and preservation measures.
This presentation aims to consider the potential of Copernicus’ Sentinel-2 and Sentinel-5P missions to estimate the effect of climate change on cultural heritage. Undoubtedly, heritage across the globe is under various constraints resulting from a range of human-induced processes that can be observed in different regions. However, the IPCC 2021 Report leaves no doubt that climate change has become one of the most pressing issues on the scientific agenda. Two intertwined points emerging from this report require particular emphasis. First, widespread, rapid, and intensifying changes in every region of the Earth call for a global strategy for risk assessment. Second, undisputed human influence on the climate requires efficient methods to monitor greenhouse gas emissions. The EU Earth Observation Programmes addressed those issues by launching missions to generate data records that ensure autonomous and independent access to reliable information around the globe.
Climate change and related events (severe weather events, air pollution, etc.) have been recognised for some time as factors affecting natural and cultural heritage. UNESCO’s statistical analysis of the state of conservation of World Heritage properties (2013) includes major factors that were described in the IPCC report as “multiple different changes caused by global warming”, such as more intense rainfall and associated flooding, sea-level rise and coastal flooding, etc. Local monitoring systems have also been applied to observe changes caused by these events. However, the application of remote sensing data to cultural heritage protection and management has not yet been explored to its full extent. We can safely assume that the majority of archaeological applications of satellite imagery have focused on processes that can be directly (visually) observed in the data. Events such as the aforementioned flooding can be identified reasonably easily and their effects estimated accurately using relatively simple tools. But how can we approach processes that go beyond the visible spectrum, and how can we evaluate their effect on cultural heritage?
Recent advancements in remote sensing provide a range of analytical tools that help translate satellite data into physical changes in the climate and their effects upon societies and ecosystems. Cultural heritage may require a different set of ‘translating tools’ that help us understand the effect of climate change not on living organisms and/or ecosystems but on material structures. Using case studies that explore Sentinel-2 for land cover changes and Sentinel-5P for air pollution, we will address this conceptual and methodological gap. We will demonstrate issues that arise from attempts to adjust methods developed for natural areas and/or living organisms to cultural heritage sites. We also intend to provide a workflow for processing data (particularly Sentinel-5P) in the cultural heritage context. Overall, we will argue for the need to move from site-oriented, local-scale monitoring towards a global monitoring system for cultural heritage that explores more thoroughly the potential of the Copernicus missions.
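As a minimal sketch of one possible first step of such a workflow (not the authors' implementation), the following Python snippet averages Sentinel-5P Level-2 tropospheric NO2 columns over a bounding box around a heritage site; the folder name, bounding box and quality threshold are placeholders.

import glob
import numpy as np
import xarray as xr

# Hypothetical inputs: a folder of Sentinel-5P L2 NO2 granules and a bounding
# box around a heritage site (values are placeholders, not from the abstract).
granules = sorted(glob.glob("S5P_L2_NO2/*.nc"))
lon_min, lon_max, lat_min, lat_max = 12.4, 12.6, 41.8, 42.0

means = []
for path in granules:
    ds = xr.open_dataset(path, group="PRODUCT")          # L2 data live in the PRODUCT group
    no2 = ds["nitrogendioxide_tropospheric_column"]       # mol/m^2
    good = ds["qa_value"] >= 0.75                          # recommended quality filter
    inside = (
        (ds["longitude"] >= lon_min) & (ds["longitude"] <= lon_max)
        & (ds["latitude"] >= lat_min) & (ds["latitude"] <= lat_max)
    )
    sel = no2.where(good & inside)
    if np.isfinite(sel).any():
        means.append(float(sel.mean(skipna=True)))

print(f"granules with coverage: {len(means)}, mean NO2 column: {np.mean(means):.3e} mol/m2")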
Nowadays, Cultural and Natural Heritage are increasingly endangered by a wide range of factors. Consequences of climate change, such as sudden and heavy rains and floods, together with ground deformation and the consequences of human activities, are increasingly frequent worldwide. In particular, marine landscapes and protected areas are widely ignored and less monitored because of the difficulty of tracking legal and illegal vessel traffic on a daily basis, and they are at risk of human-induced hazards deriving from daily vessel activities and traffic: consider, for example, tank cleaning operations or disasters affecting natural habitats and areas close to the coast. Maritime traffic is thus the main impacting factor for these natural areas in open seas and along coasts. Unfortunately, the use of satellite images alone is not sufficient for this type of monitoring activity, and several data sources need to be integrated and properly identified to support decision makers and planners worldwide.
In the frame of the PLACE initiative, the present work focuses on setting the basis of a tool that takes into consideration several data sources at the European scale and provides a set of information layers for decision makers and planners, also taking into account the impact on the marine environment caused by maritime traffic.
The main data sources for this study are Sentinel-1 (C-band) in VV and VH polarizations and in both ascending and descending orbits, European marine vessel density maps from the European Marine Observation and Data Network (EMODnet), European marine natural protected areas identified in Natura 2000, OSM maps, QGIS for raster and vector data visualization and overlays, and Google Earth Engine (GEE) to process Sentinel-1 time series. The idea is to generate and combine different information layers in order to understand clearly which natural protected areas are affected by maritime vessel traffic at fragile European sites, and to demonstrate the scalability of the technologies used from local to regional and worldwide scales. Two local, one regional and one European use case have been identified: at the local scale, the UNESCO site of the Venetian Lagoon and its Adriatic coast, and the Valencia-Balearic Sea sites; at the regional scale, the North Sea area; and at the European scale, the European coast. Based on these preliminary use cases, we started from the generation of the sea lanes by computing the per-pixel maximum across a time series of Sentinel-1 images, using the GEE catalogue and processing capabilities (see the sketch below). The traffic map produced with GEE was then imported into QGIS, compared with the European marine vessel density maps (for tankers and cargo ships), and overlaid with the European marine natural protected area maps and OSM maps. The information gathered allows the most congested routes and the most impacted areas to be identified, in order to provide valuable information layers to decision makers in maritime and coastal planning and to better direct economic resources to the appropriate mitigation measures for the preservation of natural sites.
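A minimal sketch of the per-pixel maximum composite described above, using the Google Earth Engine Python API; the area of interest, date range and export settings are placeholders, not the study's actual configuration.

import ee

ee.Initialize()

# Placeholder area of interest (lon/lat box) and time window.
aoi = ee.Geometry.Rectangle([12.0, 44.8, 13.2, 45.8])
start, end = "2021-01-01", "2021-12-31"

# Sentinel-1 GRD scenes over the AOI; bright, moving ships survive in a
# per-pixel maximum composite while the darker sea background does not.
s1 = (
    ee.ImageCollection("COPERNICUS/S1_GRD")
    .filterBounds(aoi)
    .filterDate(start, end)
    .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "VH"))
    .filter(ee.Filter.eq("instrumentMode", "IW"))
    .select("VH")
)

traffic = s1.max().clip(aoi)   # per-pixel maximum backscatter (dB)

# Export for later overlay with EMODnet and Natura 2000 layers in QGIS.
task = ee.batch.Export.image.toDrive(
    image=traffic, description="s1_vh_max_traffic", region=aoi, scale=50
)
task.start()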
The effects of climate change, rising urbanisation, tourism and conflicting land uses, among others, threaten both cultural and natural heritage around the world. Given the value of cultural and natural heritage, all available technologies and tools should be put in place to ensure their valorisation and safeguarding. Recognising this necessity, the European Commission, together with the Council and the European Parliament, agreed on establishing a European Year of Cultural Heritage in 2018 (EYCH2018), which drew attention to the opportunities offered by European cultural heritage as well as the challenges it faces. This fostered discussions on the opportunity to create a dedicated Copernicus service for cultural heritage, and also on how new technologies and digital services can support the renaissance of Cultural and Creative Industries (CCIs) in Europe.
In 2018 Eurisy launched the “Space4Culture” initiative, aimed at fostering the use of satellite technologies to monitor, preserve and enhance cultural heritage. The Space4Culture initiative intends to give an overview of the different perspectives and interests that shape the field of space applications in the cultural and creative domains. Eurisy comes in to find new user communities and acts as a facilitator and matchmaker, with the conviction that it is not enough to bring space to people or to new user communities: it is about acting as a “space integrator” or a “space broker”. In 2018, on the occasion of EYCH2018, Eurisy organized a two-day conference on this topic, showcasing how operational satellite services support the management of historical cities, provide crucial information to safeguard heritage and enhance the creation of innovative cultural and artistic experiences.
The success stories collected by Eurisy show the distinctive added value of satellite applications in identifying and studying cultural heritage sites, monitoring natural heritage sites, and assessing and preventing potential damage, be it man-made or a consequence of climate change and geo-hazards.
Satellites can represent a game-changer for cultural heritage management. It is therefore fundamental to make satellite data more easily available to public administrations and to raise awareness of the profitability of investments in the aerospace field, which also benefit sectors one might not think of. However, it is also crucial to make sure that the research conducted by universities and space agencies effectively reaches the public administrations in charge of managing heritage. At the same time, such administrations should be duly involved in the development of new satellite-based services targeting natural and cultural heritage, and their operational needs and procedures should be taken into account.
In addition, there is a need for a holistic approach to the management of cultural and natural heritage that brings together entrepreneurs, researchers, space agencies and European institutions, and the political authorities responsible for managing heritage at the local level. Eurisy is eager to stimulate such a dialogue and to showcase its innovative approach to fostering the development and use of satellite-based applications to better manage and safeguard heritage. To this end, the association makes available articles, case studies and videos showcasing testimonials from cultural and natural heritage managers at the local and regional levels.
The project entitled ‘SpaCeborne SAR Interferometry as a Noninvasive tool to assess the vulnerability over Cultural hEritage sites (SCIENCE)’ introduces InSAR techniques for the protection of cultural heritage sites.
The four cultural heritage sites examined are: a) the Acropolis of Athens and b) the Heraklion City Walls in Crete (Greece), and c) the Ming Dynasty City Walls in Nanjing and d) the Great Wall in Hebei and Beijing (China).
In the framework of the SCIENCE project, state-of-the-art multitemporal Synthetic Aperture Radar Interferometry (MT-InSAR) techniques are applied for the detection of ground deformation in time and space. These remote sensing techniques are capable of measuring deformation with millimetric accuracy. The MT-InSAR techniques used are: Persistent Scatterers Interferometry (PSI), Distributed Scatterers Interferometry (DSI) and Tomography-based Persistent Scatterers Interferometry (Tomo-PSInSAR). In addition to the radar data, high-resolution optical data are used for the identification of the persistent scatterers.
The main datasets used are: a) open-access ERS-1/2 and Envisat SAR datasets, Copernicus SAR datasets (Sentinel-1A and B) and third-party mission high-resolution SAR datasets (TerraSAR-X Spotlight and COSMO-SkyMed); b) optical datasets from Pléiades 1A and 1B (spatial resolution up to 0.5 m), GF-2 (up to 0.8 m) and Sentinel-2 (up to 10 m).
Moreover, the interferometric results are validated through a) in-situ measurements within the geological and geotechnical framework and b) data associated with the structural health of the cultural heritage sites.
In addition, the SCIENCE project is the result of bilateral cooperation between a Greek delegation, comprising the Harokopio University of Athens, the National Technical University of Athens, Terraspatium S.A., the Ephorate of Antiquities of Heraklion (Crete) and the Acropolis Restoration Service (Athens) of the Ministry of Culture and Sports, and a Chinese delegation, comprising the Chinese Academy of Sciences (Institute of Remote Sensing and Digital Earth) and the International Centre on Space Technologies for Natural and Cultural Heritage (HIST) under the auspices of UNESCO (HIST-UNESCO).
In conclusion, SCIENCE introduces a validated pre-operational, non-invasive system and service for risk assessment of cultural heritage sites, including their surrounding areas. Such a service could be very beneficial for institutions, organizations, stakeholders and private agencies operating in the cultural heritage protection domain.
The detection and assessment of damage caused by violent natural events, such as wildfires and floods, is a crucial activity for estimating losses and providing a prompt and efficient restoration plan, especially in cultural and natural heritage areas. For major wildfire or flood events, a typical assessment scenario consists of retrieving post-event EO-based imagery, derived from aerial or satellite acquisitions, to visually identify damage and disruption. The challenge of this task typically resides in the complex and time-consuming activity carried out by domain experts: assessments are usually produced manually, by analyzing the available images and, when possible, in-situ information. We automated these tasks by implementing an ML-based pipeline able to process satellite data and provide a delineation of flooded and burned areas, given a specific region and time interval as input. Sentinel-1 and Sentinel-2 satellite imagery from the ESA Copernicus Programme has been exploited to train and validate the flood and burned-area delineation models, respectively. Both approaches are based on state-of-the-art segmentation networks and are able to generate binary masks for a given area and time interval. An extensive experimental phase was carried out to optimize hyperparameters, leading to good performance in both the flood mapping and the burned-area delineation scenarios.
One of the objectives of the Rapid Damage Assessment service proposed here is the detection and delineation of burned areas caused by wildfire events. Our approach consists of a deep learning model that performs a binary classification to estimate the areas affected by the forest fire; the model obtains an average F1-score of 0.88 on the test set. Another main objective of the Rapid Damage Assessment service is the delineation of flooded areas caused by the overflow of water basins. To tackle this task, we implemented a deep learning solution that performs pixel-wise binary classification of an image. Several training iterations have been tested, starting from different datasets and architectures, and the average F1-score produced is 0.44.
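To illustrate how such a pipeline turns a segmentation network's output into the binary masks and F1-scores mentioned above, the following generic PyTorch sketch can be considered; it is not the project's actual code, and the model file, input layout and threshold are assumptions.

import torch

# Hypothetical trained segmentation network saved with torch.save(model, ...);
# it maps a (batch, channels, H, W) image stack to per-pixel logits.
model = torch.load("rda_flood_segmentation.pt", map_location="cpu")
model.eval()

def delineate(image: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Return a binary mask (1 = flooded/burned) for one multi-band image.

    `image` is expected as (channels, H, W), e.g. stacked Sentinel-1 or
    Sentinel-2 bands already preprocessed to the network's input range.
    """
    with torch.no_grad():
        logits = model(image.unsqueeze(0))        # (1, 1, H, W)
        prob = torch.sigmoid(logits)[0, 0]        # per-pixel probability
    return (prob >= threshold).to(torch.uint8)

def f1_score(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Pixel-wise F1 between binary masks, the metric reported above."""
    tp = ((pred == 1) & (target == 1)).sum().item()
    fp = ((pred == 1) & (target == 0)).sum().item()
    fn = ((pred == 0) & (target == 1)).sum().item()
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0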
The Rapid Damage Assessment service is currently deployed within SHELTER (Sustainable Historic Environments holistic reconstruction through Technological Enhancement and community-based Resilience), an ongoing project funded by the European Union's Horizon 2020 research and innovation programme. The project aims at developing a data-driven and community-based knowledge framework that will bring together the scientific community and heritage managers with the objective of increasing resilience, reducing vulnerability, and promoting better and safer reconstruction in Historic Areas.
Among the different Copernicus-based solutions developed in the context of the SHELTER project, the above-mentioned services represent the most mature ones, but further developments are foreseen. The different Copernicus core services in fact already have internally the relevant sources of satellite imagery (such as the Sentinels and the Contributing Missions), models and in-situ data sources to cover a large part of the user requirements expressed by cultural and natural heritage user communities. Nevertheless, the development of specific products and/or the adaptation of existing ones is needed to respond to the specific requirements of the SHELTER use cases.
The risk to cultural and natural heritage (CNH) as a consequence of natural hazards and the impact of climate change is globally recognized. The assessment and monitoring of these effects impose new and continuously changing conservation activities and urgently call for innovative preservation and safeguarding approaches, particularly under extreme climate conditions.
The present contribution aims at illustrating the “Risk mapping tool for cultural heritage protection”, specifically dedicated to the safeguarding of CNH exposed to extreme climate change, developed within the Interreg Central Europe project STRENCH (2020-2022), whose development is strongly based on a user-driven approach and on multidisciplinary collaboration among the scientific community, public authorities and the private sector (https://www.protecht2save-wgt.eu/).
The “risk mapping tool” provides hazard maps for Europe and the Mediterranean Basin where CNH is exposed to heavy rain, flooding and prolonged drought. The risk level is assessed by evaluating extreme changes in precipitation and temperature using the climate extreme indices defined by the Expert Team on Climate Change Detection Indices (ETCCDI), illustrated by the sketch after the data list below, and by integrating data from:
1) Copernicus C3S ERA5 Land products (~9 km resolution, from 1981 at monthly/seasonal/yearly time scale).
2) Copernicus C3S ERA5 products (~31 km – 0.25° resolution, from 1981 at seasonal time scale).
3) NASA GPM IMERG products (10 km resolution, from 2000 at seasonal time scale).
4) Regional Climate Models from the Euro-CORDEX experiment under two different scenarios (RCP4.5 and RCP8.5) (12 km resolution, 2021-2050 and 2071-2100).
5) State-of-the-art observational dataset E-OBS (25 km resolution, 1951-2016).
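A minimal sketch (not part of the tool itself) of how one ETCCDI-style extreme precipitation index, Rx1day (the monthly maximum 1-day precipitation), can be derived with xarray from a daily precipitation field such as ERA5-Land total precipitation; the file name, variable name and periods are assumptions.

import xarray as xr

# Hypothetical daily-aggregated precipitation file (e.g. derived from
# Copernicus C3S ERA5-Land "total precipitation"), in metres per day.
ds = xr.open_dataset("era5_land_daily_tp.nc")
pr_mm = ds["tp"] * 1000.0                      # convert m/day -> mm/day

# ETCCDI Rx1day: for each month, the maximum 1-day precipitation amount.
rx1day = pr_mm.resample(time="1MS").max("time")

# A simple extreme-change signal: compare a recent period with the reference
# climatology, per grid cell (periods are placeholders).
ref = rx1day.sel(time=slice("1981", "2010")).mean("time")
recent = rx1day.sel(time=slice("2011", "2021")).mean("time")
rx1day_change = recent - ref                   # mm/day, positive = intensification
rx1day_change.to_netcdf("rx1day_change.nc")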
The tool allows users to rank the vulnerability, at the local scale, of the heritage categories under investigation, taking into account three main requirements: susceptibility, exposure and resilience. The functionalities of the “risk mapping tool” are currently being tested at European case studies representative of cultural landscapes, ruined hamlets, and historic gardens and parks.
The application of Copernicus C3S and Earth Observation-based products, and their integration with climate projections from regional climate models, constitutes a notable innovation that will deliver a direct impact on the management of CNH, with high potential to be scaled to new sectors under threat from climate change.
By achieving the planned objectives, STRENCH is expected to proactively target the needs and requirements of stakeholders and policymakers responsible for disaster mitigation and the safeguarding of CNH assets, and to foster the active involvement of citizens and local communities in the decision-making process.
The straightforward access to remote sensing data for archaeological research currently provided by open platforms such as Copernicus is putting the spotlight on the urgency of developing or advancing automated workflows able to streamline the examination of such data and unearth meaningful information from them. Automated detection of the ancient human footprint in satellite imagery has so far seen limited (although promising) progress: algorithms developed to this end are usually specific to a single object category, or to a few categories, and show limited accuracy. This strongly limits their application and restricts their usability in other contexts and situations.
Advances in fine-tuning workflows for the automatic recognition of target archaeological features are being trialled within the framework of the Cultural Landscapes Scanner (CLS) project, a collaboration between the Italian Institute of Technology and ESA. The project tackles the shortcomings of site-specific algorithms by developing novel and more generic AI workflows based on a deep encoder/decoder neural network that exploits the availability of a large number of unlabelled EO multispectral data and addresses the lack of a priori knowledge. The methodology is based on an encoder/decoder network that is pre-trained on a large set of unlabelled data. The pre-trained encoder is then connected to another decoder and the network is trained on a small, labelled dataset. Once trained, this network enables the identification of various classes of CH sites while requiring only a small set of labelled data.
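A schematic two-stage training sketch in PyTorch, mirroring the pretrain-then-fine-tune strategy described above; the layer sizes, band count and class count are illustrative only and do not reflect the actual CLS architecture.

import torch
import torch.nn as nn

BANDS, CLASSES = 13, 5   # e.g. Sentinel-2 bands, hypothetical CH site classes

encoder = nn.Sequential(
    nn.Conv2d(BANDS, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
recon_decoder = nn.Conv2d(64, BANDS, 3, padding=1)     # stage 1: reconstruct input
seg_decoder = nn.Conv2d(64, CLASSES, 1)                # stage 2: per-pixel classes

def pretrain(unlabelled_loader, epochs=1):
    """Stage 1: self-supervised reconstruction on unlabelled multispectral tiles."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(recon_decoder.parameters()))
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in unlabelled_loader:                    # x: (B, BANDS, H, W)
            loss = loss_fn(recon_decoder(encoder(x)), x)
            opt.zero_grad(); loss.backward(); opt.step()

def finetune(labelled_loader, epochs=1):
    """Stage 2: attach a new decoder and train on a small labelled dataset."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(seg_decoder.parameters()))
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in labelled_loader:                   # y: (B, H, W) class indices
            loss = loss_fn(seg_decoder(encoder(x)), y)
            opt.zero_grad(); loss.backward(); opt.step()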
The experimental results on Sentinel multispectral datasets show that this approach achieves performance close to that of methods tailored to detecting a single object category, while improving the identification accuracy when detecting different classes of CH sites. The novelty of this approach lies in the fact that it addresses the lack of both a priori knowledge and labelled training information, which are the prime bottlenecks that prevent the efficient use of machine learning for the automatic identification of buried archaeological sites.
The Copernicus Programme has revolutionized Earth Observation and the uptake of EO data among public and private users. It is now becoming a foundation of the EU's leadership in the global sustainability transformation and in the monitoring of ambitious environmental and security goals.
However, Copernicus' potential can only be fully exploited with complementary data sources, including commercial data. Many of these needs are already met through the Copernicus Contributing Missions (CCM), but this programme does not fully exploit the range of data that New Space companies can provide.
To quantify the benefits of using additional commercial data, we performed a Cost Benefit Analysis (CBA) of the implications for European policy were the EC to directly use commercial data to monitor progress against objectives, taking advantage of the improved resolution and higher cadence.
Although all aspects of EU policy were considered, the key focus was on the European Green Deal, for which EO data has important roles in monitoring land use changes, farming practices, soil degradation, biodiversity and other key parameters.
Cost-Benefit Analysis (CBA) is a systematic approach used to compare completed or potential courses of action, or to estimate the value of a decision, project or policy against its cost. CBA has long been a core tool of public policy and is used across the EU institutions. It helps decision makers to have a clear picture of how society would fare under a range of policy options for achieving particular goals. This is particularly the case for the development of environmental policy, where CBA is central to the design and implementation of policies in many countries.
In this case, the aim of the CBA was to determine in a quantified way the benefits and added value of universal direct access to commercial data by the relevant stakeholders at European level, compared against a baseline of Sentinel data plus Copernicus Contributing Missions. The focus of the study was on high cadence optical Very High Resolution (more specifically VHR2) data.
To achieve this, the use of EO data at the European level (European Commission (EC), EU agencies and entrusted entities) was analysed. A range of case studies was selected for detailed analysis, allowing us to build a picture of how improved data translate into benefits for the end user. This was then used to inform a macro analysis of the benefits to Europe as a whole, including both monetary and non-monetary benefits.
The outputs of this study capture where commercial EO data can best help the EC to meet its green deal objectives, complementing the existing Copernicus data and services, and may also provide useful inputs for future needs of the Copernicus programme.
There is a lot of talk about making EO data more accessible, but not much is said about the real obstacle: cost. By driving down the cost of the data, it becomes affordable to more entities, be they channel partners and resellers, small governments, humanitarian aid and disaster relief organizations, commercial tech start-ups, or research and academic institutions. This is one of the many factors Satellogic has changed to drive industry and end-user adoption of EO data.
Our aim is to empower innovation across the public and private sectors, enabling more end-users to access and leverage the power of EO data, and thus, developing new solutions for improved outcomes like food security, sustainable agriculture, environmental conservation and restoration, public safety, and other Earth Observation missions.
With unrivaled unit economics, we manufacture and operate our satellites at a much lower cost than competitors; each is built with three core capabilities: high-resolution multispectral imagery, hyperspectral imagery, and full-motion video. This also uniquely positions us to rapidly produce more satellites for increased capacity and revisit frequency; we project to have 300+ in orbit by 2025 for daily remaps of the entire planet. This unique capability will enable greater, more timely decisions as well as consistency for collaborative projects.
Our Aleph platform will increase access via web application or API integration and features differentiated pricing to help organizations get the data they need within budget. In alignment with our mission to democratize access to Earth Observation data, pricing is dynamically determined by end use and capability constraints.
We believe collaboration is key, which is why we are working with companies like ConnectEO and EUROGI to increase access across borders, markets, and industries. By making our Earth Observation data more affordable and accessible, we enable more organizations to leverage geospatial intelligence to develop innovative new solutions to tackle the world’s most pressing problems.
Low-lying lands are highly vulnerable to sea-level changes, storm surges and flooding. Changes in the water table associated with excess rain or droughts can also impact sanitation conditions, potentially leading to disease outbreaks, even in the absence of floods.
In the ESA WIDGEON (Water-associated Infectious Diseases and Global Earth Observation in the Nearshore) project, one of the study areas is the coastal district of Ernakulam in Kerala, India. Ernakulam, low-lying, bordering the sea, criss-crossed by the waters of the Vembanad Lake and wetland system, and home to the biggest city in Kerala (Kochi), is prone to frequent flooding, storm surges and fluctuations in the water table. These extreme events can lead to mixing of sewage, for example from septic tanks, with the lake and coastal waters. Our earlier studies have shown that these waters have high levels of bacterial pollution, in particular from Vibrio cholerae and Escherichia coli, both showing resistance to multiple antibiotics. In this context, it is important to improve sanitation practices to build resilience, as well as to put in place robust mitigation measures in the event of extreme events. To this end, we have been developing a smartphone application which will enable people living in vulnerable areas to enter their health and sanitation information into an online repository using their smartphones. The information collected can be used to develop a sanitation map for the region. In the event of natural disasters, citizens would then be able to update their sanitation and health information immediately, using their mobile phones, so that the dynamically updated maps can be used to direct mitigation measures to the most susceptible areas.
Our plan is to use this simple and cost-effective method as a contribution to building a flood-resistant Kerala. Success of the endeavour would depend very much on communication between the scientists designing the experiment, the citizen scientists contributing the data, and the government and non-governmental bodies engaged in mitigation measures. It is also important for the citizens to realise that they are part of developing a system that would be beneficial to them in the long run.
EO4GEO, the Erasmus+ Sector Skills Alliance for the space/geoinformation sector, has developed an ontology-based Body of Knowledge (BoK) over the past four years. In practice, this BoK covers the Earth Observation and Geoinformation (EO*GI) professional domain, and much less the upstream part of the space sector (Hofer et al., 2020). It contains concepts (theories, methodologies, technologies, applications, etc.) that are relevant for the domain and that need to be covered, amongst others, in education and training activities. The BoK does not only contain those concepts, but also a short abstract or description, the author(s) or contributor(s), the required knowledge and skills in terms of learning outcomes, and external references (books, papers, training modules, etc.). Furthermore, the concepts are related to each other where relevant; relationships are variable and include ‘sub-concept-of’, ‘prerequisite’, ‘similar’, etc. (Stelmaszczuk-Górska et al., 2020). The information in the BoK forms the basis for the design of curricula including learning paths, the annotation of documents such as job descriptions and CVs, the definition of occupational profiles and much more. An ecosystem of tools has been developed for doing so.
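To make this structure concrete, a minimal sketch of how such concepts and typed relationships could be represented as a graph in Python (using networkx); the concepts, outcomes and relations shown are invented examples, not actual BoK entries.

import networkx as nx

# A toy excerpt of an ontology-based Body of Knowledge: concepts carry a short
# description and learning outcomes, edges carry a typed relationship.
bok = nx.DiGraph()
bok.add_node("GI", description="Geographic Information fundamentals",
             learning_outcomes=["explain spatial reference systems"])
bok.add_node("EO", description="Earth Observation fundamentals",
             learning_outcomes=["describe passive vs. active sensors"])
bok.add_node("ImageClassification", description="Thematic classification of imagery",
             learning_outcomes=["apply a supervised classifier to a scene"])

bok.add_edge("ImageClassification", "EO", relation="sub-concept-of")
bok.add_edge("ImageClassification", "GI", relation="prerequisite")
bok.add_edge("EO", "GI", relation="similar")

# Example query used when designing a learning path: everything that a
# curriculum must cover before teaching "ImageClassification".
prereqs = [v for _, v, d in bok.out_edges("ImageClassification", data=True)
           if d["relation"] == "prerequisite"]
print("prerequisites:", prereqs)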
The BoK describes, in a certain sense, the knowledge base for the EO*GI domain, which is in its own right a relatively vast domain. But it certainly does not exist in isolation. The sector is by default linked to and intertwined with many other domains that influence each other: engineering, informatics, mathematics, physics, and many other fields. Other technologies (e.g. information science) and businesses and applications (sectorial activities such as maritime transport, insurance, security, agriculture, etc.) are very relevant as well, and influence what happens in the sector. Because the world is continuously changing, the sector is changing too, and so do the knowledge, skills and competencies that are required to help answer the world problems and challenges we face today (Miguel-Lago et al., 2021). As a result, the BoK is a living entity that is continuously evolving.
Figure 1: The EO*GI Science & Technology domain (Vandenbroucke, 2020, based on diBiase et al., 2006)
In the current version of the BoK for EO*GI, the EARSC taxonomy, which defines the common ‘language’ of the European remote sensing companies, has been integrated; it is strongly linked to their thematic and market view on the domain (EARSC, 2021). The BoK is therefore certainly not only a scientific but also a practical tool. Moreover, the aim of the BoK for EO*GI is not to integrate all the concepts of these other domains, which would be a ‘mission impossible’, but rather to connect to other BoKs, vocabularies or ontologies where possible, and vice versa to convince other domains to use a similar approach to describe their own domain. In the course of the EO4GEO lifetime, several other sectors have already shown interest in developing their own BoK. The International Cartographic Association (ICA) showed interest, as did the University Consortium for Geographic Information Science in the US (UCGIS); both are active in the EO*GI field. Other sectors have shown interest as well: the European defence (ASSETs+) and automotive (DRIVES) sectors, as well as the eGovernment sector, which is dealing with the digital transformation of governments (European Commission, 2021).
The idea has grown to evolve towards a series of interconnected vocabularies and ontologies using a similar approach and sharing the same tools. In that way, each community can develop its own BoK while also referring to the other communities' concepts, relevant references, etc. For example, the automotive sector could detail aspects related to Intelligent Transport Systems (ITS), which are related to and interesting for the EO*GI sector as well. Instead of developing that sub-domain in the BoK for EO*GI, it could connect to the BoK of the automotive domain, as well as to the Positioning, Navigation and Timing (PNT) ontology currently being developed by ESA.
The paper will present the BoK for EO*GI and its content, as well as how it is maintained through the Living Textbook (LTB) tool. It also presents the results of an extensive exercise to use the same environment for the location-enabled Digital Government Transformation (DGT) domain (eGovernment), for which an ontology-based Knowledge Graph has been developed. This was done by using the same environment and text mining tools to identify concepts, definitions and relationships. Moreover, a semi-automated approach was used to search for and identify synonyms (and hyponyms and hypernyms) in other glossaries, vocabularies and ontologies to enrich the Knowledge Graph. It is believed that the resulting interconnected BoKs will better describe the EO*GI field and will enrich the EO*GI knowledge base.
References
DiBiase, D., DeMers, M., Johnson, A., Kemp, K., Luck, A. T., Plewe, B., Wentz, E., 2006. Geographic Information Science and Technology Body of Knowledge. Association of American Geographers and University Consortium for Geographic Information Science, Washington. http://downloads2.esri.com/edcomm2007/bok/GISandT_Body_of_knowledge.pdf (accessed on 8 December 2021).
European Association for Remote Sensing Companies (EARSC) (2021). EO Taxonomy. https://earsc-portal.eu/display/EOwiki/EO+Taxonomy.
European Commission (2021). European Location Interoperability Solutions for e-Government (ELISE) Action, part of the ISA² programme, ran by the Joint Research Center. https://joinup.ec.europa.eu/collection/elise-european-location-interoperability-solutions-e-government/about
Hofer, B., Casteleyn, S., Aguilar‐Moreno, E., Missoni‐Steinbacher, E. M., Albrecht, F., Lemmens, R., Lang, S., Albrecht, J., Stelmaszczuk-Górska, M., Vancauwenberghe, G., Monfort‐Muriach, A. (2020). Complementing the European earth observation and geographic information body of knowledge with a business‐oriented perspective. Transactions in GIS, 24(3), 587-601. https://doi.org/10.1111/tgis.12628
Miguel-Lago, M., Vandenbroucke, D. and Ramirez, K. (2021). Space / Geoinformation Sector Skills Strategy in Action. Newsletter of EO4GEO: http://www.eo4geo.eu/.
Stelmaszczuk-Górska, M.A., Aguilar-Moreno, E., Casteleyn, S., Vandenbroucke, D., Miguel-Lago, M., Dubois, C., Lemmens, R., Vancauwenberghe, G., Olijslagers, M., Lang, S., Albrecht, F., Belgiu, M., Krieger, V., Jagdhuber, T., Fluhrer, A., Soja, M.J., Mouratidis, A., Persson, H.J., Colombo, R., Masiello, G. (2020). Body of Knowledge for the Earth Observation and Geo-information Sector - A Basis for Innovative Skills Development. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B5-2020, 15–22, https://doi.org/10.5194/isprs-archives-XLIII-B5-2020-15-2020.
Vandenbroucke, D. (2020). On ontology-based Body of Knowledge for GI and EO. Presentation at the joint 2nd EO Summit (EO4GEO) and Eyes-on-Earth Road Show.
Due to its unique combination of excellent global coverage (daily, swath width 2600 km) and relatively high spatial resolution (7 × 7 km²), the Sentinel-5 Precursor (S5P) satellite with its TROPOMI instrument is a game changer for global atmospheric observations of the greenhouse gas methane. As shown in several peer-reviewed publications, the S5P methane observations provide important information on various methane sources such as oil and gas fields. Two groups have developed retrieval algorithms which have been used to generate multi-year data sets of column-averaged dry-air mole fractions of atmospheric methane, denoted XCH4 (in ppb), from the S5P spectral radiance measurements in the shortwave infrared (SWIR) spectral region. SRON has developed RemoTeC, which is used to produce the operational Copernicus XCH4 data product publicly available via the Copernicus Open Access Hub (https://scihub.copernicus.eu/). The second algorithm is the Weighting Function Modified DOAS (WFMD) algorithm of the Institute of Environmental Physics (IUP) of the University of Bremen (IUP-UB). WFMD, which was initially developed for SCIAMACHY, has been further developed and optimized for scientific S5P XCH4 retrievals in the context of the ESA Climate Change Initiative (CCI) project GHG-CCI+ (https://climate.esa.int/en/projects/ghgs/). The S5P WFMD XCH4 data products are also publicly available (e.g., https://www.iup.uni-bremen.de/carbon_ghg/products/tropomi_wfmd/). Here we present comparisons of the XCH4 data products generated by the two different algorithms, focusing on regions showing locally elevated XCH4. These comparisons have been carried out primarily in the context of the ESA project Methane+ (https://methaneplus.eu/). Most of the regions showing locally elevated XCH4 in the S5P data sets are known major source regions of atmospheric methane. However, for some regions we have also identified potential problems of the satellite retrievals, for example due to so far unaccounted spectral dependencies of the surface reflectivity. We show that the use of more than one data product helps to distinguish localized methane enhancements originating from local emission sources from erroneous enhancements caused by issues in the currently used retrieval algorithms. This is important in order to reliably detect and quantify methane emissions originating from major anthropogenic and natural methane sources, which is relevant for emission monitoring activities related to, for example, the UNFCCC Paris Agreement on climate change.
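A minimal, purely illustrative sketch of the kind of two-product comparison described above; the gridded files, variable names, region and the simple domain-mean background are all assumptions, not the Methane+ processing.

import xarray as xr

# Hypothetical monthly-gridded XCH4 fields (ppb) from the two algorithms,
# regridded to a common lat/lon grid; file and variable names are placeholders.
wfmd = xr.open_dataset("s5p_xch4_wfmd_gridded.nc")["xch4"]
remotec = xr.open_dataset("s5p_xch4_remotec_gridded.nc")["xch4"]

def enhancement(xch4, lat_slice, lon_slice):
    """Local mean minus a surrounding background, a simple enhancement proxy."""
    local = xch4.sel(latitude=lat_slice, longitude=lon_slice).mean()
    background = xch4.mean()          # crude background: domain mean
    return float(local - background)

# Placeholder region with locally elevated XCH4.
region = dict(lat_slice=slice(30, 35), lon_slice=slice(50, 55))
dw, dr = enhancement(wfmd, **region), enhancement(remotec, **region)

# If only one product shows the enhancement, it may be a retrieval artefact
# (e.g. an unaccounted spectral dependence of surface reflectivity) rather
# than a real source signal.
print(f"WFMD: {dw:+.1f} ppb, RemoTeC: {dr:+.1f} ppb, difference: {dw - dr:+.1f} ppb")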
The microwave radiometers are part of the NASA Atmosphere Observing System (AOS) mission, which could also incorporate radar, lidar, spectrometers and polarimeters. One of the goals of the mission is to characterize 1) the vertical flow of hydrometeors at different altitudes in convective systems, as well as the horizontal dimensions of the different parts composing these systems, and 2) the water vapour profile, from a non-Sun-synchronous orbit (similar to the Global Precipitation Measurement mission's orbit) with a 55° inclination, in the 2028-2033 timeframe.
From a space segment perspective, CNES has proposed to NASA a contribution to the AOS mission consisting of two similar passive microwave radiometers embarked on a train of two satellites.
The microwave sounder SAPHIR-NG is a cross-track scanning total-power microwave radiometer measuring the Earth's radiation in three main bands comprising a total of ten discrete frequency channels, ranging from 89 GHz to 325.15 GHz. It is designed to measure atmospheric humidity as well as hydrometeor profiles and integrated content.
The 89 GHz quasi-window measurement is very useful for precipitation measurements.
The atmospheric opacity spectrum shows water vapour absorption lines centred around 183.31 GHz and 325.15 GHz. Measurements at these frequencies will enable the water vapour vertical profile to be estimated in clear-sky conditions and, for the channels slightly further from the absorption line, the hydrometeor vertical profiles in convective cells to be evaluated. The sounding principle consists of selecting channels so as to obtain the maximum sensitivity to water vapour and ice particles at different altitudes.
In addition to humidity profile retrieval under clear-sky and oceanic conditions, the evolution of the hydrometeor vertical profiles will be characterized through the information provided by a train of two radiometers delayed by a time interval of a few minutes (typically between 30 s and 4 minutes). The acquisition of the radiance time derivative around the two absorption lines at 183 GHz and 325 GHz will enable the evolution in time of the hydrometeor vertical content in convective systems to be characterized, and thus the condensed-water flux cycles to be analysed. The 89 GHz channel provides a measurement of the precipitation cells, whose signatures are often strongest in this high-frequency microwave window.
The SAPHIR-NG instrument has a direct heritage from its predecessor SAPHIR, embarked on the Megha-Tropiques satellite, and from the MicroWave Imager (MWI) and Ice Cloud Imager (ICI), both part of the MetOp Second Generation mission.
The instrument collects the radiation coming from the Earth by means of a rotating antenna composed of a parabolic reflector and a quasi-optical network. The rotation of the antenna performs the cross-track scan, and the Earth brightness temperature is acquired over an angle of +/- 43° in azimuth. On every rotation, two other angular sectors are used to calibrate the measurements: first the antenna collects the energy coming from the cold sky, and then it looks at a fixed calibrated microwave target providing the receivers with a known and stable input noise power.
The required stringent radiometric sensitivity implies having the receivers as close as possible to the horns to reduce the receiver temperature, and thus implementing some of the receivers in separate blocks. The purpose of the receivers is to deliver signals whose magnitude is proportional to the incoming microwave power in the relevant band (i.e. the brightness temperature of the scene). The linearity of the radiometer is ensured by the two-point (hot and cold) calibration process. Depending on specific channel requirements and technical constraints, direct-detection or heterodyne configurations are used.
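As a reminder of how a two-point (hot/cold) calibration converts raw counts into scene brightness temperature, a small sketch under the usual linear-radiometer assumption; the numerical values are illustrative only and are not SAPHIR-NG figures.

def two_point_calibration(counts_scene, counts_cold, counts_hot, t_cold, t_hot):
    """Linear two-point calibration of a total-power radiometer.

    The cold-sky and warm on-board target views give two (counts, temperature)
    pairs; assuming a linear receiver, the scene brightness temperature follows
    by interpolation between them.
    """
    gain = (t_hot - t_cold) / (counts_hot - counts_cold)   # K per count
    return t_cold + gain * (counts_scene - counts_cold)

# Illustrative numbers only: cold sky ~2.7 K, on-board target ~300 K,
# and a scene view in between.
tb_scene = two_point_calibration(
    counts_scene=15500, counts_cold=12000, counts_hot=20000,
    t_cold=2.7, t_hot=300.0,
)
print(f"scene brightness temperature: {tb_scene:.1f} K")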
The Instrument Control Unit (ICU) mainly performs the power distribution and the digitization of the receiver signals. Hyperspectral processing is being studied as an instrument option and could provide 256 frequency channels within a 4 GHz bandwidth around the 183 GHz and 325 GHz absorption lines.
The specification for co-location and co-registration of the pixels implies the use of a Quasi-Optical Network (QON). Another advantage of the QON design is that it minimizes the RF losses between the feed-horns and the RF receivers by splitting the channels in free space.
Finally, the scan mechanism (composed of a Mechanical Drive Equipment and a Scan Control Mechanism) ensures the rotation of the reflector.
The present paper will provide an overview of the SAPHIR-NG instrument objectives and design, describing the instrument architecture, and will present the performance prediction assessment.
In the framework of the Swarm Data, Innovation, and Science Cluster, precise science orbits (PSO) are computed for the Swarm satellites from on-board GPS observations. These PSO consist of a reduced-dynamic orbit, used to precisely geotag the magnetic and electric field observations, and a kinematic solution with covariance information, which can be used to determine the Earth's gravity field. In addition, high-resolution thermospheric densities are computed from on-board accelerometer data. Due to accelerometer instrument issues, these data are currently only available for Swarm-C. For Swarm-A, a first data set, limited to the early mission phase, will also become available soon. Therefore, GPS-derived thermospheric densities are computed as well. These densities have a lower temporal resolution of about 20 minutes, but are available for all Swarm satellites during the entire mission. The Swarm density data can be used to study the influence of solar and geomagnetic activity on the thermosphere.
We will present the current status of the processing strategy used to derive the Swarm PSO and thermospheric densities and show recent results. For the PSO, our processing strategy has recently been updated and now includes a more realistic satellite panel model for solar and Earth radiation pressure modelling, integer ambiguity fixing, and a screening procedure to reduce the impact of errors induced by ionospheric scintillation. Validation with independent Satellite Laser Ranging data shows that the Swarm PSO have a high accuracy, with an RMS of the laser residuals of about 1 cm for the reduced-dynamic orbits and slightly higher values for the kinematic orbits. For the thermospheric densities, our processing strategy includes a high-fidelity satellite geometry model and the SPARTA gas-dynamics simulator for gas-surface interaction modelling. Comparisons between Swarm densities and NRLMSIS model densities show noticeable scaling differences, which indicates the potential of the Swarm densities to contribute to thermosphere model improvement. The accuracy of the Swarm densities depends on the size of the aerodynamic signal. For low solar activity, the error in the radiation pressure modelling becomes significant, especially for the higher-flying Swarm-B satellite. As a next step, we plan to further improve the Swarm densities by including more sophisticated radiation pressure modelling.
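For context (and not as the authors' actual processing chain), the basic relation used to convert an aerodynamic acceleration into a thermospheric density estimate is rho = -2 m a_drag / (Cd A v_rel^2); a minimal sketch with illustrative Swarm-like numbers follows.

def density_from_drag(a_drag, mass, cd, area, v_rel):
    """Thermospheric density from the along-track aerodynamic acceleration.

    Standard drag relation: a_drag = -0.5 * rho * Cd * A / m * v_rel**2,
    solved for rho. All inputs in SI units.
    """
    return -2.0 * mass * a_drag / (cd * area * v_rel**2)

# Illustrative (not official) Swarm-like values: ~470 kg satellite, ~1 m^2
# cross-section, drag coefficient ~2.7, orbital speed ~7.6 km/s and an
# along-track deceleration of a few 1e-7 m/s^2.
rho = density_from_drag(a_drag=-3e-7, mass=470.0, cd=2.7, area=1.0, v_rel=7600.0)
print(f"density: {rho:.2e} kg/m^3")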
Swarm is the magnetic field mission of the ESA Earth Observation programme, composed of three satellites flying in a semi-controlled constellation: Swarm-A and Swarm-C flying as a pair and Swarm-B at a higher altitude. They carry a sophisticated suite of magnetometers and other instruments: the Absolute Scalar Magnetometer (ASM), the Vector Field Magnetometer (VFM), the Electric Field Instrument (EFI) and an accelerometer (ACC).
Since early in the mission, the goal for the Swarm lower pair has been to orbit in similar low-eccentricity orbits separated by a small difference in right ascension of the ascending node, i.e. in very close orbital planes, and separated along the orbit by a distance of between 4 and 10 seconds. This interval was identified as a compromise between the need to control the constellation, ensure a proper reaction time and avoid crossovers, and the need to keep the satellites close enough to correlate the science data.
Swarm-B instead orbits at a higher altitude (currently 507 km average altitude compared to 432 km for the lower pair) and, due to different orbital perturbations, its plane rotates at a different rate, although it is quasi-polar like those of Swarm-A and Swarm-C.
Due to the different rotation rates of the orbital planes, there is a periodic point in time when the planes come so close that they are almost co-planar. This exciting opportunity comes every 7.5 years and occurred between summer and winter 2021, with the closest alignment at the beginning of October 2021. In this phase, called the "counter-rotating orbits" phase, Swarm-B counter-rotates with respect to Swarm-A and Swarm-C in very close orbital planes.
That is why, in order to extract every ounce of science from this orbital configuration, it was decided to investigate and also tune the lower pair's along-track separation during the "counter-rotating orbits" phase.
The first phase, in the summer, was to decrease the separation from the [4; 10] second range to the lower end, i.e. as close as possible to 4 seconds.
Then, for a period of two weeks around the closest plane alignment, the along-track separation was decreased to only 2 seconds, corresponding to around 15 km. This configuration was applauded by the Swarm scientific community because of the science that will come out of this "pearls on a string" scenario, but it implied intensive work, planning, analysis and mitigation measures undertaken by the Flight Operations Segment at ESOC, by both the Flight Control and Flight Dynamics teams. It was paramount to keep the 2-second separation at all times and to react quickly to any anomaly that could jeopardize it or, even worse, pose a risk to the safety of the constellation.
In the third phase, the interest of scientists in studying Earth co-rotating phenomena was also taken into account: the lower pair separation was gradually and linearly increased from 4 to a maximum of 40 seconds until mid-December 2021, before the return to the original configuration.
The poster will describe not only the basics of the Swarm orbital configuration, but also the journey through the counter-rotating orbits phase in particular and the challenges of the closest 2-second separation, showing how it was possible, from a planning and operational point of view, to adjust the lower pair distance so as to achieve different scenarios that will provide diversified sensing input for the Swarm science community for years to come.
The Swarm mission provides thermosphere density observations derived from the GPS receiver data for all three satellites and, as a separate data product, from the accelerometer data for the Swarm A and C satellites. Deriving thermosphere density observations requires isolating the aerodynamic acceleration by removing the radiation pressure acceleration from the non-gravitational acceleration. Uncertainties in the radiation pressure modelling represent a significant error source at altitudes above 450 km, in particular when solar activity is low. Since the Swarm satellites spent several years at such high altitudes during periods of very low solar activity, improvements in radiation pressure modelling are expected to yield a substantially higher accuracy of the thermosphere density observations, in particular for the higher-flying Swarm B satellite.
In order to improve the radiation pressure modelling, it is crucial to account for the detailed geometry and the thermal radiation of the satellites. The former is achieved by augmenting the high-fidelity geometry model of the Swarm satellites with the thermo-optical properties of the surface materials. The augmented geometry models are then analysed using ray-tracing techniques to account for shadowing and multiple reflections (diffuse and specular), which is not the case for commonly used methods based on panel models. Another important factor which we want to address in this study is the sensitivity of the thermosphere density observations to errors in thermo-optical surface properties, i.e. errors in the coefficients for specular and diffuse reflection, and absorption, which are not accurately known and might change over time due to aging effects of the surface materials.
The thermal radiation can be calculated directly using the in-situ measurements from thermistors that monitor the temperature in a number of locations on the outer surfaces of the satellites. Whilst this is expected to give the most accurate results, it also offers the opportunity to optimize a recently developed thermal model of the satellite. The model consists of a set of panels that heat up by absorbing incoming radiation and cool down by emitting radiation. It can be optimized by adjusting its control parameters, which are the heat capacitance of the panels, the thermal conductance towards the inner satellite, and the internal heat generation from the electronics, batteries, etc. Such an optimised thermal model is expected to provide valuable insights for other missions, such as the CHAMP, GRACE, and GRACE-FO missions, for which thermistor measurements are not publicly available. While the positive effect on density observations is most pronounced at higher altitudes, we anticipate that at lower altitudes crosswind observations will benefit.
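For illustration only (not the project's model), the recoil acceleration from thermal emission of a flat, diffusely emitting panel can be estimated from a thermistor temperature as a = (2/3) * eps * sigma * T^4 * A / (m c), directed opposite to the panel normal; the panel properties below are placeholders.

import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 299792458.0          # speed of light, m/s

def thermal_recoil_acceleration(temp_k, emissivity, area_m2, mass_kg, normal):
    """Recoil acceleration of a flat, Lambertian-emitting panel.

    The emitted power per unit area is eps*sigma*T^4; for diffuse emission the
    associated momentum flux is 2/3 of that divided by c, directed along the
    outward normal, so the satellite is pushed in the -normal direction.
    """
    force = (2.0 / 3.0) * emissivity * SIGMA * temp_k**4 * area_m2 / C
    n = np.asarray(normal, dtype=float)
    return -force / mass_kg * n / np.linalg.norm(n)

# Placeholder panel: 1 m^2, emissivity 0.8, thermistor reading 300 K,
# 470 kg satellite, panel normal along +x (satellite frame).
a = thermal_recoil_acceleration(300.0, 0.8, 1.0, 470.0, normal=[1.0, 0.0, 0.0])
print(a)   # order of 1e-9 m/s^2, i.e. relevant at low solar activity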
In our presentation, we will show how to improve the radiation pressure modelling by (1) using the detailed geometry model of the Swarm satellites and (2) accounting for the thermal radiation. Further, we will determine the impact of radiation pressure mismodelling on the thermosphere density observations. This analysis could help resolve critical issues such as errors in Swarm B data (manifested by negative density observations), which are currently addressed by providing extra information about the orbit-mean density. Additionally, other missions such as CHAMP, GRACE, and GRACE-FO could benefit from a knowledge transfer, which will make a significant portion of the thermosphere observations more reliable.
Ever since the Swarm mission was launched in 2013, Swarm mission data have been produced systematically up to Level 2 (CAT2) within the ESA Archiving and Payload Data Facility (APDF). In parallel to nominal operations, the L1b and L2CAT2 processing algorithms undergo constant improvement, and new Instrument Processing Facility (IPF) versions are released whenever the Swarm Data, Innovation, and Science Cluster (DISC) team has approved stable algorithms. With every major IPF release, a complete reprocessing of the Swarm mission data is required before a new baseline can be published to the end user. This is carried out in a dedicated environment and in individual reprocessing campaigns. Since the start of operations, two successful reprocessing campaigns have been completed this way, and a third campaign is being executed to reprocess the full 8 years of mission data.
As the reprocessing of the full mission data is a computing resource intensive task, the reprocessing environment of the Swarm APDF is equipped with scalable processing nodes in a cluster streamlined for high load with parallel processing of the IPFs, optimized quality control and report generation for monitoring purposes.
Following the demands of the reprocessing campaigns, the IPF executables have been optimized for parallel operation by removing dependencies on previous day input and external licenses so that they can be scaled linearly in order to achieve the required throughput.
With a design that makes it scalable, configurable and robust, the APDF software additionally supports smooth and successful execution of the reprocessing.
The reprocessing environment makes use of up to 30 L1b Magnet processing instances and 110 L2CAT2 IPF instances in parallel, which are spread over 10 virtual machines in ESA ESRIN's cluster infrastructure.
This setup, in combination with the related system optimizations, achieves a very high throughput: three months of operational L1b data can be reprocessed in one day, as can one year of L2CAT2 data.
The overall success of the Swarm reprocessing campaigns can further be attributed to the close collaboration of all teams involved. The APDF system evolutions are based on the operations team's direct needs, which are formulated and communicated to the system maintainers in short communication loops following an agile method. This process, too, is supported by the underlying APDF software with its high configurability and overall robustness.
In conclusion, the Swarm reprocessing campaigns can serve as a role model for other missions when it comes to the cost-effective introduction of system changes and the effective execution of change procedures with only a small overhead.
The scaling of field-aligned current sheets (FACs) connecting different regions of the magnetosphere can be explored by multi-spacecraft measurements, both at low Earth orbit (LEO) and at high altitudes. Through their relation to the Region 1/Region 2 (R1/R2) systems and to (sub-)auroral boundaries (mapping to current distributions at the magnetopause, in the ring current and in the regions in between), such distributed current measurements can assist future combination with SMILE data and are also enhanced by added LEO coverage, such as is planned with NanoMagSat. Individual events sampled by higher-altitude spacecraft (e.g. Cluster, MMS), in conjunction with Swarm or other LEO satellites, show different FAC scale sizes. Large- and small-scale (MLT) trends in FAC orientation can also be inferred from dual-spacecraft measurements (e.g. the Swarm A and C spacecraft). Conjugate effects seen in ground magnetic signals (dH/dt, as a proxy for GICs) and by spacecraft (e.g. Cluster/Swarm) show that intense variations take place in the main phase of a geomagnetic storm (e.g. the cusp response) and during active substorms (e.g. driven by the arrival of bursty bulk flows, BBFs). The most intense dH/dt is associated with FACs driven by BBFs at geosynchronous orbit (via a modified substorm current wedge, SCW); direct demonstrations of dB/dt driven by BBFs at geosynchronous orbit have previously been rare. In situ ring current morphology can be investigated by MMS, THEMIS and Cluster, using the multi-spacecraft curlometer method, and linked to LEO signals via R2 FACs and their effect on the internal geomagnetic field. These in situ measurements suggest the ring current is a superposition of a relatively stable, outer westward ring current dominating the dawn side and closing banana currents due to a peak or trough of plasma pressure in the afternoon and night-side sectors (depending on geomagnetic activity). The transport relationship between these two banana currents via R2 FACs can be investigated with spacecraft at LEO.
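As background on the multi-spacecraft curlometer mentioned above, a minimal sketch of the standard reciprocal-vector estimate of curl(B), and hence of the current density J = curl(B)/mu0, from four-point measurements; the positions and fields below are synthetic test values, not mission data.

import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T m / A

def curlometer(positions, b_fields):
    """Estimate curl(B) and J from four-point measurements using reciprocal
    vectors of the spacecraft tetrahedron. positions in metres, b_fields in
    tesla; both arrays of shape (4, 3)."""
    r = np.asarray(positions, dtype=float)
    b = np.asarray(b_fields, dtype=float)
    curl_b = np.zeros(3)
    for a in range(4):
        p, q, s = [i for i in range(4) if i != a]
        normal = np.cross(r[q] - r[p], r[s] - r[p])
        k_a = normal / np.dot(r[a] - r[p], normal)   # reciprocal vector of vertex a
        curl_b += np.cross(k_a, b[a])
    return curl_b, curl_b / MU0                      # curl(B) [T/m], J [A/m^2]

# Synthetic example: B = (0, 0, alpha*y) gives curl(B) = (alpha, 0, 0) exactly.
alpha = 1e-12                                        # T per metre
pos = np.array([[0, 0, 0], [100e3, 0, 0], [0, 100e3, 0], [0, 0, 100e3]])
b = np.array([[0, 0, alpha * y] for _, y, _ in pos])
curl_b, j = curlometer(pos, b)
print(curl_b, j)                                     # ~[1e-12, 0, 0] T/m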
Geomagnetic daily variations at mid and low latitudes are generated by electric currents in the E-region of the ionosphere, around 110 km altitude. As part of the Swarm Level 2 project, we developed a series of global, spherical harmonic models of quiet-time, non-polar geomagnetic daily variations from a combination of Swarm and ground-based measurements. The latest model, Dedicated Ionospheric Field Inversion 6 (DIFI-6), was released in November 2021. It includes almost eight years of Swarm data, providing excellent local time, longitudinal and seasonal coverage, and was extensively tested and validated. DIFI-6 can be used to predict geomagnetic daily variations and their associated induced magnetic fields at all seasons, anywhere near the Earth's surface and at low-Earth-orbit altitudes, for latitudes within ±55 degrees. In a second phase of this project, we investigated the year-to-year variability of ionospheric currents in relation to internal magnetic field changes such as the slow movement and shape change of the magnetic dip equator. We used the DIFI algorithm to calculate models of non-polar geomagnetic daily variations over a three-year sliding window running through the CHAMP satellite era (2001-2009) and the Swarm era (2014-2021). The obtained models span almost two solar cycles and a period during which the main magnetic field intensity changed by as much as 5% in some locations. They confirm the main features previously observed in the DIFI models, including strong seasonal and hemispheric asymmetries and the anomalous behavior of the Sq current system in the American longitudinal sector. We also find that the total Sq current intensity might have decreased over twenty years in the American longitudinal sector. During the same time period, the dip equator moved northwest by about 500 kilometers. Whether or not both changes are related remains to be confirmed. Future satellite-based magnetic field data collection by Swarm and other low-Earth orbit missions such as, for example, NanoMagSat, will be key in improving our understanding and modeling of non-polar geomagnetic daily variations.
Machine learning (ML) techniques have been successfully introduced in the fields of Earth Observation, Space Physics and Space Weather, yielding highly promising results in modeling and predicting many disparate aspects of the Earth system. Magnetospheric ultra-low frequency (ULF) waves play a key role in the dynamics of the near-Earth electromagnetic environment and, therefore, their importance in Space Weather studies is indisputable. Magnetic field measurements from recent multi-satellite missions are currently advancing our knowledge of the physics of ULF waves. In particular, Swarm satellites have contributed to the expansion of data availability in the topside ionosphere, stimulating much recent progress in this area. Coupled with the new successful developments in artificial intelligence, we are now able to use more robust approaches for automated ULF wave identification and classification. Here, we present results employing various neural network (NN) methods (e.g. Fuzzy Artificial Neural Networks, Convolutional Neural Networks) to detect ULF waves in the time series of low-Earth orbit (LEO) satellites. The outputs of these methods are compared against other ML classifiers (e.g. k-Nearest Neighbors (kNN), Support Vector Machines (SVM)), showing a clear dominance of the NNs in successfully classifying wave events.
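As a simplified, hedged illustration of such a classifier comparison (not the authors' actual pipeline or data), the following Python sketch trains a small neural network alongside kNN and SVM baselines on placeholder feature vectors (standing in, e.g., for band-integrated wavelet power per time window) and compares their scores; all names and parameters are illustrative assumptions.

```python
# Illustrative sketch (not the authors' pipeline): compare a small neural network
# against kNN and SVM baselines for binary ULF wave / no-wave classification,
# assuming per-window spectral features have already been extracted.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))              # placeholder feature matrix (e.g. band powers)
y = (X[:, :4].sum(axis=1) > 0).astype(int)   # placeholder labels (wave event yes/no)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "F1 =", round(f1_score(y_te, model.predict(X_te)), 3))
```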
As part of the identical scientific payloads of the Swarm satellites, the electrostatic accelerometers are intended to measure the non-gravitational forces acting on each satellite, as needed for near-Earth space environment studies. Hybridized non-gravitational accelerations can be constructed using the GPS receiver data for the lower frequency range and the accelerometer data for the higher frequency range. Such a synergy was successfully realized and resulted in the calibrated non-gravitational along-track accelerations of the Swarm C satellite (Level 2 products ACCxCAL_2) for the full mission time, starting from February 2014. However, the Swarm A Level 2 accelerations have so far been released only for the first year of the mission, and the Swarm B accelerometer data are still unavailable. Nevertheless, the one-year overlap of the released Swarm C and Swarm A Level 2 accelerations allows, for the first time, the planned constellation benefits to be exploited for thermospheric studies.
Because of unexpected and intense data anomalies at Level 1B, considerable processing efforts are required to maintain the Level 2 accelerations at an acceptable quality level. Therefore, the processing of Swarm accelerations differs essentially from that of other missions. This presentation provides details on the processing algorithms and the data quality assessment as needed by users of the Swarm accelerometer data. Special attention is given to the analysis of anomalies triggered by external impacts from the environment and/or spacecraft micro-seismic events, possibly generated by post-launch mechanical damage to the hardware or other instrumental issues. The following data anomalies will be discussed: random and systematic abrupt bias changes (steps); regular discharge-like spikes, which are spatially correlated in the form of specific patterns of lines and spots; impulse noise and resonant harmonics in the electronics; temperature-induced slow bias changes; damages or signal inversions at the eclipse entries; non-nominal reference signal partitioning during the calibration maneuvers. With an improved understanding of the sensor behavior, the Swarm accelerometers collect valuable information as a technology demonstrator for future satellite missions.
Estimating the susceptibility and the depth to the bottom of the magnetic layer is an ill-posed problem. Therefore, assumptions about one of the parameters have to be made in order to estimate the other. Here, we apply a linearized two-step Bayesian inversion approach based on a Markov chain Monte Carlo sampling scheme to invert magnetic anomaly data over Australia, considering independent estimates of the bottom of the magnetic layer derived from heat flow data. The approach integrates the 'fractal' description used in spectral approaches via a Matérn covariance function, together with point constraints derived from heat flow data. In our inversion, we simultaneously solve for the susceptibility distribution and the thickness of the magnetic layer.
As input magnetic field, we combine the aeromagnetic data of Australia with the recent satellite magnetic model, LCS-1, using a regional spherical harmonic method based on a combination of an equivalent dipole layer and spherical harmonic analysis. The data are evaluated at various altitudes from 10 to 400 km in order to minimize local-scale features and to maximize sensitivity to the thickness of the magnetic layer. As constraints, we use estimates of the bottom of the magnetic layer based on measurements of geothermal heat flow and crustal rock properties. Here, we assume that the Curie isotherm coincides with the bottom of the magnetic layer. We systematically explore the effect of increasing model resolution and of the geothermal heat flow values, considering their spatial distribution, accuracy and quality. First results show that, if insufficient constraints are provided, the inversion cannot outperform simple interpolation. However, we also study how heat flow constraints from seismic tomography models can complement the geothermal heat flow constraints.
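For readers unfamiliar with the Matérn covariance used to encode the 'fractal' susceptibility statistics, the following minimal Python sketch evaluates it as a function of separation distance; the amplitude, length scale and smoothness values are illustrative assumptions, not those of the Australian inversion.

```python
# Minimal sketch of a Matérn covariance function; sigma2, rho and nu below are
# illustrative values only, not the parameters used in the study.
import numpy as np
from scipy.special import kv, gamma

def matern_cov(d, sigma2=1.0, rho=100e3, nu=1.5):
    """Matern covariance for separation distance d (same units as rho)."""
    d = np.atleast_1d(np.asarray(d, dtype=float))
    c = np.full(d.shape, sigma2)                 # covariance at zero lag
    nz = d > 0
    s = np.sqrt(2.0 * nu) * d[nz] / rho
    c[nz] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * s ** nu * kv(nu, s)
    return c

print(matern_cov([0.0, 50e3, 200e3]))
```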
Swarm is an ESA Earth Explorer mission launched in 2013 with the purpose of measuring the geomagnetic field and its temporal variations, the ionospheric electric fields and currents, as well as plasma parameters like density and temperature. The aim is to characterise these phenomena for a better understanding of the Earth’s interior and its environment.
The space segment consists of a constellation of three identical satellites in near-polar low orbits (Swarm Alpha, Bravo and Charlie) carrying a set of instruments to achieve the mission objectives: a Vector Field Magnetometer (VFM) and an Absolute Scalar Magnetometer (ASM) for collecting high-resolution magnetic field measurements; three star trackers (STR) for accurate attitude determination; a dual-frequency GPS receiver (GPSR) for precise orbit determination; an accelerometer (ACC) to retrieve measurements of the satellite’s non-gravitational acceleration; an Electric Field Instrument (EFI) for the plasma and electric field related measurements, composed of two Langmuir Probes (LPs) and two Thermal Ion Imagers (TIIs).
The science data derived from the instruments on board Swarm are processed by the Swarm Level 0, Level 1A and Level 1B operational processors operated by the Swarm Ground Segment. The generated products are continuously monitored and improved by the ESA/ESRIN Data Quality Team of the Swarm Data, Innovation, and Science Cluster (DISC).
This poster focuses on presenting the current status and performances of the EFI instruments and related L1B PLASMA data products.
The latest data validation activities and results are presented, along with several in-orbit tests performed to improve data quality and near-future validation and calibration plans. In particular, the poster presents the most significant payload investigations, from the beginning of the mission up to the most recent initiatives aimed at improving the scientific quality of Swarm data.
Moreover, this work presents potential long-term improvements concerning both instrument and processor performance, including current studies and tests carried out to identify the best way forward for the evolution of the mission.
Changes in the global ocean circulation driven by winds and density gradients produce, via motional induction, time-varying geomagnetic signals. On long length and time scales these signals are hidden beneath larger core-generated signals but, because their sources lie at Earth's surface, they may in principle be detectable at sufficiently short length scales on monthly to interannual timescales. Such signals would provide useful information related to ocean circulation and conductivity variations. We explore the prospects for retrieving these signals using forward simulations of the magnetic signals generated by an established ocean circulation model (the ECCO model v4r4), realistic ocean, sediment, lithosphere and mantle electrical conductivities, and the ElmgTD time-domain numerical scheme for solving the magnetic induction equation, including both poloidal and toroidal parts. We show that, considering 4-monthly averaged signals, the oceanic magnetic secular acceleration beyond spherical harmonic degree 10 may reach detectable levels. The impact of realistic data processing and time-dependent field modelling strategies on the retrieved synthetic ocean signal will be described. Progress on synthetic tests including both core and oceanic sources will be reported. The benefit of improved temporal coverage by future geomagnetic missions, particularly the proposed NanoMagSat mission, together with the importance of suitable representations of the oceanic signal, will be described.
Researchers making use of Swarm data products face several challenges. These range from discovering, accessing and comprehending an appropriate dataset for their research question, to forward evaluation of various geomagnetic field models, to combining their analysis with external data sources. To help researchers embarking on this journey and to facilitate more open collaboration, Swarm DISC is defining and building new *tools* and *services* that build upon the existing data retrieval and visualisation service, *VirES for Swarm*.
Given Swarm's large data product portfolio and diverse user community, there is no "one size fits all" solution to provide an analysis platform. A sustainable, modular framework of smaller tools is needed, leveraging the wider open source ecosystems as much as possible. To answer this, we are developing Python packages that can be used by researchers to write their own reproducible research code using well-tested algorithms, Jupyter notebooks that guide them in this process, and web dashboards that can give rapid insights. These are supported by the *Virtual Research Environment (VRE)* that provides the free pre-configured computational environment (JupyterHub) where such code can be executed.
The Python package *viresclient* provides the connection between the VirES API and the Python environment, delivering customised data on demand with a portable code snippet. On top of this, we are building Python tools that can apply specific analyses (such as cross-correlation of measurements between spacecraft for studies of field-aligned currents and ionospheric structures), which can be used by researchers in an open-ended way. In-depth documentation and tutorials are critical to make these tools accessible and useful, while an open source and community-involved focus should bring them longevity.
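A minimal usage sketch of viresclient is shown below; the collection, model, time window and sampling step are illustrative choices (an account/access token is assumed to be configured), not a prescription of a specific workflow.

```python
# Minimal, hedged viresclient sketch: fetch a short segment of Swarm Alpha
# low-rate magnetic data together with server-side core field model values.
import datetime as dt
from viresclient import SwarmRequest

request = SwarmRequest()
request.set_collection("SW_OPER_MAGA_LR_1B")     # Swarm Alpha MAG LR product (illustrative)
request.set_products(
    measurements=["F", "B_NEC"],                 # scalar field and NEC vector
    models=["CHAOS-Core"],                       # model evaluated on the server
    sampling_step="PT10S",                       # down-sample to 10 s
)
data = request.get_between(dt.datetime(2020, 1, 1), dt.datetime(2020, 1, 1, 3))
df = data.as_dataframe()                         # or data.as_xarray()
print(df.head())
```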
In this presentation, we report the current status of Swarm DISC activities in relation to tools and services, and provide guidance on how to navigate them and give feedback. Please also see our poster "VirES & VRE for (not only) Swarm" (Martin Pačes & Ashley Smith).
Swarm is the fifth mission in ESA’s fleet of Earth Explorers consisting of three identical satellites launched on 22 November 2013 into a near-polar, circular orbit. The mission studies the magnetic field and its temporal evolution providing the best-ever survey of the geomagnetic field and near-Earth space environment through precise measurements of the magnetic signals from Earth’s core, mantle, crust and oceans, as well as from ionosphere and magnetosphere.
Two satellites (Swarm Alpha and Swarm Charlie) form the lower pair flying side-by-side with a ~1.4° separation in longitude at an altitude decaying from ~460 km and at 87.4° inclination angle while the other satellite (Swarm Bravo) is cruising at a higher orbit with an altitude decaying from ~510 km and an inclination of 87.7°.
The three spacecraft are equipped with the same set of instruments: a Vector Field Magnetometer (VFM) for high-precision measurements of the magnetic field vector, an Absolute Scalar Magnetometer (ASM) to measure the magnitude of the magnetic field and to calibrate the VFM, a Star Tracker (STR) assembly for attitude determination, an Electric Field Instrument (EFI) for plasma and electric field characterization, a GPS Receiver (GPSR) and a Laser Retro-Reflector (LRR) for orbit determination and an Accelerometer (ACC) to measure the Swarm satellite’s non-gravitational acceleration in its respective orbit.
In this contribution we present an overview of the status of the Swarm ASM, VFM and STR instruments after seven years of operations. We also focus on the improvements recently introduced in the L1B magnetic data processing chain, as well as on payload investigations and Cal/Val activities conducted to improve science quality.
Finally, this poster provides an outlook on long-term future evolutions of the data processing algorithms, with a particular focus on data quality improvements and their expected impact on scientific applications, and a roadmap for future implementations.
VirES for Swarm (https://vires.services) started as an interactive data visualization and retrieval interface for the ESA Swarm mission data products. It includes tools for studying various geomagnetic models by comparing them to the Swarm satellite measurements at given space weather and ionospheric conditions. It also allows conjunctions of the Swarm spacecraft to be located.
The list of provided Swarm products has been growing over time and currently includes the MAG (both LR and HR), EFI, IBI, TEC, FAC, EEF, IPD, AEJ, AOB, MIT, IPP and VOB products as well as the collection of L2 SHA Swarm magnetic models, all synchronized to their latest available versions. Recently, the list of products has also been extended with calibrated magnetic field measurements from the CryoSat-2, GRACE and GRACE-FO missions, and 1 s, 1 min and 1 h measurements from the INTERMAGNET ground observatories; the VirES service thus no longer serves Swarm products only.
VirES provides access to the Swarm measurements and models either through an interactive visual web user interface or through a Python-based API (machine-to-machine interface). The latter allows integration of the users' custom processing and visualization.
The API allows easy extraction of data subsets of various Swarm products (temporal, spatial or filtered by ranges of other data parameters, such as, e.g., space weather conditions) without needing to handle the original product files. This includes evaluation of composed magnetic models (MCO, MLI, MMA, and MIO) and calculation of residuals along the satellite orbit.
The Python API can be exploited in the recently opened Virtual Research Environment (VRE), a JupyterLab-based web interface allowing processing and visualization scripts to be written without the need for any software installation. The VRE also comes with pre-installed third-party software libraries (processors and models) as well as generic Python data handling and visualization tools. A rich library of tutorial notebooks has been prepared to ease the first steps and make it a convenient tool for a broad audience ranging from students and enthusiasts to advanced scientists.
To make the Swarm products accessible to a larger scientific community, VirES also serves data via the Heliophysics API (HAPI, https://github.com/hapi-server/data-specification), a community specification defining a unified interface for the retrieval of time-series data.
Our presentation focuses on the evolution of the VirES & VRE services and on the most recent enhancements.
The plasma of the ionosphere is abundant with small-scale (100-200 km) irregularities that may result in the distortion and loss of radio signals from GNSS satellites, and thus in the corruption of ground-based GPS measurements. The plasma irregularities are accompanied by scale-dependent turbulent fluctuations in the magnetic field. Within the framework of the recently finished EPHEMERIS project, we carried out quasi-real-time monitoring of possible occurrences of nonlinear magnetic field irregularities along the orbits of the Swarm satellite triplet, based on statistical analysis. The working assumption was that intermittent turbulent plasma fluctuations manifest themselves in the non-Gaussian behaviour of the probability density functions (PDFs) of the corresponding physical parameters (magnetic field, plasma density, temperature, etc.).
In the presentation we analyse the temporal and spatial distribution of the nonlinear irregularities of the high-pass filtered field-aligned (i.e., compressional) and transverse magnetic field fluctuations. It is shown that the most intense irregularities in the transverse field appear near the auroral oval boundaries, as well as close to the plasmapause. On the other hand, it is also revealed that compressional and transverse fluctuations exhibit intermittent behaviour around the dip equator as well, symmetrically near 10° latitude in both hemispheres. The latter finding is a consequence of equatorial spread F (ESF) or equatorial plasma bubble (EPB) phenomena. The study also concerns the space weather consequences of the detected magnetic field irregularities. First, we investigate the correlation between GPS signal loss events experienced onboard the Swarm satellites and the irregular state of the ionospheric plasma. Secondly, we study the influence of irregularities on GNSS radio signal distortions via the processing of amplitude and phase scintillation records of ground GNSS stations. It is shown that radio signals are clearly distorted by the magnetic irregularities detected in the equatorial region, while this coincidence cannot be unambiguously demonstrated near the plasmapause and the auroral oval boundaries. We conjecture that these contrasting findings can be explained by the different origins of the observed magnetic irregularities at low and high latitudes: plasma depletions near the equator versus field-aligned currents or plasma waves in the high-latitude region.
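As a hedged illustration of how non-Gaussian, intermittent behaviour can be quantified (not the EPHEMERIS processing chain itself), the following Python sketch computes the excess kurtosis of scale-dependent increments of a high-pass filtered, placeholder 1 Hz magnetic field component; a Gaussian signal yields values near zero, while intermittent turbulence yields strongly positive values.

```python
# Illustrative non-Gaussianity check on a placeholder 1 Hz magnetic field
# component: high-pass filter, form increments at several scales, and compute
# the excess kurtosis of each increment distribution.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import kurtosis

fs = 1.0                                            # 1 Hz sampling assumed
t = np.arange(0, 3600, 1.0 / fs)
b_field = np.random.default_rng(1).normal(size=t.size)   # placeholder field component

b_hp = filtfilt(*butter(4, 0.05, btype="highpass", fs=fs), b_field)  # keep > 0.05 Hz
for tau in (1, 4, 16):                              # increment scales in samples (= seconds)
    db = b_hp[tau:] - b_hp[:-tau]
    print(f"scale {tau:>3d} s: excess kurtosis = {kurtosis(db):.2f}")
```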
Satellites of the ESA Swarm mission carry Absolute Scalar Magnetometers (ASM) that nominally provide 1 Hz scalar data of the mission and allow the calibration of the relative vector data independently provided by the VFM fluxgate magnetometers also on board. Both the 1 Hz scalar data and the VFM calibrated vector data are being distributed as the nominal L1b magnetic data of the mission. ASM instruments, however, also provide independent 1 Hz experimental self-calibrated ASM-V vector data. More than seven years of such data have been produced on both the Alpha and Bravo Swarm satellites since the launch of the mission in November 2013. As we will illustrate, having recently undergone a full recalibration, these data have now been substantially improved, correcting for previously identified systematic issues. They allow the construction of very high quality global geomagnetic field models that compare extremely well with models built using nominal L1b data (to within less than 1 nT RMS at Earth’s surface, 0.5 nT at satellite altitude). This demonstrates the ability of the ASM instruments to operate as a stand-alone instrument for advanced geomagnetic investigations. Having been fully validated, these ASM-V experimental data are now already being distributed to the community upon request (see Vigneron et al., EPS, 2021, https://doi.org/10.1186/s40623-021-01529-7 and https://swarm.ipgp.fr/).
Since Swarm Alpha and Bravo each still have a spare redundant (cold-redundancy) ASM on board, and the currently operating ASMs on both satellites are in good shape with no sign of ageing, the ASM instruments are precious assets for allowing many more years of both nominal 1 Hz scalar data and experimental ASM-V vector data to be acquired by Swarm Alpha and Bravo in the future, offering the possibility to continue monitoring the field for many more years, even in the event that the VFM instruments should face issues. Furthermore, the now-demonstrated performance of the ASM instrument running in vector mode fully validates its operating mode in space, on which a new miniaturized version of the instrument, known as the Miniaturized Absolute Magnetometer, is also based; this instrument can operate on nanosatellites and is currently planned to be flown as part of the payload of the NanoMagSat constellation proposed as an ESA Scout NewSpace Science mission.
This submission discusses Swarm data products relevant for space weather monitoring delivered to ESA's payload data ground segment (PDGS) by the GFZ German Research Centre for Geosciences through the ESA Swarm Data, Innovation, and Science Cluster (DISC) activities. These Swarm products address phenomena in the magnetosphere-ionosphere-thermosphere system and comprise the auroral electrojet and auroral boundaries (Swarm-AEBS, https://earth.esa.int/eogateway/activities/swarm-aebs) and the plasmapause-related boundaries in the topside ionosphere (Swarm-PRISM, https://earth.esa.int/eogateway/activities/plasmapause-related-boundaries-in-the-topside-ionosphere-as-derived-from-swarm-measurements) product families derived from Swarm in-situ measurements. They include information on latitudinal profiles, peak current densities and boundaries of the auroral electrojet, as well as indices locating the plasmapause, the boundary of the plasmasphere. The ongoing Swarm DISC project on topside ionosphere radio observations from multiple low Earth orbit (LEO) missions (TIRO, https://earth.esa.int/eogateway/activities/tiro) will also deliver space weather related products from the CHAMP (2000-2010), GRACE (2002-2017) and GRACE-FO (since 2018) missions. In combination, these products form long-term series (two solar cycles) of GPS-derived total electron content (TEC) from CHAMP, GRACE and GRACE-FO and in-situ electron density from the K-band ranging instrument (KBR) of GRACE and GRACE-FO. Products from the CHAMP and GRACE missions will be delivered as historical data and from the GRACE-FO mission as operational products.
The CASSIOPE satellite (CAScade, Smallsat and IOnospheric Polar Explorer, a made-in-Canada small satellite of the Canadian Space Agency) was launched in September 2013 and, with its ePOP (Enhanced Polar Outflow Probe) payload, now acts as an additional satellite of the Swarm constellation, as Swarm-Echo. The focus here is on the data from the MGF (magnetic field) instrument. The MGF group, led by David Miles, is preparing a new, calibrated, full data set in a Swarm L1b CDF look-alike format. The three test periods of MGF 1 Hz data (for 2016, 2019 and 2021), delivered in late summer 2021, serve as example periods, distinguished by the failures of a first and a second attitude control wheel. This poster evaluates the quality and features of the new MGF data sets becoming available and compares data from quiet and disturbed, older and newer periods. A particular challenge is the influence of the satellite itself, which has not yet been charted in detail, together with the status of the crucial attitude control of the satellite after the failure of the second wheel. The first task is a mostly technical look into the properties and quality of the available data (in focus are the distribution of the given flags and their link to data quality, the stage of calibration, and the housekeeping records). With the help of the dual-magnetometer MGF configuration (the sensors are mounted at different distances from the satellite body on a, albeit short, boom), the stray field sources of the satellite itself can be probed; in particular, the power system (battery current, solar cell currents and voltages) appears significant. In a second task, the limits of MGF data usability are explored, in combination and comparison with other Swarm magnetic field readings, for dedicated inversion tasks, presumably helping to cover local times, for example to support characterizing the external field or short-period core-field estimations. This may be a valuable survey to establish the usability of the data set for further scientific purposes.
The Earth's magnetic field changes continuously both spatially and temporally. A measurement of the magnetic field at or above the Earth's surface is the summation of numerous different sources, each with a different spatial and temporal behaviour. On short time scales of seconds to months, the changes are driven primarily by the interaction of the ionosphere and magnetosphere with the solar wind. Seasonal changes are also influenced by the variation of the tilt of the magnetic field with respect to the ecliptic plane. On longer timescales of years to centuries, changes of the core field (known as secular variation, SV) alter the morphology of the observed field at the surface.
With the plethora of Swarm satellite data, it is now possible to examine field sources in detail on a global basis. However, in contrast to a ground observatory where time series can be produced at a fixed location, allowing the time change of the field to be deduced precisely, the orbital velocity of a satellite (~8 km/s at 500 km altitude) makes source separation more difficult as measurements are a combination of both spatial and temporal variations of the field. This is often addressed by using a small subset of the data and modelling the expected geophysical extent of each source in space or time, or both. For example, main field models provide a large spatial scale representation with a smoothed time dependence typically fitted to six-monthly splines. However, such modelling approaches do not capture the more rapid variations of the core field, making it more difficult to robustly detect features such as geomagnetic jerks in satellite data compared to ground observatory data. Such rapid processes are believed to hold vital new information regarding the behaviour of the outer core.
Geomagnetic Virtual Observatories (GVOs) are a method for processing magnetic satellite data in order to simulate the observed behaviour of the geomagnetic field at a static location. As low-Earth orbit satellites move very quickly but have an infrequent re-visit time to the same location, a trade-off must be made between spatial and temporal limits; for the Swarm mission, time windows of one to four months and a radius of influence of 700 km are typically chosen.
We build a global network of geomagnetic main field time series derived from magnetic field measurements collected by satellites, with GVOs placed at 300 approximately equally spaced locations, at the mean satellite altitude. GVO time series are derived by fitting local Cartesian potential field models to along-track and east-west sums and differences of data collected within a radius of 700 km of each grid point, over a given time period. For the Swarm mission, two Level 2 data products are now available: (a) time series of 'Observed Field' GVOs, where all observed sources contribute to the estimated values, without any data selection or correction, and (b) time series of 'Core Field' GVOs, where additional data selection and external field model corrections are applied.
These products are derived at one- and four-monthly sampling. We focus on the de-noising that is carried out on the one-monthly data set, the aim being to reduce the contamination due to magnetospheric and ionospheric signals, and local time (LT) sampling biases. It has been found that the secular variation of residuals of GVO time series data at a single location will be strongly correlated with its neighbours due to the influence of large-scale external sources and the effect of local time precession of the satellite. Using Principal Component Analysis (PCA) we can remove signals related to these noise sources to better resolve internal field variations on short timescales. This reduces the negative effects of using a time bin shorter than the local time precession rate of the orbit in terms of LT bias, improving the temporal and spatial resolution of more rapid SV. The PCA also allows the use of more data to build each GVO sample, accounting for external signals without the need for stringent data selection, a useful feature as there is a minimum number of data needed to stably resolve a local cubic potential in a given spatial and temporal GVO bin size. We describe the process developed as part of the ESA Swarm Level 2 GVO product, and also the application of this method to GVO series derived from observations of the Oersted, CHAMP and CryoSat-2 missions.
This method can be used on other magnetic missions or those with ESA platform magnetometers. Our denoised GVO data set covers November 2013 to 2021 for Swarm, and has been extended for Ørsted 1999 to 2005, for CHAMP 2000 to 2010, and for CryoSat-2 2010 to 2018.
In addition, the methodology can be used to model the improvements possible using additional satellite missions such as NanoMagSat. The availability of data from a wider range of local times, along with more rapid repeat periods, allows denser GVO grids and higher cadences, for example reducing the sampling interval from 4.2 months to three weeks. This would allow very rapid core signals to be identified in a more robust manner, broadening the extent to which we can probe the outer core while relying on Swarm as a backbone that ensures absolute accuracy over time.
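To make the PCA-based de-noising idea concrete, the following minimal Python sketch (with placeholder arrays standing in for the real one-monthly GVO secular-variation residuals, not the operational Level 2 code) removes the leading principal components that are common to all sites and therefore likely dominated by external and local-time sampling signals.

```python
# Hedged sketch of PCA de-noising: subtract the leading principal components
# of the stacked GVO secular-variation residuals (data minus core model).
import numpy as np
from sklearn.decomposition import PCA

# residuals: (n_epochs, n_gvo * 3) array; random placeholders used here.
residuals = np.random.default_rng(2).normal(size=(96, 300 * 3))

pca = PCA(n_components=5)                 # number of "noise" modes is a user choice
scores = pca.fit_transform(residuals)     # temporal amplitudes of the common modes
noise = scores @ pca.components_          # spatially correlated noise reconstruction
denoised = residuals - noise              # cleaned SV residual series
print("variance removed: %.1f%%" % (100 * pca.explained_variance_ratio_.sum()))
```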
Launched on 22 November 2013 by the European Space Agency (ESA), the three Swarm satellites were designed in their original configuration to monitor and understand the geomagnetic field and the state of the ionosphere and magnetosphere. In 2017, for the first time, some pre- and post-earthquake magnetic field anomalies recorded by the Swarm satellites were revealed on the occasion of the 2015 Nepal M7.8 earthquake. Interestingly, the cumulative number of satellite anomalies behaved like the cumulative number of earthquakes, following the so-called S-shape, providing heuristic evidence of the lithospheric origin of the satellite anomalies (De Santis et al., 2017; https://doi.org/10.1016/j.epsl.2016.12.037). Following the same approach, other promising results were obtained for 12 case studies in the earthquake magnitude range 6.1-8.3, investigated within the SAFE (SwArm For Earthquake study) project, funded by ESA and carried out by INGV (with Planetek) (De Santis et al., 2019a; https://doi.org/10.3390/atmos10070371). In 2019, almost five years of Swarm magnetic field and electron density data were analysed with a Superposed Epoch and Space approach and correlated with major worldwide M5.5+ earthquakes (De Santis et al. 2019b; https://doi.org/10.1038/s41598-019-56599-1). The analysis confirmed the correlation between satellite anomalies and earthquakes beyond any reasonable doubt, by means of a statistical comparison with random simulations of anomalies. It also confirmed the Rikitake (1987) law, initially proposed for ground data: the larger the magnitude of the impending earthquake, the longer the precursory time of anomaly appearance in the ionosphere as seen from satellite. Furthermore, we demonstrated in several case studies (e.g. Akhoondzadeh et al. 2019; https://doi.org/10.1016/j.asr.2019.03.020; De Santis et al. 2020; https://doi.org/10.3389/feart.2020.540398) that the integration of Swarm data with other kinds of measurements from ground, atmosphere and space (e.g. CSES data) reveals a chain of processes before the mainshocks of many seismic sequences. A review of the above results together with some new ones will be presented.
We present new results on the extraction of magnetic signals due to several tidal constituents, obtained by analyzing the most recent Swarm data in combination with data from past satellite missions. As we obtain more magnetically quiet data and as better models of the core, crust and magnetospheric field components become available, improvements in resolution and in the signal-to-noise ratio are anticipated for tidal magnetic signals, enhancing the sensitivity to the electrical conductivity of the oceanic upper mantle. We show that the extraction of the weaker signals becomes feasible by utilizing longer time series and by including field gradients, which help filter out small-scale noise. We also evaluate the added value of CryoSat-2 and GRACE-FO platform magnetometer data.
A model-backfeed scheme to optimize InSAR deformation time series estimation
Bin Zhang, Ling Chang, Alfred Stein
Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Hengelosestraat 99, 7514AE Enschede, The Netherlands
InSAR deformation time series estimation is highly dependent on the outcome of the spatio-temporal phase unwrapping and the correctness of the pre-defined deformation time series model. When assuming temporal smoothness, a linear function of time can be adopted for deformation time series modeling, which can facilitate phase unwrapping. This assumption is suited to Constantly Coherent Scatterers (CCS) that have a strictly linear behavior over time. Using such a simple linear model, however, we may over- or underestimate deformation parameters, such as the deformation velocity of CCS that show nonlinear behavior. To address this issue, we designed a new scheme that optimizes deformation time series estimation. It iteratively re-introduces the best deformation model of every CCS, as determined by Multiple Hypothesis Testing (MHT), into phase unwrapping, and includes both linear and nonlinear canonical functions. We name our new scheme a model-backfeed (MBF) scheme.
The MBF scheme starts with post-InSAR deformation time series modeling. The InSAR deformation time series is generated using a standard time series InSAR method, such as Persistent Scatterer Interferometry (PSI). Once a number of potential nonlinear canonical functions were built as an extension to the linear function, we applied MHT to determine the best deformation model. The variance-covariance matrix of the deformation estimators was obtained at every CCS. Next, we iteratively replaced the simple linear model with this best model during phase unwrapping, and estimated the deformation parameters.
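As a simplified stand-in for the model selection step (the scheme uses Multiple Hypothesis Testing; here the Bayesian Information Criterion over least-squares fits is used purely for illustration), the following Python sketch chooses between a linear and two nonlinear canonical deformation functions for a synthetic CCS time series.

```python
# Illustrative model selection between canonical deformation functions using
# least squares + BIC; not the MHT procedure of the MBF scheme itself.
import numpy as np

def fit_bic(t, d, design):
    A = np.column_stack(design)
    x, *_ = np.linalg.lstsq(A, d, rcond=None)
    rss = np.sum((d - A @ x) ** 2)
    n, k = len(d), A.shape[1]
    return n * np.log(rss / n) + k * np.log(n)    # Bayesian Information Criterion

t = np.linspace(0, 10, 60)                                                    # years
d = -3.0 * t - 0.8 * t**2 + np.random.default_rng(3).normal(0, 1.0, t.size)  # synthetic deformation (mm)

candidates = {
    "linear":            [np.ones_like(t), t],
    "quadratic":         [np.ones_like(t), t, t**2],
    "linear + seasonal": [np.ones_like(t), t, np.sin(2*np.pi*t), np.cos(2*np.pi*t)],
}
bics = {name: fit_bic(t, d, cols) for name, cols in candidates.items()}
print("selected model:", min(bics, key=bics.get))
```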
We illustrated our method with a study on surface subsidence of the Groningen gas field in the Netherlands between 1995 and 2020, using 32 ERS-1/2, 68 Envisat, 82 Radarsat-2, and 13 ALOS-2 images. The results show that the cumulative maximum surface subsidence has been up to 25 cm over the past 25 years in response to local oil/gas extraction activities [1]. They also show nonlinear behavior of some CCS. Based on two quality indicators, we showed that the ensemble coherence values increased by 10-33% and the spatio-temporal consistency values for MBF decreased by 2-20% as compared with a standard InSAR time series analysis.
We conclude that the model-backfeed scheme can mitigate phase unwrapping errors. It can also obtain better phase unwrapping parameters than the standard InSAR time series method.
[1] Zhang, B., Chang, L., & Stein, A. (2021). A model-backfeed deformation estimation method for revealing 25-year surface dynamics of the Groningen gas field using multi-platform SAR imagery. (Under review).
Earthquake risk is a global-scale phenomenon that exposes human life to danger and can cause significant damage to the urban environment, increasing globally more or less in direct proportion to exposure in terms of the population and the human-built environment. Earth observation (EO) science plays an important role in operational damage management, showing the most affected areas on a large scale and in a very short time, thereby supporting decision-making processes more effectively. This can be done by combining the geodetic information with geospatial data to generate a Geospatial Intelligence (GEOINT) product, i.e. the organization of all the available geographical information on the area of interest.
In the morning (09:17 EEST) of September 27, 2021, a strong M=5.8 earthquake with a 10 km focal depth (35.1430 N, 25.2690 E) struck the area of the town of Arkalochori, Crete, ~22 km southeast of the city of Heraklion. Several aftershocks followed over the next few days, the strongest being that of September 28, 2021 (07:48 EEST), M=5.3 with an 11 km focal depth (35.1457 N, 25.2232 E), according to the Institute of Geodynamics of the National Observatory of Athens (http://www.gein.noa.gr/en/). The main earthquake caused extensive damage to numerous buildings, including homes and schools, rendering many of them unsafe to use in the impacted region. Some people were injured, one person lost his life, and others became homeless.
This study, performed operationally at the time of the Arkalochori, Crete earthquake, aims at developing a useful Geospatial Intelligence operational tool for the impact assessment of that earthquake. This was carried out by retrieving the ground deformation information from co-seismic Differential SAR Interferometry (DInSAR) products and then combining it with infrastructure-related geospatial data. The developed tool was available in the days following the event to be used by the stakeholders (e.g. emergency responders, scientists, civil protection, etc.).
For the geodetic part of this study, the following data were used: (i) two Sentinel-1 SAR SLC (Single-Look-Complex) IW (Interferometric Wide Swath) images in ascending (master 24/09, slave 29/09) and descending (master 18/09, slave 30/09) geometry, acquired before and after the earthquake event, in order to generate co-seismic interferometric pairs; and (ii) a digital elevation model (DEM), SRTM-3 sec (90 m), of the study area. ESA Copernicus Sentinel-1 SLC satellite images are openly available within a few hours of their acquisition from the Copernicus Open Access Hub platform (URL: https://scihub.copernicus.eu/). The processing of the Sentinel-1 SLC images was performed with the ENVI SARscape software.
The generation of the geodetic products is separated into three main steps. The first step is the pre-processing of the Sentinel-1 SAR SLC images, which includes the orbit correction, the burst selection, and the co-registration of the master and slave images in ascending and descending geometry, respectively. The second step, the main processing of the interferometric pairs in each geometry, consists of coherence and wrapped interferogram generation, interferogram flattening using the SRTM-3 sec DEM, adaptive filtering, phase unwrapping using the Minimum Cost Flow (MCF) method, and finally the phase-to-displacement conversion in the Line-Of-Sight (LOS) and geocoding. A DInSAR displacement map in LOS only measures the path length difference between the Earth's surface and the satellite. In order to estimate the vertical (up-down) and horizontal (east-west) deformation, a third step of displacement decomposition was carried out. In this step, the ascending and descending LOS displacement products were used to recover the true movements along the vertical and horizontal axes. The final products were exported in GeoTIFF format for further analysis of ground deformation and damage estimation in correlation with the urban fabric and infrastructure in the GIS environment.
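A hedged sketch of the displacement-decomposition step is given below (not the SARscape implementation): the ascending and descending LOS displacements are combined into vertical and east-west motion by solving a small linear system per pixel; the incidence angles and sign conventions are illustrative assumptions, and the weak north-south sensitivity of Sentinel-1 is neglected.

```python
# Illustrative LOS decomposition: two observation geometries, two unknowns
# (east, up). Incidence angles and signs are assumptions for this sketch only.
import numpy as np

theta_asc, theta_dsc = np.radians(39.0), np.radians(34.0)   # illustrative incidence angles

# rows: [east coefficient, up coefficient] for each geometry
G = np.array([[-np.sin(theta_asc), np.cos(theta_asc)],      # ascending (assumed east-looking)
              [ np.sin(theta_dsc), np.cos(theta_dsc)]])     # descending (assumed west-looking)

d_los = np.array([-0.05, 0.12])     # example LOS displacements (m): ascending, descending
d_east, d_up = np.linalg.solve(G, d_los)
print(f"east: {d_east:+.3f} m, up: {d_up:+.3f} m")
```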
Other datasets regarding infrastructure are vector points, polylines, and polygons, either ready-to-use from various open sources or digitized from available information. These include Airports, Hospitals – Health Centers, Schools, Cultural-Archaeological sites, Urban Fabric, Roads, Bridges, and Dams. The software utilized for the GIS processing is the commercial ESRI ArcGIS Pro 2.8, and the Geospatial Intelligence application was developed as a Web App with ESRI ArcGIS Online and its WebApp Builder. After importing the ground deformation products into the GIS software, this information was combined with the already prepared vector datasets using the appropriate tools, leading to the creation of the new vector Geospatial Intelligence products. These products were then uploaded to the cloud-based ESRI ArcGIS Online to create the web map needed for the operational tool. With the WebApp Builder, the app was developed and then combined with the web map.
Finally, the results of this study show subsidence of up to 20 cm in the vertical (up-down) displacement, while the horizontal (east-west) displacement shows eastward movements of up to 13 cm and westward movements of up to 6 cm. Generally, the subsidence reaches its maximum values in the area around the town of Arkalochori. Regarding the web app tool, the integration of co-seismic deformation maps and geospatial data, including the exposure datasets, into a tool for post-disaster infrastructure assessment can be very useful. It contributes to the identification of the most severely impacted areas and the prioritization of in-situ inspections. The use of the proposed tool for on-site inspections in the affected area around Arkalochori showed a good match between the "red" area of the co-seismic deformation map and the locations of the large number of identified extensive damages to structures. It also contributed to the effective and quick inspection of roadway networks, focusing on the bridges identified in the Geospatial Intelligence tool; the inspected bridges, however, were found to be in good condition, with no seismic damage. In conclusion, this Geospatial Intelligence web app can be used further for more analytical research, decision making, and other uses, and can also be enhanced with more datasets and the integration of specialized information.
The ESRI ArcGIS Online Web App that was developed is open and accessible from any portable device or PC via any web browser at the following link: https://learn-students.maps.arcgis.com/apps/webappviewer/index.html?id=339cd0b5020f40cb93607d4c4d519cea
Acknowledgments
We would like to thank Harris Geospatial local dealer Inforest Research o.c. for the access to ENVI SARscape as well as ESRI for the Learn ArcGIS Student Program license.
Landslides are defined as the movement of rock, debris, or earth down a slope, which may cause numerous fatalities and significant infrastructure damage (Cruden & Varnes, 1996). Therefore, it is essential to have timely, accurate, and comprehensive information on the landslide distribution, type, magnitude, and evolution (Hölbling et al., 2020). In particular, volume estimates of landslides are critical for understanding landslide characteristics and their post-failure behaviour. Pre- and post-event digital elevation model (DEM) differencing is a suitable method to estimate landslide volumes remotely. However, such analyses are restricted by limitations of existing DEM products, such as limited temporal and spatial coverage and resolution or insufficient accuracy. The free availability of Sentinel-1 synthetic aperture radar (SAR) data from the European Union's Earth Observation Programme Copernicus opened a new era for generating such multi-temporal topographic datasets, allowing regular mapping and monitoring of land surface changes. However, the applicability of DEMs generated from Sentinel-1 for landslide volume estimation has not been fully explored yet (Braun, 2021; Dabiri et al., 2020). Within the project SliDEM (Assessing the suitability of DEMs derived from Sentinel-1 for landslide volume estimation) we address this issue and pursue the following objectives: 1) to develop a semi-automated and transferable workflow for DEM generation from Sentinel-1 data, 2) to assess the suitability of the generated DEMs for landslide volume estimation, and 3) to assess and validate the quality of the DEM results in comparison to reference elevation data and to evaluate the feasibility of the proposed workflow. The workflow is implemented within a Python package for easier reproducibility and transferability. We use the framework described by Braun (2020) for DEM generation from Sentinel-1 data, including: (1) querying for suitable Sentinel-1 image pairs based on the perpendicular baseline; (2) creating the interferogram using the phase information of each Sentinel-1 SAR image pair; (3) phase filtering and removing the phase ambiguity by unwrapping the phase information using the SNAPHU toolbox; (4) converting the unwrapped phase values into height/elevation information; and (5) performing terrain correction to minimize the effect of topographic variations. The accuracy of the generated DEMs is assessed using very high-resolution reference DEMs and field reference data, collected for major landslides in Austria and Norway which serve as test sites. We use statistical measures such as the root mean square error (RMSE) to assess the vertical accuracy, and the Moran's I spatial autocorrelation index for quality assessment of the generated DEMs. The importance of the perpendicular baseline and temporal intervals for the quality of the generated DEMs is demonstrated. Moreover, we assess the influence of topography and environmental conditions on the quality of the generated DEMs. The results of this research will reveal the potential but also the challenges and limitations of DEM generation from Sentinel-1 data, and their applicability for geomorphological applications such as landslide volume estimation.
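As an illustration of the quality-assessment measures mentioned above (placeholder arrays, not SliDEM code), the following Python sketch computes the vertical RMSE of a generated DEM against a reference DEM, and Moran's I of the elevation error using rook (4-neighbour) weights on the raster grid.

```python
# Illustrative DEM quality assessment: vertical RMSE plus Moran's I of the
# elevation error on a regular grid with rook contiguity weights.
import numpy as np

def rmse(dem, ref):
    return np.sqrt(np.nanmean((dem - ref) ** 2))

def morans_i_grid(x):
    """Moran's I for a 2-D array using rook contiguity weights (w_ij = 1)."""
    z = x - np.nanmean(x)
    num, wsum = 0.0, 0.0
    for axis in (0, 1):                          # vertical and horizontal neighbour pairs
        zz = np.swapaxes(z, 0, axis)
        a, b = zz[:-1], zz[1:]
        num += 2.0 * np.nansum(a * b)            # each pair counted in both directions
        wsum += 2.0 * a.size
    return (z.size / wsum) * num / np.nansum(z ** 2)

rng = np.random.default_rng(4)
ref = rng.normal(500.0, 50.0, size=(200, 200))     # placeholder reference DEM (m)
dem = ref + rng.normal(0.0, 8.0, size=ref.shape)   # placeholder Sentinel-1 DEM (m)
print("RMSE [m]:", round(rmse(dem, ref), 2), " Moran's I:", round(morans_i_grid(dem - ref), 3))
```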
References:
- Braun, A. (2020). DEM generation with Sentinel-1: Workflow and challenges. European Space Agency. http://step.esa.int/docs/tutorials/S1TBX DEM generation with Sentinel-1 IW Tutorial.pdf
- Braun, A. (2021). Retrieval of digital elevation models from Sentinel-1 radar data – open applications, techniques, and limitations. Open Geosciences, 13(1), 532–569.
- Cruden, D. M., & Varnes, D. J. (1996). Landslide types and processes. In A. K. Turner & R. L. Schuster (Eds.), Landslides: Investigation and Mitigation. Transportation Research Board Special Report 247. National Research Council.
- Dabiri, Z., Hölbling, D., Abad, L., Helgason, J. K., Sæmundsson, Þ., & Tiede, D. (2020). Assessment of Landslide-Induced Geomorphological Changes in Hítardalur Valley, Iceland, Using Sentinel-1 and Sentinel-2 Data. Applied Sciences, 10(17), 5848. https://doi.org/10.3390/app10175848
- Hölbling, D., Abad, L., Dabiri, Z., Prasicek, G., Tsai, T., & Argentin, A.-L. (2020). Mapping and Analyzing the Evolution of the Butangbunasi Landslide Using Landsat Time Series with Respect to Heavy Rainfall Events during Typhoons. Applied Sciences, 10(2), 630. https://doi.org/10.3390/app10020630
Along with fluvial floods (FFs), surface water floods (SWFs) caused by extreme overland flow are one of the main flood hazards occurring after heavy rainfall. Using physics-based distributed hydrological models, surface runoff can be simulated from precipitation inputs to investigate regions prone to soil erosion, mudflows or landslides. Geomatics approaches have also been developed to map susceptibility towards intense surface runoff without explicit hydrological modeling or event-based rainfall forcing. However, in order for these methods to be applicable for prevention purposes, they need to be comprehensively evaluated using proxy data of runoff-related impacts following a given event. Here, the IRIP geomatics mapping model ("Indicator of Intense Pluvial Runoff") is confronted with rainfall radar measurements and damage maps derived from satellite imagery (Sentinel) and classification algorithms in rural areas. Six watersheds in the Aude and Alpes-Maritimes departments in the South of France were investigated during two extreme storms. The results of this study showed that the higher the IRIP susceptibility scores, the more likely SWFs were to be detected in plots by the EO-based detection algorithm. The proportion of damaged plots was found to be even greater when considering areas which experienced larger precipitation intensities. Land use and soil hydraulic conductivity were found to be the most relevant indicators for IRIP to define the production areas responsible for downslope deterioration. Multivariate logistic regression was also used to determine the relative weights of upstream and local topography, uphill production areas and rainfall intensity in explaining intense surface runoff occurrence. Modifications in IRIP's core framework were thus suggested to better represent SWF-prone areas. Overall, this work confirms the relevance of the IRIP methodology and suggests improvements for implementing better prevention strategies against flood-related hazards.
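As a hedged illustration of the multivariate logistic regression step (synthetic variables and coefficients, not the study's data), the following Python sketch relates plot-level SWF detection to rainfall intensity, IRIP susceptibility score and local slope, and reads off the relative weight of each standardized predictor.

```python
# Illustrative logistic regression on synthetic plot-level data; predictor
# names and generating coefficients are placeholders, not study results.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 2000
X = np.column_stack([rng.gamma(2.0, 15.0, n),      # rainfall intensity (mm/h)
                     rng.integers(0, 6, n),        # IRIP susceptibility score (0-5)
                     rng.uniform(0.0, 30.0, n)])   # local slope (degrees)
logit = 0.04 * X[:, 0] + 0.6 * X[:, 1] + 0.02 * X[:, 2] - 4.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # synthetic damaged / not damaged labels

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
for name, coef in zip(["rainfall", "IRIP score", "slope"], model[-1].coef_[0]):
    print(f"{name:>10s}: standardized coefficient = {coef:+.2f}")
```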
Satellite-based monitoring of active volcanoes provides crucial information about volcanic hazards and is therefore an essential component of risk assessment and disaster management. For this, optical imagery plays a critical role in the monitoring process. However, due to the spectral similarities of volcanic deposits and the surrounding background, the detection of lava flows and other volcanic hazards, especially in unvegetated areas, is a difficult task with optical Earth observation data. In this study, we provide an object-oriented change detection method based on very high-resolution (VHR) PlanetScope imagery (3 m), short-wave infrared (SWIR) data from Sentinel-2 & Landsat-8 and digital elevation models (DEM) to map lava flows of selected eruption phases at tropical volcanoes in Indonesia (Karangetang 2018/2019, Krakatau 2018). Our approach can map lava flows in both vegetated and unvegetated areas. Procedures for mapping loss of vegetation (due to volcanic deposits) are combined with analysis of thermal anomalies derived from Sentinel-2/Landsat-8 SWIR imagery. Hydrological runoff modelling based on topographic data provides information about potential lava flow channels and areas. Then, within the potential lava flow area, changes in texture and brightness between pre- and post-event PlanetScope imagery are analyzed to map the final lava flow area (also upstream in areas that were already unvegetated prior to the lava flow event). The derived lava flow areas were qualitatively validated with multispectral false color time series from Sentinel-2 & Landsat-8. In addition, reports of the Global Volcanism Program (GVP) were analyzed for each eruption event and compared with the derived lava flow areas. The results show a high agreement of the derived lava flow areas with the visible thermal anomalies in the false color time series. The analyzed GVP reports also support the findings. Accordingly, the high geometric (3 m) and temporal resolution (daily coverage of the entire Earth's landmass) of the PlanetScope constellation provides valuable information for the monitoring of volcanic hazards. In particular, the combination of VHR PlanetScope imagery and the developed change detection methodology for mapping lava flow areas provides a beneficial tool for rapid damage mapping. In future, we plan to further automate this method in order to enable monitoring of active volcanoes in near-real-time.
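A minimal sketch of the thermal-anomaly component is given below; the band combination, normalized index and threshold are assumptions made for illustration, not the calibrated hotspot criterion used in the study.

```python
# Illustrative SWIR hotspot flagging: active lava strongly raises the longer
# SWIR band, so a normalized index above a threshold marks candidate hot pixels.
import numpy as np

def thermal_anomaly_mask(swir1, swir2, threshold=0.4):
    """Normalized hotspot index (swir2 - swir1) / (swir2 + swir1) > threshold."""
    index = (swir2 - swir1) / np.maximum(swir2 + swir1, 1e-6)
    return index > threshold

# placeholder reflectance arrays standing in for, e.g., Sentinel-2 B11 / B12
rng = np.random.default_rng(6)
swir1 = rng.uniform(0.05, 0.2, size=(100, 100))
swir2 = swir1 * rng.uniform(0.9, 1.1, size=swir1.shape)   # background: similar SWIR bands
swir2[40:45, 40:45] += 0.8                                 # synthetic hot lava pixels
print("hot pixels:", int(thermal_anomaly_mask(swir1, swir2).sum()))
```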
The last eruption of the Fogo Volcano (Archipelago of Cabo Verde, Africa), which began in November 2014, was the first eruptive event captured by the Sentinel-1 mission. Ground Range Detected (GRD) data from Sentinel-1 were used in this study to identify the progress of the lava flow and measure the affected area, in order to assess their potential for monitoring and assessing eruptive scenarios in near-real-time, which is fundamental to mitigate risks and to better support crisis management. The present work sought to complement previous research and explore the potential of Synthetic Aperture Radar (SAR) data from the Sentinel-1 mission to better monitor active volcanic areas. The GRD data were used to analyze the changes that occurred in the area before, during, and after the eruptive event, and allowed the progress of the lava flow to be identified and the affected area to be measured (3.89 km² in total). After processing the GRD data using the standard SNAP workflow, the raster calculation tool of the ArcMap 10.4 GIS software was used to compute an image differencing change detection. In this procedure, each image acquired after the start of the event is subtracted from a pre-event image; specifically, an image referring to the last hours of the eruption was subtracted from an image acquired prior to the beginning of the event. Very high ("change") and very low ("no change") values were thresholded in order to obtain the change detection map. To assess the accuracy and validate each change detection procedure, the Overall Accuracy was computed with independent validation datasets of 50 change/no-change sampling points. The successive change detection procedures showed Overall Accuracies ranging between 0.70 and 0.90. The identification and mapping of the affected area are in relative agreement with other authors' results obtained by applying different techniques to different SAR datasets, including high-resolution commercial data (from 4.53 to 5.42 km²). Nevertheless, in the attached figure, it is possible to note that some of the areas previously observed as affected by the 2014/15 lava flow were not identified in the change detection procedures with GRD data. This might be explained by the fact that there were no substantial roughness changes in the area where the 2014/15 lava flow overlaps that of 1995, at the "Chã das Caldeiras" site. Monitoring surface changes during eruptive events using Sentinel-1 GRD data proved cost-effective in terms of data processing and analysis, with lower computational cost, and produced results consistent and coherent with those previously obtained with Sentinel-1 SLC data or other types of SAR data. Therefore, this approach is pertinent and suitable for research, but is especially valuable for integration into low-cost near-real-time monitoring systems of active volcanic areas. The systematic use of GRD products can thus serve as the basis for event monitoring, conferring greater agility in computation and analysis time for decision support.
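A hedged sketch of the image-differencing step is given below (the thresholds and sample points are illustrative, not the study's values): a pre-event backscatter image is differenced against a post-event one, extreme values are thresholded into change/no-change, and the Overall Accuracy is computed against validation points.

```python
# Illustrative image-differencing change detection with Overall Accuracy check.
import numpy as np

def change_map(pre_db, post_db, low=-4.0, high=4.0):
    diff = post_db - pre_db                       # backscatter difference in dB
    return (diff < low) | (diff > high)           # strong decrease or increase = change

def overall_accuracy(predicted, reference):
    return np.mean(predicted == reference)

rng = np.random.default_rng(7)
pre = rng.normal(-12.0, 1.5, size=(500, 500))     # placeholder sigma0 [dB]
post = pre + rng.normal(0.0, 1.5, size=pre.shape)
post[200:300, 200:300] += 6.0                     # synthetic lava-flow roughening
mask = change_map(pre, post)

# 100 random validation points (placeholders standing in for 50 change / 50 no-change samples)
rows, cols = rng.integers(0, 500, 100), rng.integers(0, 500, 100)
truth = (rows >= 200) & (rows < 300) & (cols >= 200) & (cols < 300)
print("Overall Accuracy:", round(overall_accuracy(mask[rows, cols], truth), 2))
```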
Measurements of deformation and deposit characteristics are critical for monitoring, and therefore forecasting, the progression of volcanic eruptions. However, in situ measurements at frequently erupting, dangerous volcanoes such as Sinabung, Indonesia, can be limited. It is, therefore, important to exploit the full potential of all available satellite imagery. Here, we present preliminary results of a multi-sensor radar study of displacements and surface change at Sinabung volcano between 2007 and 2021.
Sinabung's first historically documented eruption occurred in August 2010, lasted 11 days, and was defined by explosive activity. Although several studies reported similar pre- and post-eruptive deformation around the summit area [1, 2, 3], interpretations vary across a range of deformation source depths and mechanisms. Three years later, on September 15, 2013, a new eruption started, which is still ongoing (with two pauses in eruptive activity). The activity has transitioned over the years, showing various styles from primarily ash explosions to lava flow emplacement, dome growth and pyroclastic density currents, all clearly identifiable in radar backscatter.
We will present both an analysis of historical and current displacements at Sinabung, and new backscatter observations of the progression of the current eruption. We use three different radar wavelengths, L-band (ALOS2 and ALOS1), C-band (Sentinel-1) and X-band (TerraSAR-X, COSMO-SkyMed) to span as much of the eruptions with as dense a time series as possible. We refine our observations of displacement using time series analysis and atmospheric correction of interferograms and aim to make estimations of effusion rate from backscatter data.
Our preliminary results show subsidence (2015-2021) at the lava flow on the southeast flank of the volcano, deposited throughout 2014, and attribute this to contraction and compaction. However, we do not find evidence for deformation due to magma movement over this time.
[1] Chaussard, Estelle and Falk Amelung. 2012. “Precursory inflation of shallow magma reservoirs at west Sunda volcanoes detected by InSAR.” Geophysical Research Letters 39(21).
[2] González, Pablo J, Keshav D Singh and Kristy F Tiampo. 2015. “Shallow hydrothermal pressurization before the 2010 eruption of Mount Sinabung Volcano, Indonesia, observed by use of ALOS satellite radar interferometry.” Pure and Applied Geophysics 172(11):3229–3245.
[3] Lee, Chang-Wook, Zhong Lu, Jin-Woo Kim and Seul-Ki Lee. 2015. “Volcanic activity analysis of Mt. Sinabung in Indonesia using InSAR and GIS techniques.” In 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). IEEE, pp. 4793–4796.
Landslides triggered by intense and prolonged rainfall occur worldwide and cause extensive and severe damage to structures and infrastructure, as well as loss of life. Obtaining even coarse information on the location of triggered landslides during or immediately after an event can increase the efficiency and efficacy of emergency response activities, possibly reducing the number of victims. In most cases, however, in the immediate aftermath of a meteorological triggering event, optical post-event images are unusable due to cloud cover. The increasing availability of images acquired by satellite Synthetic Aperture Radar (SAR) sensors overcomes this limitation, because microwaves are largely unaffected by clouds and water vapour. The literature has shown that C-band Sentinel-1 SAR amplitude images allow the detection of known event landslides in different environmental conditions. In this work we explore the use of such images to map event landslides.
SAR backscatter products are generally represented by a grey-tone matrix of backscatter values mainly influenced by (i) the projected local incidence angle, (ii) surface roughness, and (iii) the dielectric constant, used as a proxy for soil moisture. Similarly to optical images, landslides modify the local tone, texture, pattern, mottling and grain of the grey-tone matrix. We therefore refer to a “radar backscatter signature” of event landslides as the combination of these components, which can reveal the occurrence of a landslide in radar amplitude products. Interpreters use such features to infer the occurrence of event landslides (landslide detection) and to delineate landslide borders (landslide mapping), similarly to what is done for optical post-event images. In this study, four expert photo-interpreters defined interpretation criteria for SAR amplitude (i) post-event images of the backscatter coefficient (i.e. β₀, the radar brightness coefficient) and (ii) derived images of change, computed as the natural logarithm of the ratio between the post- and pre-event images (i.e., ln(β₀post/β₀pre)). The interpretation criteria build on the well-established ones usually applied to optical images. Different criteria were defined to interpret images of change, where clusters of changed pixels pop out from the salt-and-pepper matrix (i.e. anomalies). Such changes can be caused by several different phenomena, including slope failures, snowmelt, rainfall and vegetation cuts, among others. Interpreters identify areas where the change has not been random and decide whether a cluster is a landslide based on its shape. The risk of morphological convergence (i.e. ambiguities in the interpretation) is higher if change images are examined alone. Often, the use of ancillary data such as digital elevation models can help exclude erroneous interpretations.
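A short sketch of the image of change defined above, i.e. the natural logarithm of the ratio between post- and pre-event radar brightness; the arrays are assumed to be co-registered and in linear (power) units, not dB, and the function name is ours.

```python
# ln(beta0_post / beta0_pre): positive where backscatter increased after the
# event, negative where it decreased; clusters of strong values stand out
# from the salt-and-pepper background as candidate landslide signatures.
import numpy as np

def log_ratio_change(beta0_post: np.ndarray, beta0_pre: np.ndarray,
                     eps: float = 1e-6) -> np.ndarray:
    return np.log((beta0_post + eps) / (beta0_pre + eps))
```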
The same team of image interpreters mapped two large event landslides. The first is a rock slide - debris flow - mudflow that occurred in Villa Santa Lucia, Los Lagos Region, Chile, on 16 December 2017. The second is a rock slide that occurred in early August 2015 in the Tonzang region, Chin Division, Myanmar. The landslide maps were prepared on a total of 72 images for the Chile test case and 54 for the Myanmar test case. Images included VV (vertical transmit, vertical receive) and VH (vertical transmit, horizontal receive) polarisation, ascending and descending acquisition geometries, multilook processing, adaptive and moving-window filters, post-event images and images of change. For the Chile test case, interpreters mapped the event landslide on an optical post-event image before mapping on SAR images, whereas for Myanmar the optical mapping was done last. Maps obtained from SAR amplitude derived products were quantitatively compared to the maps prepared on post-event optical images, assumed as benchmark, using a geometrical matching index. Despite the overall good agreement between the SAR- and optical-derived landslide maps, local errors can be due to geometrical distortions and speckle-like effects. In this experiment, polarisation played an important role, while filtering was less decisive. The results of this study prove that Sentinel-1 C-band SAR amplitude derived products can be exploited for preparing accurate maps of large event landslides, and that they should be further tested to prepare event inventories. Other SAR bands and resolutions should be tested in different environmental conditions and for different types and sizes of landslides. Application of rigorous and reproducible interpretation criteria to a wide library of test cases will strengthen the capability of expert image interpreters to use such images to produce accurate landslide maps in the immediate aftermath of triggered landslide events worldwide, or even to train automatic classification systems.
On 20 December 2020, after about two years of quiescence, a new eruption started at Kīlauea volcano (Hawaiʻi, USA) by three fissures opening on the inner walls of Halema`uma`u Crater. During the eruption, which produced lava fountains up to 50 m height, the lava cascaded into the summit water lake, generating a vigorous steam plume and forming a new lava lake at the base of the crater. In this study, we investigate the Kīlauea’s lava lake through the Normalized Hot Spot Indices (NHI) tool. The latter is a Google Earth Engine (GEE) App, which exploits mid-high spatial resolution daytime satellite data, from the Operational Land Imager (OLI) onboard of Landsat-8 and the Multispectral Instrument (MSI) onboard of Sentinel-2 to map thermal anomalies at global scale by satellite. In addition, offline processing of Landsat-8 nighttime data was performed. Results show that especially at daytime the NHI tool provided detailed information about the lava lake and relative space-time variations. Moreover, the hot spot area well approximated the area covered by the lava lake from U.S. Geological Survey (USGS) measurements when only the hottest NHI pixels were considered. By correcting Sentinel-2 MSI and Landsat-8 OLI daytime data for the influence of the solar irradiation, we estimated values of the radiant flux in the range 1-5 GW from hottest pixels during the period December 2020 to February 2021. Those values were about 1.7 times higher than Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) estimations, while the temporal trend of the radiant flux was comparable. Analysis of Landsat-8 OLI nighttime data showed a similar temporal trend of the radiant flux as observations from MODIS and VIIRS, but with a higher deviation compared to the daytime data. This study demonstrates that the NHI tool may provide a relevant contribution to investigate volcanic thermal anomalies also in well-monitored areas such as Kilauea, opening some challenging scenarios about their quantitative characterization also through its automated module performing operationally.
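A hedged sketch of the normalized-difference form of the hot spot indices, based on our reading of the NHI literature; the Sentinel-2 MSI band assignments (B12 ≈ 2.2 µm, B11 ≈ 1.6 µm, B8A ≈ 0.86 µm) and the "index > 0" anomaly test are assumptions that should be checked against the original NHI papers and the GEE App itself.

```python
# Assumed normalized-difference form of the Normalized Hot Spot Indices,
# computed from top-of-atmosphere radiances (band choices are illustrative).
import numpy as np

def nhi_swir(l_2_2um: np.ndarray, l_1_6um: np.ndarray) -> np.ndarray:
    return (l_2_2um - l_1_6um) / (l_2_2um + l_1_6um)

def nhi_swnir(l_1_6um: np.ndarray, l_0_86um: np.ndarray) -> np.ndarray:
    return (l_1_6um - l_0_86um) / (l_1_6um + l_0_86um)

def hot_pixels(l_2_2um, l_1_6um, l_0_86um) -> np.ndarray:
    # A pixel is flagged as thermally anomalous if either index is positive.
    return (nhi_swir(l_2_2um, l_1_6um) > 0) | (nhi_swnir(l_1_6um, l_0_86um) > 0)
```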
The Copernicus EMS-FLEX service was activated by a request from the authorized user and a local user from the Philippines. Several sources suggest that the Manila NCR and lower Pampanga river basin in the Philippines have been affected by ground subsidence phenomena impacting settlements in the Manila agglomeration, increasing riverine and coastal flood risk. The EMS service provided evidence of ground motion patterns in the targeted areas using multi-temporal satellite interferometry and persistent scatterers technique. Tailored products derived from time series of multi-pass Sentinel-1 imagery provide insight into localization and extent of sinking zones and quantify the severity of phenomena related to estimated motion velocity or additional adversary patterns.
It is generally assumed that the subsidence in the area is strongly related to underground water extraction, which has increased rapidly during the last decades. However, since measures to mitigate the subsidence have already been taken, the main concern was to obtain information about the dynamics of the subsidence trend, i.e. whether it is slowing down or accelerating. PSI processing over a six-year stack generated a high number of unwrapping errors on persistent scatterers with non-linear motion. These errors had to be corrected before the estimation of the motion trend dynamics. In addition, temporally coherent targets were detected to avoid losing information through decorrelation over limited periods of the long time series of interferometric measurements. Intervals showing high levels of noise were detected and excluded from the estimation of the motion trend dynamics.
Apart from ground motion rates and displacements in the line of sight, vertical and east-west horizontal motion fields were estimated using directional decomposition. Initially, only limited horizontal movements were expected in the area of interest, as groundwater extraction is typically followed by vertical subsidence. However, the area was hit by at least one earthquake during the observation period, and there might also be long-term residual horizontal tectonic motions along faults. Abrupt non-linear motion resulting from the earthquake was probably partially removed from the resulting time series by the atmospheric phase screen estimation. Nevertheless, patterns of non-vertical motion were detected and are presented in the results.
The service outputs are used by local research teams to evaluate the extent of the subsidence phenomena, their severity, and potential impacts on existing settlements and planned projects (land reclamation). The results shall provide an information baseline for research into potential subsidence driving factors, such as the correlation between groundwater extraction and subsidence rates and their spatio-temporal patterns.
InSAR (Interferometric Synthetic Aperture Radar) is widely acknowledged as one of the most powerful remote sensing tools for measuring ground displacements over large areas. The most common multi-temporal technique is Persistent Scatterer Interferometry (PS-InSAR), which permits the retrieval of spatial and temporal deformation of landslide-prone slopes. Since PS-InSAR is based on the analysis of targets with strong stability of reflectivity over time, anthropic areas affected by landslides are regarded as optimal test sites for assessing displacements with millimetric precision.
Depending on the landslide kinematics and style of activity, however, differential surface movements within the landslide area, or different displacement trends in the analysed time interval, may not be highlighted without further processing.
To better discriminate the internal segmentation of complex landslides or differential movements of landslide systems, a specific toolbox for the post-processing of interferometric data is proposed. The PStoolbox was developed by NHAZCA S.r.l. as standalone software and, for simplicity of use, a set of plugins was designed for the open-source software QGIS with the main purpose of guiding the user in interpreting ground deformation processes. These characteristics make the PStoolbox an effective software tool not only for end-users who need to understand and inspect what kind of information the interferometric data contain, but also for technicians who need to evaluate the results of interferometric analyses and to better understand, validate and take full advantage of the data.
To date, the toolbox includes several modules that allow the user to: 1) highlight variations in the displacement trend over time (Trend Change Detection tool – TDC); 2) plot the time series of displacement for one or more selected measurement points (PS time series tool); 3) compute smoothed displacement values (Filtering tool); 4) plot the velocity or displacement values along a linear section containing the selected measurement points (Interferometric section tool); 5) calculate the vectorial decomposition of the persistent scatterer motions, starting from the results in ascending and descending orbital geometries (PS decomp tool).
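As a hedged illustration of the kind of decomposition performed by tools such as the PS decomp module in point 5, the sketch below combines ascending and descending LOS velocities into vertical and east-west components, under the usual assumption of negligible north-south motion; the incidence and heading angles are approximate, illustrative Sentinel-1 values, and the sign convention (LOS positive towards the satellite) must match the input data.

```python
# Two-geometry LOS decomposition into vertical and east-west components,
# neglecting the north-south term: LOS = U*cos(inc) - E*sin(inc)*cos(heading).
import numpy as np

def decompose_los(v_asc, v_desc,
                  inc_asc=np.radians(39.0), inc_desc=np.radians(39.0),
                  head_asc=np.radians(-12.0), head_desc=np.radians(192.0)):
    """Return (v_up, v_east) from ascending/descending LOS velocities."""
    A = np.array([
        [np.cos(inc_asc),  -np.sin(inc_asc)  * np.cos(head_asc)],
        [np.cos(inc_desc), -np.sin(inc_desc) * np.cos(head_desc)],
    ])
    v_up, v_east = np.linalg.solve(A, np.array([v_asc, v_desc]))
    return v_up, v_east

# Example: equal LOS rates in both geometries decompose to mostly vertical motion.
print(decompose_los(-5.0, -5.0))
```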
InSAR post-processed data permit the evaluation of subtle displacement patterns in terms of spatial and temporal variations, which is essential not only for the characterization of every deformation process but also for planning purposes and in a risk-mitigation perspective.
Ground motion due to landslides or other natural phenomena can cause serious damage to infrastructure and the environment, and it represents a risk for citizens. Landslides may destroy roads, railways, pipelines and buildings, and even cause casualties. In recent years, a continuous growth in the intensity and frequency of extreme natural phenomena has been observed, with a clear relationship to both human activities and climate change.
Satellite Earth Observation has proven to be extremely useful for hazard mapping related to hydrogeological events. Indeed, it represents a powerful tool to generate uniform information at global scale, covering a wide span of risk scenarios. It constitutes a unique source of information that can help monitor and link hazards, exposure, vulnerability modifiers, and risk.
Planetek Italia has been involved in different projects related to the exploitation of EO-derived information to support the hydrogeological hazard mapping, due to natural disasters like landslides, subsidence, earthquake and tsunami. One of these projects is the Disaster Risk Reduction (DRR), an ESA project within the Earth Observation for Sustainable Development (EO4SD) program.
Among the activities of the project, Planetek Italia delivered ground motion maps based on the Rheticus Displacement service, which implements the PSI technique and allows the exploitation of the results in a user-friendly environment through a web interface. The platform has several tools for better highlighting the persistent scatterers affected by motion.
Due to the complexity of the hazard phenomena and of translating ground motion information into risk mitigation operations by the end-users, Planetek – thanks to the interactions with different end users – came to recognise that users needed a more simplified tool: a tool able to support hazard mapping over areas of the territory, rather than only the point-wise information provided by the persistent scatterers.
To this aim, Planetek Italia developed Rheticus® Safeland, a vertical geoinformation service, to answer the needs of the local authorities in charge of geohazard management. The service uses advanced technological solutions for monitoring and predicting ground motion phenomena through the integration of satellite ground motion mapping and local data related to the environment and infrastructures.
Rheticus Safeland provides a unique source of actionable information that can help monitor and link hazards, exposure, vulnerability modifiers, and risk. In this way, the service supports the local authorities in charge of risk management both to protect citizens from danger and to prevent increased costs and delays to new developments.
The Rheticus Safeland service, through automated procedures, assigns a normalized level of concern (0–1) to each portion of the territory based on the trends of surface displacement estimated through PSI and on further parameters that take into account the orography, the vegetation cover and the presence of infrastructures and buildings.
The monitored area is divided into hexagonal cells, classified and thematized into 3 classes with 3 different colors corresponding to increasing levels of concern: green, yellow, and red (Figure 1).
In addition to the estimated priority level for each cell, the Rheticus® Safeland service provides a level of concern for all buildings, roads and railways present within the monitored area of interest, as shown in Figure 2.
Rheticus® Safeland is able to automatically identify areas affected by slow landslides, prioritizing them according to the magnitude of the motion together with the ancillary parameters connected with the phenomena like slopes, landcover, flooding risk…
Planetek proposed the utilization of the Rheticus® Safeland within the ESA-GDA DRR project, in order to provide detailed levels of information for automatic hazard mapping to engineers, planners, and other users. The complete picture provided by the Rheticus Safeland service will provide planners with the vital knowledge they need to prioritize the implementation of risk mitigation measures, to make better decisions, and proactively avoid critical issues that arise when in-progress phenomena are not fully understood.
Seismicity in Algeria is concentrated along the coastal region in a 150 km wide strip where 90% of economic facilities and population centers are located. It is important to note that, due to the vulnerability of the building stock, moderate or strong earthquakes often have disastrous consequences. The city of Oran being one of the important cities of the country presents an example where the earthquake risk poses a constant threat to human life and property. Indeed, several powerful and destructive earthquakes have occurred in the past in this region, causing several hundred deaths and enormous economic losses. For example, over the past two decades, the following moderate to severe disastrous earthquakes have occurred: Mascara 1994 (Mw = 5.6), the Ain Témouchent earthquakes 1999 (Mw = 5.6) , Oran 2008/01/09 (Mw = 4.6) and Oran 2008/06/06 (Mw = 5.5).
We present the first complete multi-temporal InSAR analysis of the northern Algeria territory, exploiting the CNR-IREA P-SBAS processing service offered by the Geohazards Exploitation Platform (GEP). To cover an area of 340,000 km², we took advantage of the freely available data of the Sentinel-1 mission. The two satellites (A and B) provided monthly coverage in the Interferometric Wide Swath (IW) SAR imaging mode, between latitudes 32°N and 37°N and longitudes 1.5°W and 8°E.
The data comprise five ascending tracks (1, 30, 59, 103, 132) and six descending tracks (08, 37, 66, 110, 139, 168), at a rate of one image per month. Time series of three frames from each track were generated to cover the desired area, estimated at 12% of the entire country.
The interferometric processing was performed using the default parameters of the GEP P-SBAS service, which do not include a tropospheric correction. To ensure accurate results, a post-processing step was added to export the time-series results generated in the cloud to StaMPS format. Once the export was performed, the displacement time series were corrected for the tropospheric effect using GACOS data provided by the COMET Laboratory (University of Leeds).
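A minimal sketch of the GACOS-based correction step, assuming zenith total delay (ZTD) maps already resampled to the InSAR grid and expressed in metres; the function name and sign convention are ours and depend on the processing chain actually used.

```python
# Remove the differential tropospheric delay from one epoch of a LOS
# displacement time series referenced to the master date (units: metres).
import numpy as np

def gacos_correct(displacement_los_m: np.ndarray,
                  ztd_ref_m: np.ndarray,
                  ztd_epoch_m: np.ndarray,
                  incidence_rad: float) -> np.ndarray:
    delta_ztd = ztd_epoch_m - ztd_ref_m            # differential zenith delay
    delta_los = delta_ztd / np.cos(incidence_rad)  # project zenith delay to LOS
    return displacement_los_m - delta_los
```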
Acknowledgements
This research was funded by the European Space Agency through the ESA NoR programme (project ID 19086d). Copernicus Sentinel-1 data are provided by the European Space Agency (ESA). All the interferometric processing was performed on the GEP platform using the CNR-IREA P-SBAS processing service.
In 2018 the government of Bangladesh started planning the relocation of refugees of the Rohingya minority who were fleeing violent persecution in Myanmar. An island in the Bay of Bengal (Bhasan Char), located around 60 kilometres from the mainland and not previously inhabited, was selected. On this island, the construction of 1,500 buildings is planned to host 100,000 persons.
The island, which currently has a size of around 40 square kilometres, is considered a comparably recent landform which developed from silt washed down from the Himalayas since around 2010. Human rights organizations strongly criticize the plans because they do not consider the island a safe place in case of tidal waves, monsoonal rains and sea-level rise. Currently, the island hosts around 13,000 people.
To assess the risk of the relocated persons, information on the topography is required. However, globally available digital elevation models, such as SRTM, AW3D30, or the Copernicus DEM do not contain usable data in this area because it was masked as sea surface.
In this study, the potential of synthetic aperture radar interferometry (InSAR) based on the Sentinel-1 mission to create a digital elevation model of the island is evaluated. While the standard acquisition mode of Sentinel-1 is the Interferometric Wide swath (IW) mode, collecting images with the Terrain Observation with Progressive Scans SAR (TOPSAR) technique at a spatial resolution of 5 x 20 metres, Stripmap (SM) products were available in this area at a spatial resolution of 2.2 x 3.5 metres. These allowed the calculation of interferograms for the precise delineation of topographic variations. Different image pairs were used and analysed according to their temporal and perpendicular baselines.
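The role of the perpendicular baseline in pair selection can be illustrated with the standard repeat-pass phase-to-height relation; the sketch below is a generic textbook formulation with illustrative parameter values, not the exact processing applied in this study.

```python
# Repeat-pass InSAR: height change per 2-pi fringe (height of ambiguity) and
# conversion of unwrapped topographic phase to height. Parameter values are
# illustrative, roughly in the range of Sentinel-1 stripmap acquisitions.
import numpy as np

def height_of_ambiguity(wavelength, slant_range, incidence, b_perp):
    return wavelength * slant_range * np.sin(incidence) / (2.0 * b_perp)

def phase_to_height(unwrapped_phase, wavelength=0.0556, slant_range=850e3,
                    incidence=np.radians(35.0), b_perp=150.0):
    h_amb = height_of_ambiguity(wavelength, slant_range, incidence, b_perp)
    return unwrapped_phase * h_amb / (2.0 * np.pi)
```

Larger perpendicular baselines give a smaller height of ambiguity and hence higher height sensitivity, at the cost of stronger geometric decorrelation.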
Because of the largely natural surfaces and wet conditions on the island, phase decorrelation led to partially unusable results. However, these effects could be mitigated by phase filtering and systematic masking to generate a DEM of sufficient quality. An independent accuracy assessment was undertaken based on height measurements from the ICESat-2 mission, which covered the island with several tracks. A height accuracy of 75% was achieved before post-processing. Several post-processing techniques are still under development and are expected to increase the DEM quality to 90%.
The digital elevation model can serve as an input to risk assessments related to tidal waves and sea level rise to test if the current adaptation measures (embankments, height of the buildings above ground) are substantially protecting the people living on Bhasan Char.
Hanoi Province is located in the northern part of Vietnam, within the Red River delta plain. The city sits on unconsolidated Quaternary sediments of fluvial and marine origin, 50-90 m thick, which in turn rest on older Neogene deposits. Hanoi is the capital and second largest city of Vietnam with 7.4 million inhabitants; the population is projected to reach 9-9.2 million by 2030 and approximately 10.8 million by 2050 (Kubota et al., 2017). A recent study on land cover changes in Hanoi highlighted that between 1975 and 2020 artificial surfaces increased by 15.5% while forests decreased by 26.7%. As this rapid urbanisation puts massive pressure on resources and the environment, the government of Vietnam officially presented the Hanoi Master Plan 2030 in July 2011. The target of the master plan is to develop Hanoi as a sustainable and resilient city, and as such it identified sites for urban expansion in satellite cities outside of the current city limits.
Groundwater extraction in Hanoi has long been recognised as the principal water source for the city and the negative effects of its rapid urban growth on the groundwater system have been identified early (Trafford et al, 1996). In more recent years, several studies of ground motion using Interferometric Synthetic Aperture Radar (InSAR) have measured rates of subsidence in Hanoi and, via the use of successive satellite sensors have documented the evolution of the subsiding areas. These studies have mainly attributed the high rates of subsidence to the increased extraction of groundwater.
In this study we use Sentinel-1 InSAR data for the last six years to examine subsidence patterns and link them to urban development. We find that, although groundwater extraction undoubtedly plays a significant role, there is a clear spatial and temporal link to new development for all the observed subsiding areas close to Hanoi city itself. The use of historical optical satellite imagery allows the evolution of the development to be linked to the ground motion time series. We observe a correlation between the subsidence and the reclamation of agricultural land, often rice fields, via the dumping of aggregate to create dry, raised areas on which to build. We illustrate our findings with examples where developed areas are coincident with areas of subsidence, and we show the relationships between the stages of ground loading and the rate of the resulting subsidence. Ultimately, we extract rates of motion for each year following ground loading. This has been completed for a sufficient number of locations to allow the construction of curves defining how the subsidence rate declines as the consolidation process occurs. This relationship therefore enables an understanding of subsidence rate over time, which has clear applications in the planning of future developments on thick superficial geological deposits.
One of the main objectives of the GeoSES* project is to monitor dangerous natural and anthropogenic geo-processes using space geodetic technologies, concentrating on the Hungary-Slovakia-Romania-Ukraine cross-border region. The prevention and monitoring of natural hazards and emergency situations (e.g. landslides, sinkholes or river erosion) are additional objectives of the project. Accordingly, integrating advanced remote sensing techniques in a coordinated and innovative way improves our understanding of land deformation and its impact on the environment in the described research area. In the framework of the project, our study utilizes one of the fastest developing space-borne remote sensing technologies, namely InSAR, which is an outstanding tool for large-scale ground deformation observation and monitoring. To perform this monitoring task, we utilized ascending and descending Sentinel-1 Level-1 SLC acquisitions from 2014 until 2021 over the indicated cross-border region.
We also present an automated processing chain of Sentinel-1 interferometric wide swath acquisitions to generate long-term ground deformation data. The pre-processing part of the workflow includes the retrieval of the input data from the Alaska Satellite Facility (ASF), the integration of precise orbits from S1QC, the corresponding radiometric calibration and mosaicking of the TOPS mode data, as well as the geocoding of the geometrical reference. Subsequently, all slave acquisitions were co-registered to the geometrical reference using iterative intensity matching and spectral diversity methods, and deramping was performed. To retrieve deformation time series from the co-registered SLC stacks, we performed multi-reference Interferometric Point Target Analysis (IPTA) using single-look and multi-look phases with the GAMMA software. After forming differential interferometric point stacks, we performed the IPTA processing, in which the topographic and orbit-related phase components, as well as the atmospheric phase, the height-dependent atmospheric phase and a linear phase term, supplemented with the deformation phase, are modelled and refined through iterative steps. The proposed pipeline is also supported by an automatic phase unwrapping error detection method, which aims to detect layers in the multi-reference stack that are significantly affected by unwrapping errors. To retrieve recent deformations of the investigated area, an SVD-based least-squares optimization was used to transform the multi-reference stack into a single-reference phase time series, which can be converted to LOS displacements within the processing chain (see the sketch after this paragraph). Involving both ascending and descending LOS solutions also supports the evaluation of quasi east-west and up-down components of the surface deformations. The derived results are interpreted both at regional scale and through local examples of the introduced cross-border region, with the aim of disseminating the InSAR monitoring results of the GeoSES project.
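A minimal sketch of the kind of SVD/least-squares inversion referred to above, applied per point: each multi-reference interferogram constrains the phase difference between its two dates, and the over-determined system is solved relative to the first acquisition. The network and numbers are illustrative and this is not the GAMMA IPTA implementation.

```python
# Invert a multi-reference stack of unwrapped phases into a single-reference
# phase time series using an SVD-based least-squares solver.
import numpy as np

def invert_time_series(pairs, phases, n_dates):
    """pairs: (i, j) date indices per interferogram (i < j);
    phases: unwrapped phase per interferogram;
    returns the phase at each date relative to the first date."""
    A = np.zeros((len(pairs), n_dates - 1))
    for k, (i, j) in enumerate(pairs):
        if i > 0:
            A[k, i - 1] = -1.0
        A[k, j - 1] = 1.0
    # lstsq uses an SVD-based solver; rank-deficient networks receive the
    # minimum-norm solution.
    ts_rel, *_ = np.linalg.lstsq(A, phases, rcond=None)
    return np.concatenate(([0.0], ts_rel))

# Example: four dates connected by four interferograms.
print(invert_time_series([(0, 1), (1, 2), (2, 3), (0, 2)],
                         np.array([1.0, 0.5, 0.2, 1.5]), 4))
```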
* Hungary-Slovakia-Romania-Ukraine (HU-SK-RO-UA) ENI Cross-border Cooperation Programme (2014-2020) “GeoSES” - Extension of the operational "Space Emergency System"
This work relies on a novel method developed to automatically detect areas of snow avalanche debris using a colour space segmentation technique applied to Synthetic Aperture Radar (SAR) image time series of Sentinel-1. The relevance of the detection was evaluated with the help of an independent database (based on high-resolution SPOT imagery). Results of the detection will be presented according to the direction of the orbit and the characteristics of the terrain (slope, altitude, orientation). The basic idea behind the detection is to identify high, localised radar backscatter due to the presence of snow avalanche debris, compared to the surrounding snow, by comparing winter images with reference images. The relative importance of the reference images has been studied by using well-selected individual or mean summer images. The method successfully detected almost 66% of the avalanche events of the SPOT database by combining the ascending and descending orbits. The best detection results are obtained with individual reference dates chosen in autumn, with 72% of avalanche events verified using the ascending orbit. We also tested a false-detection filtering using a Random Forest classification model.
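A hedged sketch of the false-detection filtering step mentioned above: a Random Forest classifier trained to separate verified avalanche detections from false alarms. The feature names and the synthetic data are illustrative placeholders, not the features or data actually used in the study.

```python
# Random Forest filtering of candidate avalanche detections (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-detection features: backscatter increase (dB), slope (deg),
# altitude (m, scaled), aspect (deg, scaled).
X = rng.normal(size=(500, 4))
y = rng.integers(0, 2, size=500)  # 1 = verified avalanche, 0 = false alarm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```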
The inundation extent derived from Earth Observation data is one of the key parameters in successful flood disaster management. This information can be derived with increasing frequency and quality due to a steadily growing number of operational satellite missions and advances in image analysis. In order to accurately distinguish flood inundation from “normal” hydrologic conditions, up-to-date, high-resolution information on the seasonal water cover is crucial. This information is usually neglected in disaster management, which may result in a non-reliable representation of the flood extent, mainly in regions with highly dynamic hydrological conditions. In this study, an automated approach for computing a global reference water product at 10 m spatial resolution is presented, specifically designed for use in global flood mapping applications. The proposed methodology combines existing processing chains for flood mapping based on Copernicus Sentinel-1 and Sentinel-2 data and calculates permanent as well as monthly seasonal reference water masks over a reference time period of two years. As more detailed mapping of water bodies is possible with Sentinel-2 during clear-sky conditions, this optical sensor is used as the primary source of information for the generation of the reference water product. In areas that are continuously cloud-covered, complementary information from the Sentinel-1 C-band radar sensor is used. In order to provide information about the quality of the generated reference water masks, we incorporate an additional quality layer, which gives the pixel-wise number of valid Sentinel-2 observations over the derived permanent and seasonal reference water bodies within the selected reference time period. Additionally, the quality layer indicates whether a pixel is filled with Sentinel-1 based information in the case that no valid Sentinel-2 observation is available. The reference water product is demonstrated in five study areas in Australia, Germany, India, Mozambique, and Sudan, distributed across different climate zones. Our outcomes are systematically cross-compared with existing external reference water products. Further, the proposed product is applied, as an example, to three real flood events. The results show that it is possible to generate a consistent reference water product that is suitable for application in flood disaster response. The proposed multi-sensor approach is capable of producing reasonable results even if little or no information from optical data is available. Further, the study shows that the consideration of the seasonality of water bodies, especially in regions with highly dynamic hydrological and climatic conditions, is of paramount importance as it reduces potential over-estimation of the inundation extent and gives users a more reliable picture of flood-affected areas.
The European Ground Motion Service (EGMS), funded by the European Commission as an essential element of the Copernicus Land Monitoring Service (CLMS), constitutes the first application of the interferometric SAR (InSAR) technology to high-resolution monitoring of ground deformations over an entire continent, based on full-resolution processing of all Sentinel-1 (S1) satellite acquisitions over most of Europe (Copernicus Participating States). The first release of EGMS products is scheduled for the first quarter of 2022, with annual updates to follow.
Upscaling from existing national precursor services to pan-European scale is challenging. EGMS employs the most advanced persistent scatterer (PS) and distributed scatterer (DS) InSAR processing algorithms, and adequate techniques to ensure seamless harmonization between the Sentinel-1 tracks. Moreover, within EGMS, a Global Navigation Satellite System (GNSS) high-quality 50 km grid model is realized, in order to tie the InSAR products to the geodetic reference frame ETRF2014.
The millimeter-scale precision measurements of ground motion performed by EGMS map and monitor landslides, subsidence and earthquake or volcanic phenomena all over Europe, and will enable, for example, monitoring of the stability of slopes, mining areas, buildings and infrastructures.
The new European geospatial dataset provided by EGMS will enable and hopefully stimulate the development of other products/services based on InSAR measurements for the analysis and monitoring of ground motions and stability of structures, as well as other InSAR products with higher spatial and/or temporal resolution.
To foster as wide usage as possible, EGMS foresees tools for visualization, exploration, analysis and download of the ground deformation products, as well as elements to promote best practice applications and user uptake.
This presentation will describe all the qualifying points of EGMS. Particular attention will be paid to the characteristics and the accuracy of the realized products, ensured in such a huge production by advanced algorithms and quality checks.
In addition, many examples of EGMS products will be shown to discuss the great potential and the (few) limitations of EGMS for mapping and monitoring landslides, subsidence and earthquake or volcanic phenomena, and the related stability of slopes, buildings and infrastructures.
Operational use of Sentinel-1 data and interferometric methods to detect precursors for a volcanic hazard warning system: the case of the last eruption of the La Palma volcanic complex.
Ignacio Castro-Melgar1,2, Theodoros Gatsios2,4, Janire Prudencio1,3, Jesús Ibáñez1,3 and Issaak Parcharidis2
1Department of Theoretical Physics and Cosmos, University of Granada (Spain)
2Department of Geography, Harokopio University of Athens (Greece)
3Andalusian Institute of Geophysics, University of Granada (Spain)
4Department of Geophysics and Geothermy, National and Kapodistrian University of Athens (Greece)
1. INTRODUCTION
La Palma is the youngest island of the Canary Islands (Spain) and is situated in the NW of the archipelago. The Canary archipelago is a chain of seven volcanic islands in the Atlantic Ocean off the coast of Africa. This set of islands, islets and seamounts is aligned NE-SW and hosts a high potential risk due to its active volcanism, especially in the western and youngest islands (La Palma and El Hierro). Volcanism in the Canary Archipelago started in the Oligocene and remains active (Staudigel & Schmincke, 1984); the mechanism at its origin is still under debate by the scientific community. The most accepted models are a propagating fracture from the Atlas Mountains (Anguita & Hernán, 1975) or the existence of a hotspot or mantle plume (Morgan, 1983; Carracedo et al., 1998), among other models. In the last decades, different volcanic manifestations have occurred in the Canary archipelago, such as the seismic series of Tenerife in 2004, the reactivations and eruptions of El Hierro between 2011 and 2014, and the seismic series on La Palma in 2017, 2018, 2020 and 2021.
Volcanic activity in La Palma first originated with the formation of an underwater complex of seamounts and a plutonic complex between 3 and 4 Ma [6]. La Palma is the most volcanically active island of the Canary archipelago in historical times: seven eruptions have been reported (1585, 1646, 1677, 1712, 1949, 1971 and 2021). The last eruption, in the volcanic complex of Cumbre Vieja and currently in progress (November 2021), is causing serious consequences for the inhabitants of the island, with nearly 3000 buildings destroyed.
2. METHODOLOGY
For this study we use Sentinel-1 A/B TOPSAR (C-band) SLC products in both ascending and descending orbits. Synthetic Aperture Radar (SAR) is a powerful remote sensing system used for Earth observation (Curlander & McDonough, 1991). Two methodologies are used: conventional Differential SAR Interferometry (DInSAR) and the multi-temporal InSAR (MT-InSAR) SBAS method. DInSAR allows very precise measurement of land deformation and has applications in the field of volcanology.
Long deformation histories can be analysed using large stacks of SAR images over the same area with multi-temporal differential SAR interferometry techniques. These techniques are based on the use of permanently coherent Persistent Scatterers (PSs) and/or temporally coherent Distributed Scatterers (DSs). In urban areas there is a prevalence of PSs, allowing an individual analysis of the structures on the ground, whereas DSs share similar scattering properties and can be used together in order to analyse the deformation even in rural areas with low PS density. The Small Baseline Subset (SBAS) method belongs to the DS methods; SBAS is a multi-temporal InSAR technique for detecting deformation with millimetre precision using a stack of SAR interferograms (Virk et al., 2018).
For the DInSAR technique, two interferometric pairs were analysed: (i) 05/08/2021 and 16/09/2021 in descending orbit and (ii) 09/08/2021 and 14/09/2021 in ascending orbit. The software used for the processing was SNAP 8.0 (ESA). For the SBAS method, two datasets were analysed (ascending and descending orbits): (a) 24 Sentinel-1 A/B TOPSAR (C-band) images of relative orbit 60, from 5 May 2021 to 14 September 2021, and (b) 23 Sentinel-1 A/B TOPSAR (C-band) images of relative orbit 169, from 1 May 2021 to 16 September 2021. The datasets were processed with the GAMMA software.
3. RESULTS AND CONCLUSIONS
The wrapped DInSAR interferograms in ascending and descending orbits show fringes in the southern part of La Palma. The fringe patterns are not identical between the two orbits because they cover different periods; however, the geographical locations of the patterns coincide (the Cumbre Vieja volcanic complex in the south of the island).
The SBAS deformation velocities estimated from the ascending and descending datasets show an uplift trend of up to 5 cm in the southern area. The deformation trend shows two different stages: a first quiet period, with maximum subsidence and uplift of 1 cm, and a second period from the last days of August until the end of the studied period (mid-September), when an abrupt uplift started, reaching a maximum deformation of 5 cm.
This study shows that SAR interferometry (conventional and multi-temporal) reveals that the eruption of Cumbre Vieja in La Palma was preceded by a deformation process, an obvious symptom of volcanic unrest, and that these techniques can be used operationally in early warning systems with the aim of taking measures to mitigate volcanic risk.
4. REFERENCES
Anguita, F., & Hernán, F. (1975). A propagating fracture model versus a hot spot origin for the Canary Islands. Earth and Planetary Science Letters, 27(1), 11-19. https://doi.org/10.1016/0012-821X(75)90155-7
Carracedo, J. C., Day, S., Guillou, H., Badiola, E. R., Canas, J. A., & Torrado, F. P. (1998). Hotspot volcanism close to a passive continental margin: the Canary Islands. Geological Magazine, 135(5), 591-604. https://doi.org/10.1017/S0016756898001447
Curlander, J., McDonough, R. (1991). Synthetic aperture radar: Systems and signal processing. John Wiley and Sons. ISBN: 978-0-471-85770-9
Morgan, W. J. (1983). Hotspot tracks and the early rifting of the Atlantic. Tectonophysics, 94, 123-139. https://doi.org/10.1016/B978-0-444-42198-2.50015-8
Staudigel, H., & Schmincke, H. U. (1984). The pliocene seamount series of la palma/canary islands. Journal of Geophysical Research: Solid Earth, 89(B13), 11195-11215. https://doi.org/10.1029/JB089iB13p11195
Staudigel, H., Feraud, G., & Giannerini, G. (1986). The history of intrusive activity on the island of La Palma (Canary Islands). Journal of Volcanology and Geothermal Research, 27(3-4), 299-322. https://doi.org/10.1016/0377-0273(86)90018-1
Virk, A. S., Singh, A., & Mittal, S. K. (2018). Advanced MT-InSAR landslide monitoring: Methods and trends. J. Remote Sens. GIS, 7, 1-6. https://doi.org/10.4172/2469-4134.1000225
Tailings are the main waste stream in the mining sector and are commonly stored behind earth embankments termed Tailings Storage Facilities (TSFs). The failure of a tailings dam can cause ecological damage, economic loss and even casualties. The Tailings Dam Failure (TDF) in Brumadinho (Brazil) is one of the most recent and largest TDFs and caused at least 270 casualties, economic loss, and ecological damage. Earth observation can contribute to disaster risk reduction after TDFs throughout the different phases of the disaster management cycle by providing timely and continuous information about the situation on-site.
We exploited and compared different processing techniques for Sentinel-1 data to extract information for rapid mapping activities. As incoherent change detection algorithms, we calculated the log ratio of intensity and the intensity correlation normalised difference, while a normalised coherence difference and a multi-temporal approach were tested as instances of coherent change detection algorithms. All algorithms were tested regarding their informative value using the Receiver Operating Characteristic (ROC) curve. The analysis showed that the incoherent methods delivered a better basis for rapid mapping activities in this case, with an Area Under the Curve of up to 0.849 under a logistic regression classifier. The dense vegetation cover in this region caused low coherence values also in non-affected areas, which made the coherence-based methods less meaningful.
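A minimal sketch of how the informative value of a change-detection layer can be scored with a ROC curve, as in the comparison above; the arrays are synthetic placeholders for the real affected/unaffected reference mask and the per-pixel change metric (e.g. the intensity log ratio).

```python
# ROC / AUC evaluation of a per-pixel change metric against a reference mask.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
reference_mask = rng.integers(0, 2, size=10_000)                   # 1 = affected
change_metric = reference_mask * 0.8 + rng.normal(0, 1, 10_000)    # synthetic scores

auc = roc_auc_score(reference_mask, change_metric)
fpr, tpr, thresholds = roc_curve(reference_mask, change_metric)
print(f"Area Under the Curve: {auc:.3f}")
```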
For long-term monitoring of the vegetation cover after the TDF, the Standardized Vegetation Index (SVI) was calculated in Google Earth Engine based on 16-day Enhanced Vegetation Index data captured by the MODIS sensor. Even though the SVI is commonly used for drought monitoring, we tested its capabilities for recovery monitoring in Brumadinho. The TDF caused a severe drop in the SVI values, which have remained at a low level. The analysis shows that the vegetation cover has not yet returned to pre-TDF conditions.
The presentation focuses on the results of the Sentinel-1-based mapping approach as well as the possibilities and limitations of vegetation recovery monitoring with MODIS data, but also briefly discusses the potential of a GIS-based modelling approach to emphasise the ubiquity of geospatial data throughout the disaster management cycle regarding TDFs.
Landslides are one of the most dangerous and disastrous geological hazards worldwide, posing threats to human life, infrastructure and the natural environment. In this domain, a joint project was initiated between Politecnico di Milano, Italy, and Hanoi University of Natural Resources and Environment, Vietnam. The project is funded on the Italian side by the Ministry of Foreign Affairs and International Cooperation (MAE) and on the Vietnamese side by …... Its main focus is on the landslide phenomenon, which is relevant in both countries. The goal is to join efforts and experience in the field of geodata science, focusing on the most innovative approaches and designing and implementing sustainable new observation processing strategies. These include studying and applying new techniques for landslide susceptibility mapping through machine learning algorithms; landslide displacement monitoring through Earth observation satellite and UAV data; and citizen science applications for thematic data collection. Moreover, the project plays an important role in building new capacities, which will be transferred into the universities' teaching and professional refresher training, with a direct impact on students and an indirect influence on technology transfer outside the academic environment.
The project has now been ongoing for almost a year and has already achieved its target milestones; the main results to date can be presented in four main tracks: (1) susceptibility mapping, (2) citizen science, (3) landslide monitoring, and (4) capacity building.
1. Susceptibility mapping.
Landslide susceptibility mapping is a topic of crucial importance in risk mitigation. A machine learning approach based on the Random Forest algorithm is adopted to produce landslide susceptibility maps over two areas in Northern Lombardy, Italy (Val Tartano and Upper Valtellina). The Random Forest algorithm was employed because it has already proven its good performance in the field of landslide susceptibility analysis. As per standard procedure in susceptibility mapping, a landslide inventory (records of past events) is usually used to feed a model with information about the presence of events; however, information about absence is often neglected and is usually represented simply by areas lacking landslide records. Since absence information can be considered an important aspect, an innovative factor was introduced, namely the No Landslide Zone (NLZ), defined by geological criteria. The main aim of its introduction is to determine areas with a very low possibility of landslides. For that purpose, a threshold combining slope angle and the Intact Uniaxial Compressive Strength (IUCS) of the terrain lithology was defined:
(slope < 5°) OR [(5° < slope < 15° OR slope > 70°) AND (IUCS > 100MPa)]
Upon verification of their consistency, the NLZs showed an error margin of 1.7% for Upper Valtellina and 0.5% for Val Tartano. By these means, the model was provided with information about landslide absence in addition to that about past landslide events. The resulting susceptibility maps (e.g., Figure 1) were subsequently validated with state-of-the-art metrics, showing very satisfactory results when the NLZ was included.
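A direct sketch of the NLZ threshold stated above, applied to slope (degrees) and IUCS (MPa) raster arrays; the array names are illustrative.

```python
# No Landslide Zone mask from slope angle and Intact Uniaxial Compressive
# Strength, following the stated threshold:
# (slope < 5°) OR [((5° < slope < 15°) OR (slope > 70°)) AND (IUCS > 100 MPa)]
import numpy as np

def no_landslide_zone(slope_deg: np.ndarray, iucs_mpa: np.ndarray) -> np.ndarray:
    gentle = slope_deg < 5.0
    moderate_or_steep = ((slope_deg > 5.0) & (slope_deg < 15.0)) | (slope_deg > 70.0)
    strong_rock = iucs_mpa > 100.0
    return gentle | (moderate_or_steep & strong_rock)
```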
2. Citizen science.
A landslide inventory is always a key factor in hazard studies and as such it is crucial for it to be as complete and up to date as possible. Most of the time, inventories lack some past events, or the provided attributes are incomplete. In order to allow faster and more complete landslide data collection, an open-source thematic mobile application based on a citizen science approach was developed. The app allows any user with a mobile device to map and add information about past landslides, by sharing their location and compiling a standard geological questionnaire. Naturally, the citizens who contribute may have various levels of knowledge about the landslide phenomenon, which was taken into account in the app by offering separate questionnaires for non-expert and professional users. Two means were developed for accessing the collected data. The first is a plugin for QGIS which allows the user to directly download the collected records locally, including the landslides' locations and related information. The second is a web application which allows simple data exploration in map or tabular views (Figure 2). In addition, the web app can visualize statistics for the observations using the collected fields or create a dashboard for a specific landslide.
3. Landslide monitoring.
Whilst susceptibility studies can be of great aid in preventing threats posed by future events, active landslides need to be monitored to reduce the risk of damage and casualties. With this aim, this work proposes a way to compute landslide displacements through time by exploiting the great availability of high-quality multispectral satellite images. The developed procedure produces maps of displacement magnitude and direction by means of local cross-correlation of Sentinel-2 images (Figure 3). The Ruinon landslide, an active landslide in Upper Valtellina, was analyzed over two different time windows (yearly analysis between 2015 and 2020; monthly analysis in July, August and September 2019). The main preprocessing steps are: creating a suitable multi-temporal stack according to the AOI and cloud cover; image co-registration, to ensure that the images are spatially aligned so that any feature in one image overlaps as well as possible with its footprint in all other images of the stack; and histogram matching, to transform one image so that the cumulative distribution function (CDF) of values in each band matches the CDF of the corresponding bands in another image. The main processing is based on the Maximum Cross-Correlation procedure implemented on master-slave couples of images, as sketched below. The approach needs an optimal moving window to test whether a location (pixel) in the master image is at the corresponding location (pixel) in the slave image, or whether it is displaced within the boundaries of the search window. The outputs are shifts (in pixels) in the X and Y directions, which are the distances required to register the window of the slave image with that of the master. The spatial resolution of Sentinel-2 images can be considered somewhat coarse for the size of the landslide under consideration. However, the implemented approach captured the major displacements during the landslide's most active periods. To compare and evaluate the performance of the cross-correlation approach, products from photogrammetric point cloud comparisons (provided by the local environmental agency ARPA Lombardia), created from UAV observations in periods close to those considered for the satellite monitoring, were used.
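A hedged sketch of the maximum cross-correlation step for one master/slave window pair: the master window is searched inside a larger slave search window and the location of the normalized cross-correlation peak gives the X/Y shift in pixels. Window and search sizes are illustrative, and the pixel is assumed to lie far enough from the image borders.

```python
# Maximum cross-correlation shift for a single window pair (pixels).
import numpy as np
from skimage.feature import match_template

def window_shift(master, slave, row, col, win=16, search=8):
    """Shift (dy, dx) of the window centred at (row, col), master -> slave."""
    half = win // 2
    tmpl = master[row - half:row + half, col - half:col + half]
    big = slave[row - half - search:row + half + search,
                col - half - search:col + half + search]
    cc = match_template(big, tmpl)                # normalized cross-correlation map
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    return peak[0] - search, peak[1] - search     # zero shift -> centre of map
```

Repeating this over a grid of window centres yields the maps of displacement magnitude and direction described above.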
4. Capacity building.
In order to transfer the knowledge and experience gained from the project activities to students, joint course activities were organized between the Italian and Vietnamese partner universities, offered to 50 students from both countries. The activities comprised two preparatory webinars that presented the problem of landslides in Vietnam and Italy. In addition, practical sessions are offered to all students involved to ensure a homogeneous basic preparation adequate to face the proposed project. The project focuses on the creation of landslide susceptibility maps and their presentation in a webGIS. The purpose of the proposed student project is to analyze case studies, both in Italy and Vietnam, based on the new observation processing GIS strategies designed and implemented in the framework of the Bilateral Scientific Research project. The students are tutored jointly by Italian and Vietnamese tutors, and the outcomes of the students' work are expected to be presented during a workshop organized by the project partners.
The analysis of A-DInSAR Time Series (TS) is an important tool for ground displacement monitoring, and TS interpretation is useful to understand the kinematics of slow-moving processes (landslides) in particular and their relation with triggering factors (heavy rainfall, snow). The aim of this work is to develop a new statistical methodology that allows the classification of TS trends (uncorrelated, linear, non-linear) in large datasets from any type of satellite characterized by low or high temporal resolution of measurements; the retrieval of breaks in TS displacements for non-linear deformation; and the provision of descriptive parameters (beginning and end of the break, length in days, cumulative displacement, average rate of displacement) in order to characterize the magnitude and timing of changes in ground motion. The methodology has been tested in the Piemonte region, in north-western Italy, which is very prone to slow-moving slope instabilities. Two Sentinel-1 datasets with high temporal resolution of measurements (6-12 days) are available for this area, covering the period 2014-2020. Compared to other methods developed to examine TS, the statistical analysis in this methodology is based on the daily displacement (mm) rather than on the average velocity (mm/yr). This analysis is possible thanks to the availability of Sentinel-1 data with high temporal resolution of measurements (6-12 days), which provides a sampling frequency sufficient to track the evolution of some ground deformations and can therefore be considered as "near-real-time monitoring". Site-specific or regional event detection thresholds should be calibrated according to the geological-geomorphological processes and characteristics of the study area. Moreover, results must, where possible, also be confirmed by in situ instruments and by events already identified, since the methodology may overestimate the number of detected events. This new methodology applied to Sentinel-1 will provide a new tool both for back analysis and for near-real-time monitoring of the territory, not only as regards the characterization and mapping of the kinematics of ground instabilities but also in the assessment of hazard, risk and susceptibility, becoming a supporting tool integrated with conventional methods for planning and management of the area. Moreover, this method can be useful to understand where acceleration events occurred, furnishing a further validation of the real kinematic behaviour at each test site and indicating where further investigation is necessary. The methodology has been tested on areas prone to slow-moving landslides, but it can be applied to any area to detect any ground instability, such as subsidence.
Introduction
The paper presents the results obtained from Digital Image Correlation (DIC) analyses carried out with the intention of mapping the hazards and geological risks potentially impacting a large infrastructure project in Africa. Specifically, the processing was carried out with the aim of quantifying and understanding the rate and direction of migration of dune fields. Unstable sandy elements such as dunes can cause various problems for infrastructure. The analysis was performed with IRIS, an innovative software developed by NHAZCA S.r.l., a startup of Sapienza University of Rome, designed for PhotoMonitoring applications. The analysis was carried out using open-access multispectral satellite images provided by the ESA Sentinel constellation (Sentinel-2). PhotoMonitoring is a new monitoring solution that exploits the widespread availability of optical/multispectral sensors around the world to obtain information about changes or displacements in the terrain, making it an ideal tool for studying and monitoring surface deformation processes in the context of land and structure control. PhotoMonitoring is based on the concept of "digital image processing", i.e. the manipulation of digital images to obtain data and information. Analyses can be carried out on datasets of images acquired from the same type of platform, over the same area of interest, at different times, and can be conducted using specific algorithms that allow the evaluation of any variation in radiometric characteristics (Change Detection) and/or of the displacement that occurred in the time interval covered by the image acquisitions (Digital Image Correlation). Through these applications it is possible to study the evolution and significant changes of the observed scenario; therefore, when applied to Earth Observation, they allow better mapping of geological and hydrogeological hazards and an understanding of the evolution and causes of the ongoing processes. Different digital approaches can be used to analyze and manipulate the available images, and different types of information can be extracted depending on the type of image processing chosen, as shown by [1]. Basically, digital image processing techniques are based on extracting information about changes in the terrain by comparing different types of images (e.g. satellite, aerial or terrestrial images) collected at different times over the same area and scene [2].
Material and Methods
DIC (Digital Image Correlation) is an optical-numerical measurement technique capable of providing full-field 2D surface displacements or deformations of any type of object. The deformations are calculated by comparing and processing co-registered digital images of the surface of the same "object" collected before and after the deformation event [2]. DIC allows the displacement and deformation that occurred between two images acquired at different times to be evaluated quantitatively by analysing the different pixel blocks, achieving a resolution of up to 1/10 of a pixel (Fig. 1).
This technique is affected by environmental effects caused by different atmospheric and lighting conditions, different temperatures and problems inherent in the camera's viewing geometry. Using high-resolution, accurately positioned and aligned imagery, DIC makes it possible to identify differences, deformations and changes in the observed scenario with high accuracy. Recently, several authors have presented interesting results from the application of DIC analysis to satellite imagery for landslide displacement monitoring [3–6].
The analysis was carried out on three contiguous areas and involved the use of three different pairs of images covering a total area of approximately 30,000 square kilometres. In particular, the analysis was carried out on Sentinel-2 images, with a pansharpened resolution of 10 x 10 m, acquired over a period of one year from July 2020 to July 2021.
The IRIS software allows Digital Image Correlation (DIC) analyses to be carried out using different types of algorithms. In this case the analysis was carried out using the Phase Correlation (PC) algorithm [7] which is based on a frequency domain representation of the data, usually calculated through fast Fourier transforms, with a floating window of 16 pixels (Fig.2).
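As an illustration of the kind of frequency-domain matching named above, the following is a minimal phase-correlation sketch between two co-registered image patches (e.g. 16 x 16 pixel windows); it is not the IRIS implementation, and the parabolic sub-pixel refinement is an assumed, commonly used choice.

import numpy as np

def phase_correlation_shift(patch_ref, patch_sec, eps=1e-12):
    """Estimate the (row, col) shift of patch_sec relative to patch_ref."""
    F1 = np.fft.fft2(patch_ref)
    F2 = np.fft.fft2(patch_sec)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + eps        # keep only the phase
    corr = np.fft.fftshift(np.real(np.fft.ifft2(cross_power)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    shift = np.array(peak) - centre

    # Parabolic interpolation around the peak: this kind of refinement is what
    # allows displacement estimates at a fraction (~1/10) of a pixel.
    sub = []
    for axis, p in enumerate(peak):
        if 0 < p < corr.shape[axis] - 1:
            idx = list(peak)
            idx[axis] = p - 1; c_m = corr[tuple(idx)]
            idx[axis] = p + 1; c_p = corr[tuple(idx)]
            c_0 = corr[peak]
            denom = c_m - 2 * c_0 + c_p
            sub.append(0.5 * (c_m - c_p) / denom if denom != 0 else 0.0)
        else:
            sub.append(0.0)
    return shift + np.array(sub)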
Results and Discussion
The results obtained are displacement maps representing the position of the main dune fields and the magnitude (depicted according to a metric color scale) and direction (represented by arrows) of dune migration during the studied period. In particular, two large corridors characterized by strong southward dune movements were identified. For the northernmost corridor, the analyses allowed the assessment of an average displacement rate of about 80 m per year, with peaks of displacement up to 100 m. For the southern corridor, on the other hand, lower displacement rates were measured, averaging about 50 m per year. The analyses also showed a good correlation between the direction of displacement and the dominant wind direction for these areas (Fig. 3).
Conclusion
The PhotoMonitoring analysis presented in this paper made it possible to map the dune fields and to quantify their annual displacement rate. This analysis, carried out on open-source Sentinel-2 images with a new-generation software, IRIS, developed by NHAZCA S.r.l., a startup of Sapienza University of Rome, allowed the identification and mapping of some geological risks for a strategic infrastructure in the planning phase. The results obtained demonstrate the potential of Earth Observation techniques, and more specifically of IRIS and satellite PhotoMonitoring, now a reliable and versatile tool for monitoring and studying the impact of geohazards and geological risks such as earthquakes, landslides and floods (Fig. 4), using data from different sensors (optical, radar, laser).
[1] Ekstrom, M. P. (2012). Digital image processing techniques (Vol. 2). Academic Press.
[2] Caporossi, P., Mazzanti, P., & Bozzano, F. (2018). Digital image correlation (DIC) analysis of the 3 December 2013 Montescaglioso landslide (Basilicata, southern Italy): results from a multi-dataset investigation. ISPRS International Journal of Geo-Information, 7(9), 372.
[3] Bontemps, N., Lacroix, P., & Doin, M. P. (2018). Inversion of deformation fields time-series from optical images, and application to the long term kinematics of slow-moving landslides in Peru. Remote sensing of environment, 210, 144-158.
[4] Pham, M. Q., Lacroix, P., & Doin, M. P. (2018). Sparsity optimization method for slow-moving landslides detection in satellite image time-series. IEEE Transactions on Geoscience and Remote Sensing, 57(4), 2133-2144.
[5] Lacroix, P., Araujo, G., Hollingsworth, J., & Taipe, E. (2019). Self‐Entrainment Motion of a Slow‐Moving Landslide Inferred From Landsat‐8 Time Series. Journal of Geophysical Research: Earth Surface, 124(5), 1201-1216.
[6] Mazzanti, P., Caporossi, P., & Muzi, R. (2020). Sliding time master digital image correlation analyses of cubesat images for landslide monitoring: The Rattlesnake Hills landslide (USA). Remote Sensing, 12(4), 592.
[7] Tong, X., Ye, Z., Xu, Y., Gao, S., Xie, H., Du, Q., ... & Stilla, U. (2019). Image registration with Fourier-based image correlation: A comprehensive review of developments and applications. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(10), 4062-4081.
Wildfire is a complex Earth system process influencing the global carbon cycle and the biosphere and threatening the safety of human life and property. There are three prerequisites for wildfire: fuel availability, an ignition source and atmospheric conditions that allow the fire to spread. Vegetation, hydrologic and atmospheric conditions are considered influential by providing fuels, fire preconditions and fire intensification. It is therefore necessary and urgent to improve our understanding of wildfire in order to predict its occurrence. Many studies have focused on wildfire prediction or mapping through regression or machine learning methods. Typically, these studies were limited to regional scales, considered an insufficient number of wildfire conditions, and neglected information about the time lag between wildfire and the related conditions, and therefore provided only inaccurate predictions. In this study we applied the PCMCI approach, a causal network discovery method which in a first stage identifies relevant conditions using the PC (Peter and Clark) algorithm and in a second stage applies MCI (Momentary Conditional Independence) tests to control false positive rates, to detect causal relationships and reveal time lags between wildfire burned area and atmospheric, hydrologic as well as vegetation conditions. We built causal networks for each subregion (28 climate zones and 8 vegetation types) globally. The results show that at the global scale atmospheric and hydrologic conditions are usually dominant for wildfires, while vegetation conditions are important in several specific regions, e.g. Africa near the equator and mid-to-high latitude regions. The time lags between wildfires and vegetation conditions are larger than those for atmospheric and hydrologic conditions, which could be related to vegetation growth and fuel accumulation. Our study emphasizes the importance of taking vegetation monitoring into account when predicting wildfires, especially for longer lead-time forecasts, while for atmospheric and hydrological conditions shorter time lags should be the focus.
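A hedged sketch of how a PCMCI run of the kind described above can be set up with the open-source tigramite package; the variable names, monthly resolution and parameter values are illustrative, not those of the study, and import paths may differ between tigramite versions.

import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr   # path differs in newer tigramite releases

# Columns: burned area plus atmospheric, hydrologic and vegetation drivers for
# one subregion; rows are time steps (e.g. months). Placeholder data here.
var_names = ["burned_area", "temperature", "precipitation",
             "soil_moisture", "vegetation_index"]
data = np.random.randn(240, len(var_names))

dataframe = pp.DataFrame(data, var_names=var_names)
pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())

# tau_max bounds the candidate time lags (here up to 12 time steps);
# pc_alpha controls the PC condition-selection stage.
results = pcmci.run_pcmci(tau_max=12, pc_alpha=0.05)

# Links with sufficiently small p-values form the causal network; the lag index
# of each significant link gives the time lag discussed above.
significant_links = results["p_matrix"] < 0.01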
Earthquakes are devastating natural disasters that cause casualties and damage. After a seismic event, fast damage assessment is an important step of the post-disaster emergency response to reduce the impact of the disaster.
Within this context, remote sensing plays an important role. Optical sensor data are one possible tool owing to their simple interpretability. However, optical radiation is severely affected by cloud cover, solar illumination and other adverse meteorological conditions, which sometimes make information extraction difficult. In contrast, radar sensors ensure all-day and almost all-weather observations together with wide area coverage; the Synthetic Aperture Radar (SAR), with its fine spatial resolution imaging capabilities, can therefore be a very useful tool to observe earthquake damage.
SAR observation of damaged areas is not straightforward and is typically based on bi-temporal approaches that contrast features derived from SAR imagery collected before the earthquake with their counterparts evaluated after the earthquake [1][2][3]. Recently, features evaluated from dual-polarimetric SAR measurements have proven to be very effective and accurate in mapping earthquake-induced damage [4][5][6].
However, urban areas are inherently complex environments that trigger artifacts in the SAR image plane due to foreshortening, shadowing, or layover [7]. These issues have been shown to be mitigated when using SAR imagery collected under both ascending and descending passes.
Within this context, in this study a quantitative analysis of earthquake-induced damage is performed using dual-polarimetric (DP) SAR imagery collected under ascending and descending passes and by contrasting the SAR-derived information with ground information. First, a change detection approach based on reflection symmetry, i.e., a symmetry property that holds for natural distributed scenarios and results in uncorrelated co- and cross-polarized channels, is used to detect the changes that occurred after the earthquake. Then, an unsupervised classifier based on fuzzy c-means clustering is developed to associate changes with an appropriate damage class. Finally, the ascending and descending damage maps are combined and contrasted with the ground truth obtained from in-situ measurements. Preliminary results, obtained by processing a set of DP SAR data collected at C-band by the Sentinel-1 mission over the Central Italy area affected by the 2016 earthquake, show that the joint use of datasets collected in ascending and descending orbits improves the results in terms of overall accuracy.
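An illustrative sketch (not the authors' processor) of the two ingredients described above: a reflection-symmetry feature computed from the co- and cross-polarized channels, followed by an unsupervised fuzzy c-means clustering of its pre/post-event change. Window size, number of classes and the tiny clustering routine are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def reflection_symmetry_feature(s_co, s_cross, win=5):
    """Normalized magnitude of the local correlation between co- and
    cross-polarized channels; close to zero over undisturbed natural targets."""
    prod = s_co * np.conj(s_cross)
    num = uniform_filter(np.real(prod), win) + 1j * uniform_filter(np.imag(prod), win)
    den = np.sqrt(uniform_filter(np.abs(s_co) ** 2, win) *
                  uniform_filter(np.abs(s_cross) ** 2, win)) + 1e-12
    return np.abs(num) / den

def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=100):
    """Tiny 1-D fuzzy c-means; returns cluster centres and memberships."""
    rng = np.random.default_rng(0)
    centres = rng.choice(x, n_clusters)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        centres = (u ** m).T @ x / np.sum(u ** m, axis=0)
    return centres, u

# Typical use: change in reflection symmetry between pre- and post-event
# acquisitions, grouped into damage classes by the fuzzy memberships:
#   delta = np.abs(feature_post - feature_pre).ravel()
#   centres, memberships = fuzzy_cmeans(delta)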
[1] C. Yonezawa and S. Takeuchi, “Decorrelation of SAR data by urban damages caused by the 1995 Hyogokennanbu earthquake,” International Journal of Remote Sensing, vol. 22, no. 8, pp. 1585–1600, 2001.
[2] W. Manabu, T. R. Bahadur, O. Tsuneo, F. Hiroyuki, Y. Chinatsu, T. Naoya, and S. Sinichi, “Detection of damaged urban areas using interferometric SAR coherence change with PalSAR-2,” Earth, Planets and Space, vol. 68, no. 1, pp. 131, July 2016.
[3] S. Stramondo, C. Bignami, M. Chini, N. Pierdicca, and A. Tertulliani, “Satellite radar and optical remote sensing for earthquake damage detection: Results from different case studies,” Int. J. Remote Sens., vol. 27, no. 20, pp. 4433–4447, 2006
[4] E. Ferrentino, F. Nunziata, M. Migliaccio, and A. Vicari, “A sensitivity analysis of dual-polarization features to damage due to the 2016 Central-Italy Earthquake,” Int. J. Remote Sens., vol. 0, no. 0, pp. 1–18, 2018.
[5] E. Ferrentino, A. Marino, F. Nunziata, and M. Migliaccio, “A dual–polarimetric approach to earthquake damage assessment,” Int. J. Remote Sens., vol. 40, no. 1, pp. 197–217, 2019.
[6] E. Ferrentino, F. Nunziata, C. Bignami, L. Graziani, A. Maramai, and M. Migliaccio, “Multi-polarization c-band sar imagery to quantify damage levels due to the central italy earthquake,” International Journal of Remote Sensing, vol. 42, no. 15, pp. 5969–5984, 2021.
[7] T.M. Lillesand, R.W. Kiefer, J.W. Chipman, “Remote Sensing and Image Interpretation”, 7th ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015
Unstable slopes in critical infrastructures such as reservoirs often lead to risky situations that may imply large material, economic and even human losses. Remote sensing techniques have proven to be very useful tools to avoid or minimize these disasters. One of these techniques is satellite radar interferometry (InSAR), which is capable of detecting millimetre-scale movements of the ground at high spatial and temporal resolution.
A significant improvement for InSAR comes from the recent C-band sensors on board the Sentinel-1A and Sentinel-1B satellites. Sentinel-1 has improved data acquisition and analysis, as its images are free of charge and offer wide area coverage at high temporal resolution (6-day sampling) and high accuracy (up to 1 mm/year). Other initiatives, such as the European Space Agency (ESA)'s Geohazards Exploitation Platform (GEP), have also brought a meaningful advance for satellite Earth Observation (EO), especially for users without the capability to perform independent InSAR processing. GEP enables the exploitation of satellite images by providing several automatic InSAR processing services/thematic apps, mainly for geohazard monitoring and management. The Sentinel-1 CNR-IREA SBAS service is one of the GEP thematic apps and consists of a processing chain for the generation of displacement time series and mean displacement velocity maps.
In this work, we made use of the CNR-IREA SBAS GEP service to perform InSAR analyses in one of the most critical infrastructures of southern Spain: the Rules Reservoir. We detected three active landslides within the slopes of the reservoir: the Lorenzo-1 Landslide, the Rules Viaduct Landslide and the El Arrecife Landslide. The first two are rotational landslides (the surface of rupture is curved) and they affect the N-323 National Road and the southern abutment of the Rules Viaduct (Highway A-44), respectively. The InSAR displacement rates are up to 2 cm/yr for the Lorenzo-1 Landslide and up to 2.5 cm/yr for the Rules Viaduct Landslide. Furthermore, the time series (TS) of accumulated displacement of both landslides show a correlation with changes in the water level of the reservoir: the movement accelerates when the reservoir water level declines.
On the other hand, the El Arrecife Landslide has a translational character (the surface of rupture is planar) and therefore presents a potential hazard of experiencing a critical acceleration and a partial or total rupture of the slope. This would cause a slide mass to collapse into the reservoir, which would have devastating consequences (for example, a massive flash flood downstream). InSAR was the technique that first revealed the existence of this landslide, with a mean displacement rate of 2-2.5 cm/yr, reaching up to 6 cm/yr at the landslide's foot. Because of its potential hazard for the reservoir, we applied other techniques to further characterise the landslide: geological and geomorphological mapping, kinematic analysis for slope instability, volume estimation of the landslide, photogrammetry, and geophysical techniques (Ground Penetrating Radar). Through the latter, we estimated a vertical movement of the landslide of around 2 cm/yr, which correlates well with the rate obtained by InSAR. As with the other landslides, the movement of the El Arrecife Landslide foot accelerates when the reservoir water level declines.
With the data presented, we provide a first view of the nature and displacement of these landslides, as well as the hazard they pose to the Rules Reservoir. We therefore consider it essential to keep monitoring the landslides through InSAR and other in-situ monitoring techniques. In this way, possible pre-failure precursors of a rapid acceleration could be identified far enough in advance to avoid irreversible damage to the reservoir and related infrastructures. Continuous monitoring of the landslides is key to a suitable and safe management of the reservoir, especially for water discharges.
This work has been developed in the framework of the RISKCOAST project (SOE3/P4/E0868), financed by the Interreg Sudoe Program (3rd call of proposals) of the European Regional Development Fund (ERDF).
Mapping landslides after major triggering events (earthquakes, large rainfall) is crucial for disaster response and hazard assessment, as well as for building benchmark inventories on which landslide models can be tested. Numerous studies have already demonstrated the utility of very-high-resolution satellite and aerial images for the elaboration of inventories based on semi-automatic methods or visual image interpretation. However, while manual methods are very time consuming, faster semi-automatic methods are rarely used in operational contexts, partly because of data access restrictions on the required input (i.e. VHR satellite images) and the absence of dedicated services (i.e. processing chains) available to the landslide community.
From a data perspective, the free access to the Sentinel-2 and Landsat-8 missions offers opportunities for the design of an operational service that can be deployed for landslide inventory mapping at any time and anywhere on Earth. From a processing perspective, the Geohazards Exploitation Platform (GEP) of the European Space Agency (ESA) provides access to processing algorithms in a high-performance computing environment. And, from a community perspective, the Committee on Earth Observation Satellites (CEOS) has targeted the take-off of such a service as a main objective for the landslide and risk community.
Within this context, we present a largely automatic, supervised image processing chain for landslide inventory mapping. The workflow includes:
- A segmentation step, whose performance is optimized in terms of precision and computing time and with respect to the input data resolution;
- A feature extraction step, consisting of the computation of a large set of features (spectral, textural, topographic, morphometric) for the candidate segments to be classified;
- A per-object classification step, based on the training of a random-forest classifier from a sample of manually mapped landslide polygons (a minimal sketch of these steps follows the list).
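A hedged end-to-end sketch of the three steps above, built from common open-source components (scikit-image, scikit-learn) rather than the actual ALADIM implementation; segmentation parameters, feature choices and the way training labels are derived are all illustrative assumptions.

import numpy as np
from skimage.segmentation import slic              # requires scikit-image >= 0.19 for channel_axis
from sklearn.ensemble import RandomForestClassifier

def segment_and_classify(image, training_mask):
    """image: (rows, cols, 4) array with blue, green, red, NIR bands;
    training_mask: same grid, 1 = landslide, 0 = other, -1 = unlabelled."""
    # 1) segmentation into candidate objects
    segments = slic(image, n_segments=5000, compactness=10, channel_axis=-1)

    # 2) per-segment features: band means and a simple NDVI statistic
    ndvi = (image[..., 3] - image[..., 2]) / (image[..., 3] + image[..., 2] + 1e-6)
    ids = np.unique(segments)
    feats = np.array([np.r_[image[segments == i].mean(axis=0),
                            ndvi[segments == i].mean()] for i in ids])

    # 3) random-forest classification trained on manually mapped polygons;
    #    a segment is labelled landslide if it overlaps any mapped polygon
    labels = np.array([training_mask[segments == i].max() for i in ids])
    train = labels >= 0
    clf = RandomForestClassifier(n_estimators=500, class_weight="balanced", n_jobs=-1)
    clf.fit(feats[train], labels[train])
    return ids, clf.predict_proba(feats)[:, 1]      # per-segment landslide probability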
The service is able to process both HR (Sentinel-2 or Landsat-8) and VHR (Pléiades, SPOT, Planet, GeoEye, or any multispectral image with four bands: blue, green, red, NIR) sensors. The service can be operated in two modes (bi-date and single-date): the bi-date mode is based on change detection methods with images before and after a given event, whereas the single-date mode allows a mapping of land cover at any given time.
The service is presented on use cases with both medium-resolution (Sentinel-2, Landsat-8) and high-resolution (SPOT-6/7, Pléiades) images corresponding to landscapes recently impacted by landslide disasters (e.g. Haiti, Mozambique, Kenya). The landslide inventory maps are provided with uncertainty maps that allow identifying areas that might require further consideration.
Although the initial focus and main usage of ALADIM is landslide analysis, there is a large panel of possible applications. The processing chain has already been tested in several other contexts (urbanization, deforestation, agricultural land change, …) with very promising results.
Many cities are built on or near active faults, which pose seismic hazard and risk to the urban population. This risk is exacerbated by city expansion, which may obscure signs of active faulting. Here we estimate the risk to two major cities along the northern Tien Shan: Bishkek, the capital of Kyrgyzstan with a population of just under one million, and Almaty, Kazakhstan's largest city with over 2 million inhabitants. Major faults of the Tien Shan, Central Asia, have long repeat times but fail in large (Mw 7+) earthquakes. In addition, there may be smaller, buried faults off the major faults that are not properly characterized or even recognized as active. These all pose hazard to cities along the mountain range front. We explore the seismic hazard and risk for this pair of major cities by devising a suite of realistic earthquake scenarios based on historic earthquakes in the region and improved knowledge of the active faulting. We use previous literature and fault mapping, combined with new high-resolution digital elevation models, to identify and characterise faults that pose a risk to the cities. By making high-resolution Digital Elevation Models (DEMs) from SPOT and Pléiades stereo optical satellite imagery, we identify fault splays near and under Almaty. We assess the feasibility of using DEMs to estimate city building heights, aiming to better constrain future exposure datasets. Both Pléiades- and SPOT-derived DEMs recover accurate heights for the majority of sampled buildings within error. For Bishkek, we model historical events and hypothetical events on a variety of faults that could plausibly host significant earthquakes. This includes proximal, recognised faults as well as a fault under folding in the north of the city that we identify using satellite DEMs. We then estimate the hazard (ground shaking), damage to residential buildings and losses (economic cost and fatalities) using the Global Earthquake Model OpenQuake engine. In both cases, we find that even moderately sized earthquake ruptures on faults running along or beneath the cities have the potential to damage ten thousand buildings and cause many thousands of fatalities. This highlights the importance of characterizing the location, extent, geometry and activity of small faults beneath cities.
The combined effects of extreme rainfall events and anthropogenic activities are increasing the landslide hazard worldwide. Predicting in advance when and where a landslide will occur is an ongoing scientific challenge, related to an accurate analysis in time and space of the landslide cycle and a thorough understanding of all associated triggering factors. Between mid-March and the beginning of April 2019, almost the whole of Iran was affected by intense record rainfall leading to thousands of slope failures. In particular, a catastrophic landslide occurred in Hoseynabad-e Kalpush village, Semnan, Iran, where more than 300 houses were damaged, of which 160 were completely destroyed. Several questions were raised in the aftermath of the disaster as to whether the landslide was triggered by the heavy precipitation alone or by the additional load and seepage from the nearby dam built in 2013 on the opposite side of the slope.
In this study, we use a multi-scale and multi-sensor data integration approach based on satellite and in-situ observations to investigate the pre-, co- and post-failure phases of the Hoseynabad-e Kalpush landslide and assess the role of potential external factors in triggering the disaster. Multi-temporal SAR interferometry observations detected precursory deformation on the lower part of the slope that started in April 2015, accelerated in January 2019 following the exceptional rainy season, and culminated in a slope failure, measured with an optical cross-correlation technique, of more than 35 m in the upper part. Subsequently, the lower and middle sections of the landslide showed instability with a maximum cumulative displacement of 10 cm in the first 6 months. To evaluate the role of meteorological and anthropogenic conditions in promoting the slope instability, we integrate the geodetic observations with a 20-year rainfall dataset from the Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), daily in-situ records of the dam reservoir water levels available from September 2014 until August 2019, and cloud-free Landsat-8 images acquired from April 2013 onwards, integrated with Shuttle Radar Topography Mission elevation data to indirectly estimate the dam water levels prior to the recorded period.
The observed pre-failure displacements are a clear indication of the gradual weakening of the shear strength along a pre-existing shear surface, or of ductile deformation within a shear zone, which led to the failure. The initiation of the creep followed the reservoir refilling cycle of 2015, while, apart from the final acceleration phase, no clear correlation with precipitation was observed. The hydraulic gradient due to the dam water level generated a water flow through the porous soil, with field evidence of leakage and piping processes, which permanently altered the hydraulic conditions and therefore the mechanical properties of the terrain. Under these already aggravated hydraulic conditions, cumulative rainfall acted on one side by further increasing the reservoir water level, and therefore the gradient, and on the other by raising the excess pore water pressure on the slope and acting as an additional driving weight.
While the location of deep-seated landslides can be predicted using only remote sensing geodetic measurements, the timing of failure still cannot be predicted reliably, especially for slopes where several external factors interact. The Hoseynabad-e Kalpush landslide case study is also relevant for other parts of the world where artificial reservoirs might act as triggering factors for slope instability.
Protecting the population and their livelihood from natural hazards is one of the central tasks of the Swiss state. Efficient prevention, preparation and intervention measures can be used to prevent, or at least limit, potential material damage and fatalities resulting from natural hazards. Warnings and alerts are particularly cost-effective instruments for reducing damage, as they allow emergency personnel and the population to take the prepared measures.
The Swiss Federal Office of Topography (swisstopo) therefore procures processed InSAR data to detect any changes in the terrain of the whole of Switzerland.
The object of the service is the procurement of processed InSAR data for the entire perimeter of Switzerland. The data provided by the Sentinel-1 (S1) SAR satellite constellation as part of the European Union’s Copernicus Earth observation programme are processed as the data basis for the Swiss-wide monitoring of surface motion.
The service implementation includes the analysis of all available historical S1 data, from 2014 up to November 2020, followed by annual updates at least up to 2023. The frequency of the periodic updates could increase, up to monthly, if needed or considered valuable by swisstopo.
The area of interest covers Switzerland and Liechtenstein, including a 5 km buffer, for a total surface of approximately 50'000 km2.
This area is covered by five different S1 tracks, two ascending and three descending, from October 2014 up to now. The approximate number of acquisitions per track is about 300, with a 6-day revisit time and regular sampling with no data gaps starting from November 2015.
The end-to-end workflow of the production chain includes the following steps:
- S1 Data Ingestion, transferring S1 data from external repositories into the service storage facilities;
- Core Processing;
- Quality Control procedures for ensuring product quality before delivering the results to swisstopo.
Southern Switzerland is characterized by prominent topography, as it includes more than 13% of the Alps, comprising several peaks higher than 4'000 m above sea level; in fact, the Alps cover 60% of Switzerland. Therefore, a preliminary analysis has addressed the creation of layover and shadow maps for each S1 relative orbit, considering both the ascending and descending geometries. This step helps identify the portions of the study area where the combination of topography and satellite acquisition geometry does not allow information to be retrieved with InSAR techniques.
Additionally, the vast mountainous areas are often affected by seasonal snow cover, which in turn affects S1 interferometric coherence over long periods, resulting in loss of data for parts of the year. To handle the periodic data decorrelation or misinterpretation of the phase information during the snow period, a specific strategy to correctly treat these circumstances has been designed.
The Core Processing is responsible for the generation of all required products, operating on S1 and ancillary data. The deformation products will be obtained by exploiting a combination of the Small BAseline Subset (SBAS) and Persistent Scatterer Interferometry (PSI) methods, in order to estimate the temporal deformation at both distributed scatterers (DS) and point-like persistent scatterers (PS). In the following, the terms low-pass (LP) and high-pass (HP) will be used to denote the low spatial resolution and residual high spatial frequency components of the signals related to both deformation and topography.
The role of the SBAS technique is twofold: on the one hand, it will provide the LP deformation time series at DS points and the LP DEM-residual topography; on the other hand, the SBAS will estimate the residual atmospheric phase delay still affecting the interferometric data after the preliminary correction carried out by leveraging GACOS products and ionospheric propagation models.
The temporal displacement associated with PS points will be obtained by applying the PSI method to interferograms previously calibrated by removing the LP topography, deformation and residual atmosphere estimated by the SBAS technique. This strategy connects the PSI and SBAS methods, ensuring consistency of the deformation results obtained at point-like and DS targets, and therefore provides better results than executing the two methods independently of each other.
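A hedged sketch of the calibration step just described: the low-pass components estimated by SBAS are removed from each interferogram before the PSI analysis of the residual high-pass signal. The array names, and the choice to rewrap the residual, are illustrative assumptions rather than the project's actual implementation.

import numpy as np

def calibrate_interferogram(ifg_phase, lp_defo_phase, lp_topo_phase, atmo_phase):
    """All inputs are phase arrays (radians) on the same grid for one
    interferometric pair; the output is the calibrated phase handed to PSI."""
    residual = ifg_phase - lp_defo_phase - lp_topo_phase - atmo_phase
    # rewrap into (-pi, pi] so the PSI step operates on wrapped residuals
    return np.angle(np.exp(1j * residual))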
A key aspect considered in the framework of the project implementation is the estimation and correction of atmospheric effects affecting the area, which are generally more evident over the mountainous areas.
An initial correction is applied to each interferogram through the Generic Atmospheric Correction Online Service for InSAR (GACOS), which utilizes the Iterative Tropospheric Decomposition model to separate stratified and turbulent signals from the tropospheric total delays and generates high-spatial-resolution zenith total delay maps to be used for correcting InSAR measurements. This atmospheric calibration procedure is intended as a preliminary correction that will later be refined by the data-driven atmospheric delay estimation, in order to obtain atmospheric delay maps at a much higher spatial resolution than achievable using external data based on numerical weather prediction, such as GACOS.
GNSS data provided by swisstopo, consisting of more than 200 points over Switzerland, are used for product calibration and later for result validation during the quality control procedure.
The generated products consist of:
- Line-of-Sight (LOS) surface deformation time series for ascending and descending datasets in SAR geometry (Level 2a);
- Line-of-Sight (LOS) surface deformation time series for ascending and descending datasets in map geometry (Level 2b);
- Combination and projection of the deformation results obtained from the overlapping ascending and descending datasets to calculate vertical and east-west deformations starting from the LOS results (Level 3); a minimal sketch of this decomposition follows the list.
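A hedged sketch of the Level 3 decomposition: at pixels where ascending and descending results overlap, the two LOS velocities are combined into vertical and east-west components under the usual assumption that the north-south component is negligible for near-polar SAR orbits. The function and the example geometry values are illustrative, not the project's implementation.

import numpy as np

def decompose_los(v_los_asc, v_los_desc, unit_asc, unit_desc):
    """Solve for the vertical and east-west velocity at one pixel.

    unit_asc / unit_desc are the (east, up) components of the ground-to-satellite
    LOS unit vector for each geometry; the north component is neglected."""
    A = np.array([unit_asc, unit_desc])             # 2 x 2 design matrix
    east, up = np.linalg.solve(A, np.array([v_los_asc, v_los_desc]))
    return up, east

# Illustrative example with assumed Sentinel-1-like geometry: ascending LOS
# pointing roughly east-up, descending roughly west-up (values in mm/yr).
up, east = decompose_los(v_los_asc=-4.0, v_los_desc=-6.0,
                         unit_asc=(0.6, 0.77), unit_desc=(-0.6, 0.77))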
The quality control (QC) procedures are divided into automatic QC and operator QC. The automatic QC includes the analysis of point-wise indicators (coherence maps, precision maps, point density, deformation RMSE with respect to a smooth fitting model), quality indicators at sparse locations (comparison with GNSS data, consistency of stable targets) and other quality indicators (short-time interferogram variograms before and after atmospheric calibration, consistency of overlapping areas). The additional operator QC focuses on a visual assessment of the reliability / realism of the deformation maps, leveraging also a priori knowledge about the expected deformation behaviour.
The results of this service will then be delivered to swisstopo, which will manage the possibility of sharing the deformation maps through its national geo-portal.
In the last decades, satellite remote sensing has played a key role in Earth Observation as an effective monitoring tool applied to geohazard identification and mitigation in a global observation framework. Space-borne SAR data and, in particular, the differential interferometry (InSAR) technique are very useful for the analysis of long-term or co-seismic crustal movements, for the identification of landslides and subsidence, as well as for determining the current state of magmatic/volcanic systems. Ground displacements can be better estimated by processing a long stack of images with multi-temporal InSAR algorithms such as SqueeSAR®, one of the most advanced techniques for ground deformation analysis. In volcanology, considering the difficulty of carrying out in-situ analyses and the hazardous phenomena acting over wide spatial and temporal scales, SqueeSAR® can provide unparalleled information on unrest, co-eruptive deformation and flank motion. Interferometry is also a powerful tool to monitor the evolution of the deformation over a wide scale range during the eruption days and to predict the volcano behaviour.
In this work, a ground deformation analysis, derived from the Sentinel-1 constellation dataset by means of the SqueeSAR® algorithm, was carried out over the Cumbre Vieja volcano, located in the western part of La Palma Island, in the Canary archipelago. The volcano erupted on 19 September 2021, after a seismic swarm. The eruption formed a complex cinder cone produced by fire-fountain activity and fed several lava flows affecting over 1000 hectares, devastating and burying hundreds of buildings and properties and causing high direct and indirect economic losses.
The final goal is to understand whether it is possible to identify signals related to the rise of magma inside the volcanic edifice, and therefore to define precursor signals of the eruptive activity. In addition, classical DInSAR allowed us to determine the massive deformation triggered during the eruptive episode, reaching more than 30 cm in the satellite Line-Of-Sight (LOS) in 6 days in the area close to the fissure vents.
Analyzing the deformation of the volcano in the year preceding the eruption, the results of the analyses allow us to assert that the ground displacements can be considered precursors of the eruption, both in the long and in the short term, making it possible to identify the phases of magma ascent up to the opening of the eruptive vent.
There are many geotechnical risks involved in the operation of a Tailings Storage Facility (TSF). Although usually designed to withstand the tremendous pressure exerted by the deposition of material against their dam walls, TSFs are structures at risk of experiencing sinkholes on the crest of the dam or bulging of the toe due to this exerted pressure. Additionally, depending on the moisture content of the tailings, seepage, overtopping and destruction of liner integrity can pose additional risks. Using a combination of Earth Observation data, SkyGeo has developed an integrated monitoring service for TSFs. The data include interferograms, coherence and amplitude maps generated from high-resolution X-band satellites, as well as high-resolution optical and open-source multispectral data.
This service comprises reports that are generated every 4-7 days using multi-orbit SAR imagery processed with SkyGeo's proprietary InSAR software. The phase information is used to estimate displacements that are further decomposed into displacement maps indicating vertical subsidence or east-west motion. Coherence maps are used to track the integrity of the dam walls and as an early proxy for dam breach situations. Using the SAR amplitude data combined with the multispectral information from Sentinel-2, the distance or extent of the tailings water accumulation from the dam is computed. This serves as an indicator for potential overtopping incidents or any seepage from the facility. Finally, orthoimagery is acquired on a quarterly basis by the high-resolution optical satellite Pléiades to provide context for the monitoring service.
The EO data are integrated into the operations and safety management of the TSF by means of a risk report. The data are checked by SkyGeo against thresholds based on engineering criteria and historical baselines of movement. This is then communicated to the mining staff to provide timely actionable insights. In this way, SkyGeo's use of EO data for risk management provides additional oversight and acts as a first line of defence in the safety and management of the tailings facility.
Today, remote sensing is key for the identification, quantification and monitoring of natural hazards. Recent developments in data collection techniques are producing imagery at previously unprecedented spatial, spectral, radiometric and temporal resolution. The advantages of using remotely sensed data vary by topic, but generally include safer evaluation of unstable and/or inaccessible regions, high spatial resolution, spatially continuous and multi-temporal mapping capabilities (change detection) and automated processing possibilities. Of course, as with every method, there are also disadvantages involved in the use of remotely sensed data. These generally relate to the lack of ground-truth data available during an analysis and to data acquisition costs.
Here we present the use of remote sensing for snow avalanche detection. During the winter season, snow avalanches pose a risk to settlements and infrastructure in mountainous regions worldwide. Avalanches affect populated areas and parts of the transport network every year, leading to damage to buildings and infrastructure and sometimes also to the loss of lives. Avalanche observations are among the most prized information that avalanche forecasters seek to form their opinion on the avalanche hazard. Unfortunately, we are only aware of a small fraction of avalanche occurrences. Novel applications using new Earth Observation satellite capabilities are therefore important tools to detect and map avalanches and to characterize avalanche terrain. Detection, mapping and characterisation of avalanches are important for expanding avalanche data inventories, which enable the validation and quality assessment of avalanche danger warnings issued by avalanche warning services.
For an avalanche expert, it takes several hours to visually inspect and map individual avalanche paths. At times this task cannot be accomplished until several days after an avalanche event. Several earlier studies have shown that data from space-borne optical sensors as well as from radar sensors can be used to detect and map avalanche debris. Being able to remotely detect and record avalanche releases helps target mitigation strategies. While forecasts for avalanche risk management rely mainly on meteorological forecasts, snow cover observations and expert knowledge, satellite-based remote sensing has a large potential in now- and hind-casting. The area covered by remote sensing approaches can range from regional to local and stretch over areas where traditionally such measurements are both difficult and time-consuming, or areas that are not accessible at all for in-situ observations.
Here we present the results of several studies on how the analysis of satellite data can yield hind-cast avalanche inventory observations on a regional scale. We have explored the use of imagery from high-resolution and very-high resolution optical satellite data (WorldView, QuickBird, Pléiades) and high-resolution SAR data (Radarsat-2, Sentinel-1), applying automated image segmentation and classification. The results are validated by manual expert mapping.
The country of El Salvador lies on a tectonically active subduction margin with high deformation rates. However, other deformation phenomena dominate the signal detectable by geodetic techniques in certain areas. Identifying active deformation processes such as landslides, which have caused many casualties in the past, is crucial for the safety of people living in these areas. To date, no study has attempted to broadly recognise non-tectonic deforming areas within the whole country using geodetic data.
Here we use satellite interferometric synthetic aperture radar (InSAR) data to identify ongoing ground deformation across El Salvador. ESA's Sentinel-1 SAR images have been processed using the web-based Geohazards Exploitation Platform (GEP), specifically through the P-SBAS (Parallel Small BAseline Subset) processing chain. In total, seven years of data have been processed for each geometry (ascending and descending), covering the whole Sentinel-1 period up to November 2021. The results are then analysed using the ADAtools in order to automatically identify active deformation areas (ADAs) and classify them according to the natural or anthropogenic causative phenomenon, analysing the behaviour of the deformation signal together with geological and other ancillary information of the study area (digital elevation models, inventories of different geohazards, cadastral inventories, etc.). This is followed by a manual supervision. Thus, we identify several ADAs affected by different proposed deformation phenomena, such as landslides, consolidation settlements, land subsidence or subsidence related to geothermal exploitation. We also detect ground deformation potentially related to volcanic activity on the Izalco and San Miguel volcanoes.
We further validate the InSAR time series by comparing them with 8 permanent GNSS stations across El Salvador.
Recognising previously unknown processes will help future studies to focus on these areas. This information can be useful for identifying stable areas across the country, allowing better interpretation of other data such as GNSS time series. Moreover, eventual monitoring of these phenomena can be of great importance for decision-makers in urban planning and risk prevention policies.
This work has been developed in the framework of project PID2020-116540RB-C22 funded by MCIN/ AEI/10.13039/501100011033 and project CGL2017-83931-C3-3-P funded by MCIN/ AEI/10.13039/501100011033 and by “ERDF A way of making Europe”, as well as under the Grant FPU19/03929 funded by MCIN/AEI/10.13039/501100011033 and by “FSE invests in your future”.
Earthquakes and extreme weather events are responsible for triggering populations of catastrophic landslides in mountainous regions, which can damage infrastructure and cause fatalities. In the last decade, an exceptionally high number of fatal landslides was observed after the cloudburst event in North India (2013), the Nepal earthquake (2015), the Hokkaido Iburi-Tobu earthquake (2018) and Storm Alex in the French-Italian Alps (2020), among many other events that forced civil defence authorities to quickly map event landslides over large regions for planning an effective disaster response. These mapping efforts were aided by the increased availability of Earth observation (EO) images from many satellites orbiting on agile platforms or in large constellations, combined with the coordinated efforts of the members of The International Charter Space and Major Disasters. It is now possible to obtain data from the affected region within a couple of hours. Synthetic aperture radar (SAR) sensors can even provide data sensed through clouds during bad weather conditions. However, the landslide mapping process is still predominantly dependent on visual interpretation or semi-automated methods, which can cause a delay of a few days to many months until a near-complete inventory is available. Hence, there is an increased need for a data-agnostic method for rapid landslide mapping. In recent years, deep-learning-based methods have shown unprecedented success in image classification and segmentation tasks and have been adopted for mapping landslides in several scientific studies. However, most of these studies rely on an already existing large inventory for training the deep-learning models, making such methods unsuitable for a rapid mapping scenario.
This work presents an active learning workflow to generate a landslide map from the first available post-event EO data. The proposed method is a multi-step process in which we start with an incomplete inventory covering a small region. In subsequent steps, we increase the coverage and accuracy of the landslide map with feedback from an expert operator. We apply our method to map landslides triggered by the Hokkaido Iburi-Tobu earthquake (Japan), which occurred on 5 September 2018. In the following days, the affected region was covered with clouds, which prevented the acquisition of useful data from optical satellites. Hence, we used ALOS-2 SAR data, which were available one day after the event. Our results indicate that an active learning workflow shows a small reduction in performance compared to a traditionally trained model but eliminates the need for a large inventory for training, which is a bottleneck in rapid mapping scenarios.
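A schematic sketch of the kind of active-learning loop described above. A scikit-learn classifier stands in for the deep-learning segmentation model, synthetic arrays stand in for image-derived features, and the expert-labelling step is simulated; none of the names or parameter values come from the original work.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
labelled_X = rng.normal(size=(200, 8))              # features of the initial, small inventory
labelled_y = rng.integers(0, 2, size=200)           # 1 = landslide, 0 = background
pool_X = rng.normal(size=(5000, 8))                 # unlabelled candidate samples

for iteration in range(5):
    model = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
    model.fit(labelled_X, labelled_y)

    # Rank unlabelled samples by the entropy of the predicted landslide
    # probability and send the most uncertain ones to the expert operator.
    p = model.predict_proba(pool_X)[:, 1]
    entropy = -(p * np.log(p + 1e-9) + (1 - p) * np.log(1 - p + 1e-9))
    query = np.argsort(entropy)[-100:]

    # In practice the operator maps landslides in the queried tiles;
    # here the expert feedback is simulated with random labels.
    new_y = rng.integers(0, 2, size=query.size)

    labelled_X = np.vstack([labelled_X, pool_X[query]])
    labelled_y = np.concatenate([labelled_y, new_y])
    pool_X = np.delete(pool_X, query, axis=0)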
We present Sentinel-1 measurements of uplift at Sangay volcano, Ecuador, during its recent period of eruptive activity. This most recent eruptive episode began in May 2019 and continues through December 2021, and is characterized by 1-10 km high ash plumes, lava flows several km long, and pyroclastic flows emitted from the summit. The volcano is remote and surrounded by rainforest, limiting access to install ground-based monitoring stations. However, local communities are affected by lahars and ash, and distant populations have also been affected by the impact of volcanic ash on infrastructure and air traffic. In Ecuador, Synthetic Aperture Radar Interferometry (InSAR) is an especially useful technique for monitoring large-scale surface deformation at remote volcanoes, and is an essential complement to ground-based instruments, providing constraints on magma locations and volumes.
We present Sentinel-1 and TerraSAR-X measurements at Sangay volcano, Ecuador, spanning a period of intense eruption in September 2020. Sentinel-1 time series between August 2019 and September 2020, from 60 descending and 40 ascending images, show persistent uplift through this period of eruption, reaching a maximum line-of-sight uplift of 70 mm. We use weather models to mitigate atmospheric contributions to the phase, and focus our analysis on two particularly large explosions on 08 June and 19 September 2020. Our preliminary modelling is consistent with a deformation source steadily increasing in volume located within the volcano's edifice.
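The preliminary modelling above is described only as a volume-increase source within the edifice; as an illustration of this class of model, the following is a point-pressure (Mogi) forward model, which is a common first-order choice for such signals. This is an assumption for illustration, not necessarily the model used in the study, and the example numbers are arbitrary.

import numpy as np

def mogi_surface_displacement(x, y, depth, d_volume, nu=0.25):
    """Vertical and radial surface displacement (metres, if inputs are in metres
    and d_volume in cubic metres) for a point source of volume change d_volume
    at the given depth in an elastic half-space with Poisson's ratio nu."""
    r = np.hypot(x, y)
    c = (1.0 - nu) / np.pi * d_volume
    R3 = (r ** 2 + depth ** 2) ** 1.5
    uz = c * depth / R3                              # uplift
    ur = c * r / R3                                  # radial (horizontal) displacement
    return uz, ur

# Example: 5e6 m^3 volume increase at 2 km depth, evaluated 3 km from the source
uz, ur = mogi_surface_displacement(3000.0, 0.0, 2000.0, 5e6)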
On 14 August 2021, a Mw 7.2 earthquake struck the Caribbean nation of Haiti. It had a ~10 km deep hypocenter near Petit-Trou-de-Nippes, approximately 125 km west of the capital, Port-au-Prince. A preliminary ground survey revealed that this event induced hundreds of landslides. Most of the landslide activity was centered around the Pic Macaya National Park area. We utilized both synthetic aperture radar (SAR) and optical imagery to generate rapid response products within one day of the event. We used the Semi-Automatic Landslide Detection (SALaD) system to map landslides that were visible in the Sentinel-2 imagery. However, pervasive cloud cover was an issue in most areas, in part due to Tropical Storm Grace, which impacted the epicentral area on the 16th of August. Therefore, we also used a Google Earth Engine-based SAR backscatter change methodology to generate a landslide proxy heatmap that highlighted areas with high landslide density underneath the cloud cover. We will report on the accuracy of our optical and SAR-based landslide products and how this information was utilized by relief agencies on the ground. We will also conduct a detailed inventory mapping exercise using high-resolution Planet imagery and automated mapping techniques. We will outline the results from this mapping effort as well as provide a view on opportunities to support rapid response for multi-dimensional geohazard events moving forward.
The field of InSAR has developed significantly over the last thirty years, both from a technical and an application viewpoint. A key element in this development has been the availability of open-source software tools to stimulate scientific progress and collaboration. One of these tools is the Delft Object-oriented Radar Interferometric Software (DORIS), initiated and made available by the Delft University of Technology in the late nineties. Many researchers have worked with this software, and still are. Moreover, the DORIS software inspired the implementation of other interferometric software suites, such as ESA's SNAP toolbox.
Although the DORIS software is still used daily by researchers around the world, it also shows its limitations. Since it was originally designed for the processing of a single interferogram on a single processing core, scaling to stack processing required additional wrappers around the DORIS core. Moreover, the C++ implementation proved to be a hurdle for many researchers wishing to contribute. Also, the adaptation to other SAR acquisition modes, such as the Sentinel-1 TOPS mode, proved difficult.
These limitations stimulated us to develop a second-generation interferometric software suite: the Radar Interferometric Parallel Processing Lab (RIPPL). RIPPL is fully implemented in Python3, commonly used in the scientific community, which will hopefully stimulate contributions to the further development of the code. The software is set up in a modular manner, enabling easy addition of new modules. Furthermore, RIPPL is designed to distribute its tasks over the available processing cores. The software can be used to download SAR data and precise orbits, apply radiometric calibration operations, perform the coregistration of a data stack, and generate output products such as interferograms and coherence maps. Phase unwrapping can be performed via an interface with the SNAPHU software (the only non-Python interface of the software). Output can be generated both in radar coordinates and in any desired map projection, enabling easy integration with other data sources. Moreover, the software contains modules to easily incorporate Numerical Weather Models (NWMs) in the processing.
Whereas various interferometric software tools already exist, the past has shown that the co-existence of different software solutions stimulates science through the inspiration and combination of ideas. This will also hold in the future, where new SAR satellite missions will be launched, possibly with new acquisition modes. Early adaptation of our software to these new data sets will stimulate their scientific uptake. Therefore, we consider the RIPPL software a useful contribution to the scientific InSAR community.
In our contribution we will present the functionality of the RIPPL software, and show the results that can be generated based on various data sets.
The concept of smart mapping aims at the intelligent generation of informative maps enabling practitioners to understand and explore the presented thematic information more efficiently. This concept has already been introduced in the geospatial analysis domain, yet it is not fully exploited in visualization schemes for Earth Observation (EO) findings. With the advent of platform-based EO solutions, such as the Geohazards Exploitation Platform (GEP), the access to and processing of EO data has been greatly simplified. The objective of such exploitation platforms is to contribute to the optimal use of EO data by simplifying the extraction of information. This allows efforts to be focused on the post-analysis and interpretation of EO observations for improving our understanding of geohazard phenomena. Over the past years, it has been well demonstrated that hosted processing services are of major advantage when a rapid response to geohazards is required, involving strong earthquakes, volcanic eruptions, mass movements and river flooding. In fact, although the capabilities of EO platforms are being constantly upgraded, advanced visualization options are still rarely offered. In practice, thematically tailored maps and visualizations are often necessary to properly explore the EO findings. We introduce herein the idea of Smart EOMaps, a smart mapping functionality for platform-derived EO products based on data-driven intelligent styling and the intuitive definition of map properties tailored to user requirements. The proper tuning of map properties to demonstrate the thematic content in an illustrative way is a cumbersome procedure. It often relies on the experience and background of each EO practitioner, while adding preparatory time before the dissemination of the results. Intelligent visualization through automated data analysis in a geospatial environment could better reveal the value of EO products and the discovery of potentially “hidden” information to non-EO experts. Thus, the concept of Smart EOMaps aims to further contribute to promoting the exploitation and acceptance of EO services and products, as well as to support decision making, especially when a rapid response is required.
Geological risk studies are of great importance for the planning and management of the territory, but they are also essential to guarantee the safety of works and buildings located in unsuitable places. The field of study encompassed by geological hazards is varied and complex; notable examples include the modelling of landslides, the analysis of rock falls, the identification of problems derived from progressive ground movements (creep) and flood studies, among others. The socioeconomic impact of geological risks in Spain in recent years has produced alarming figures: over the last few years, losses have totalled at least 5,000 million euros. Recent events in Spain are the Lorca earthquake (2011), the underwater volcanic eruption on El Hierro island (2011) and the recent eruption on La Palma island (2021), with considerable economic losses. Two very important projects in Spain, the Pajares railway tunnels (2009) and the Castor gas storage (2013), were ruined by unforeseen geological problems, such as the erroneous interpretation of hydrogeological conditions or the failure to consider induced seismicity. This produced extra costs of millions of euros in addition to possible environmental consequences. But the most worrying thing, without a doubt, is that most of these problems could have been avoided if the geological factors, which were in all the mentioned cases the origin of the problems, had been taken into account.
These recent experiences show that geology has an important weight in the development of infrastructures, in the economy and in the environment, and that geological knowledge is essential to avoid the repetition of situations such as those that have occurred in recent years in Spain. It is essential to improve geological research by providing adequate means both for infrastructure projects and for geological research itself. In addition, geology contributes in a very positive way to the economic optimization of infrastructures and to the reduction of costs. In some cases, geological-geotechnical reports guarantee the safety of infrastructures considered at high risk of collapse, allowing their exploitation without any incident.
The contribution of geosciences to the economy, to the development and security of infrastructures, and to the prevention and mitigation of natural and environmental risks is unquestionable and must be taken into account by public administrations. Within the geosciences, satellite radar interferometry is an Earth observation technique that allows us to monitor our planet remotely using images acquired by radar sensors aboard artificial satellites orbiting the Earth. Using radar images from artificial satellites and multi-temporal radar interferometry techniques, we can study the behaviour of the terrain and detect possible deformations and structural damage occurring in any part of the planet covered by these satellite images. The Copernicus programme, thanks to the use of Sentinel-1, has exponentially increased the possibility of conducting such multi-temporal studies. In addition, the technique allows us to look back and study not only what is happening today but also how the terrain has deformed since the beginning of the 1990s, thanks to the large archive of radar images available from the ERS-1/2 (1990-2000) and Envisat (2002-2010) satellites.
Until now, satellite radar interferometry has had little application in geological risk studies in the province of Jaén (southern Spain), where it will allow us to identify ground deformation in undetected or unknown potential geological risk areas. This study presents the work carried out in the province of Jaén using C-band radar images from Sentinel-1 and multi-temporal satellite radar interferometry techniques to identify geological risk areas, helping to mitigate the damage they cause to the environment and society in general.
Floods are among the most common disasters and can be triggered by hydro-meteorological hazards such as hurricanes, heavy rainfall, rapid snowmelt, etc. With the recent proliferation of synthetic aperture radar (SAR) imagery for flood mapping, thanks to its all-weather imaging capability, the opportunities to detect flood extents are growing compared with using only optical imagery. While flood extent mapping algorithms can be considered mature, flood depth mapping is still an active area of research, even though water depth estimation is essential to assess the damage caused by a flood and its impact on infrastructure. In this regard, we have been working on the development and validation of flood depth products as part of the HydroSAR project led by the University of Alaska Fairbanks. HydroSAR is a cloud-based SAR data processing service for rapid response and mapping of hydrological disasters. The water depth product at 30-meter resolution, named WD30, is being prepared for automatic generation leveraging the Hybrid Pluggable Processing Pipeline (HyP3). To achieve that goal, it has been validated for topographically different test sites.
To estimate the water depth of a flooded area, the method utilizes a Height Above Nearest Drainage (HAND) model and water masks generated from SAR amplitude imagery via the HydroSAR HYDRO30 algorithm. HAND is a terrain descriptor computed from a hydrologically coherent digital elevation model (DEM). The value of each pixel on a HAND layer represents the vertical distance between that location and its nearest drainage point. As the quality of the HAND model has a decisive impact on the calculated water depth estimates, a terrain model of high spatial resolution and accuracy is required to generate a reliable water depth map. In this study, to generate HAND we used the Copernicus GLO-30 DEM, a 30-meter global DEM released by the European Space Agency (ESA). The water height is calculated adaptively for each water basin by finding the water height whose HAND-based extent best matches the observed water extent.
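As a minimal illustration of this adaptive matching step (variable names and the use of an intersection-over-union score are assumptions for illustration; the operational HydroSAR code may differ), the basin water level can be found by searching for the level whose HAND-based extent best matches the SAR water mask, after which the local depth follows as the difference between that level and HAND:

```python
import numpy as np

def estimate_water_depth(hand, water_mask, levels=np.arange(0.0, 20.0, 0.1)):
    """Estimate water depth within one basin from a HAND raster and a SAR water mask.

    hand       : 2-D array of Height Above Nearest Drainage values [m]
    water_mask : 2-D boolean array, True where SAR amplitude indicates water
    levels     : candidate water heights above the drainage network [m]
    """
    best_level, best_score = 0.0, -1.0
    for level in levels:
        predicted = hand <= level                      # extent implied by this water height
        inter = np.logical_and(predicted, water_mask).sum()
        union = np.logical_or(predicted, water_mask).sum()
        score = inter / union if union else 0.0        # IoU between predicted and observed extent (assumed metric)
        if score > best_score:
            best_level, best_score = level, score
    depth = np.where(water_mask, np.maximum(best_level - hand, 0.0), 0.0)
    return best_level, depth
```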
In our presentation, we will show case studies of water depth mapping over Bangladesh related to a flooding event in 2020. SAR-based water masks from Sentinel-1 amplitude imagery were generated through the HydroSAR implementation to be used as input. We generated WD30 products for the flood season in July using two different data sets. The accuracy of the obtained water depth estimates was assessed by comparison with water level data from the Flood Forecasting and Warning Center of the Bangladesh Water Development Board (BWDB). Before the statistical analysis of the comparison, adjustments for the different datums of the WD30 estimates and the reference water levels were carried out. R² values for all dates from both data sets were close to or larger than 0.8, with RMSE values of less than 2 m, confirming that the WD30 flood depth estimates were of the expected quality given the vertical accuracy of the input DEM.
Satellite EO is one of the most valuable tools for managing climate risk, and will play a key role in building the foundation for sustainable decision making. Over the past year, Sust Global has been part of the ESA ARTES 4.0 Business application program, delivering the project, “Sustainability Monitoring of Commodities using Geospatial Analytics (SMOCGEO)”. This project has driven machine learning-based transformations on data collections from active ESA space programs, enabling climate risk awareness and adaptation measures. Initial applications have targeted intelligence outcomes across the supply chains for global commodities, and further usage has the potential to help Ministries of Finance manage physical and transition climate risk.
Through its flagship Copernicus missions and Sentinel satellites, ESA has pioneered Earth observation programs, harnessing a rich catalogue of data on land surface, oceans and the atmosphere. Together, these analysis-ready datasets provide unique inputs to help monitor global climate change and sustainable operations. Sust Global has bridged these Earth observation datasets with commercial applications focused on sustainability monitoring and climate intelligence, developing new climate adaptation measures. This will allow financial institutions and other users to include credible climate data in their decision making.
Through this project, Sust Global has explored the following:
• Climate Model validation: Validation and back testing of physical risk from climate change across multiple climate scenarios
• Metrics refinement: Define sustainability metrics derived from a combination of earth observation, emissions monitoring and projections from frontier climate models
• Summarized reporting: Heat mapping of high-risk nodes of operation within the supply chain of commodities with exposure to extreme climate peril
• Alerting and notifications: Near real time alerting based on near term climate risks and emissions exceeding thresholds validated using earth observation data
As we develop our capabilities in climate intelligence, we see the clear need for validation of projections from frontier climate models with reliable observations. Satellite-based observations of events and activities on the ground are valuable sources of reference for climate related hazards.
In addition to these mature offerings, developed over a decade of research and development within the Earth observation community, we see increasing applications emerging from new data sources, in particular Sentinel-5P. Through the L3 emissions profiling datasets, orthorectified, area-averaged time series of nitrogen dioxide, methane and sulphur dioxide emissions from industrial sources are now possible. Bringing together these visible, multi-spectral and emissions profiles will enable us to uniquely monitor the sustainability of industrial operations across the globe.
Using ESA’s services and datasets, we are able to validate and back test projections of frontier models and model ensembles across different climate scenarios. Such validation builds confidence and provides a measure of tolerance on our forward-looking projections of climate hazards.
The SMOCGEO project began with exploring source sites for metal commodities. We found operations in metal commodities uniquely interesting due to their long time horizons, large sizes and isolated sources.
Through past and existing efforts like EO4SD, ESA has supported and pioneered the use of EO for sustainable development. SMOCGEO has brought innovation on the next wave of such vertical focused applications by bringing together the space derived observations with data from frontier climate science for global sustainability monitoring, exploiting the full potential of the Sentinel missions and building the foundation for climate disaster risk management.
Satellite interferometry is now a consolidated tool to monitor and detect ground movements related to geological phenomena or anthropic activity. The setup of regional and national ground motion services has increased with the launch of the free and open access Sentinel-1 satellites, which provide regular acquisitions worldwide. In a few months, the European Ground Motion Service will provide a deformation map over Europe that will be updated annually. This huge amount of data, freely available to anyone, can be valuable added information for land management, risk assessment and a wide range of users. In order to exploit the full potential of these data, tools and methodologies to generate secondary products for more operational use are needed. Here we propose a method to map, from a PSI deformation map at a global scale, the spatial gradients of displacement in order to distinguish areas where damage to structures and infrastructure is more likely to occur. The method is based on the concept that a structure exposed to differential settlement is more prone to suffer damage or destruction. Starting from the detection of the most significant Active Deformation Areas (ADA) with the existing ADATools, we generate three closely related outputs: a) the spatial gradient map, which indicates where more or less damage is expected; b) the time series of local gradients, which shows the history of the gradients in time, important for knowing the past temporal evolution and for continued monitoring; and c) the potential damage map, in which the existing structures are classified on the basis of the potential damage. We present results over the coastal area of the province of Granada, which is strongly affected by slope instabilities. A field survey has been carried out to map the actual damage in some residential areas where movement has been detected. The damage mapped in the field will be shown and compared with the outputs of the methodology. This work has been developed in the framework of Riskcoast, an ongoing project financed by the Interreg Sudoe Program through the European Regional Development Fund (ERDF).
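As a simple illustration of the spatial-gradient concept behind output (a), the local gradient at each measurement point can be approximated from the velocity differences to its neighbours (this is a sketch only, not the ADATools implementation; the units, coordinate handling and neighbourhood radius are assumptions):

```python
import numpy as np

def local_gradients(x, y, velocity, max_dist=50.0):
    """Approximate spatial gradients of displacement between nearby PS points.

    x, y     : 1-D arrays of projected PS coordinates [m]
    velocity : 1-D array of mean LOS velocities [mm/yr]
    max_dist : neighbourhood radius [m] within which gradients are evaluated
    Returns the maximum absolute velocity gradient [mm/yr per m] at each point.
    """
    n = len(x)
    grad = np.zeros(n)
    for i in range(n):
        d = np.hypot(x - x[i], y - y[i])
        mask = (d > 0) & (d <= max_dist)
        if mask.any():
            grad[i] = np.max(np.abs(velocity[mask] - velocity[i]) / d[mask])
    return grad
```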
In the aftermath of flood disasters, (re-)insurance has to make critical decisions about activating emergency funds in a timely manner. Rebuilding efforts rely on appropriate payouts of insurance policies. A fast assessment of flood extents based on EO data facilitates decision making for large scale floods.
The local risk of damaging assets through floods is an essential information to set flood insurance premiums appropriately to allow both fair coverage and sustainability of the financing of the insurance. Long historic archives of EO data can and should be exploited to provide (re-)insurance with a solid risk analysis and validate their catastrophe models for high-impact events.
Flood segmentation in optical images is often hindered by the presence of clouds. As a consequence, a substantial volume of optical data is disregarded and excluded from risk analysis. We seek to address this problem by applying machine learning to reconstruct floods in partially clouded optical images. We present flood segmentation results for cloud-free scenarios and an analysis of the resulting algorithm’s transferability to other geographic locations. For our investigation we use freely available satellite imagery from the Copernicus programme. In conjunction, DEM-based data are used, which form the backbone for addressing the issue of cloud presence at a later stage.
The Sentinel-2 mission comprises a constellation of two identical polar-orbiting satellites with a revisit time of five days at the equator. For our study we use all bands available at 10-meter and 20-meter resolution, which cover the RGB and various infrared wavelengths. All Sentinel inputs are atmospherically corrected, either by choosing Level-2A images or by using SNAP for preprocessing.
The Copernicus Digital Elevation Model (DEM) with global coverage at 30 meter resolution (GLO-30m) is provided by ESA as a dataset openly available to any registered user.
From the DEM, additional quantities can be derived that support the identification of possibly flooded areas. The slope of the terrain helps in understanding the flow of water. Flow accumulation helps delineate flooded shorelines, supporting the algorithm in filling up the DEM according to the locations in which water accumulates, i.e., cells characterized by high values in the flow accumulation grid. The Height Above Nearest Drainage (HAND) is a drainage-normalized version of a DEM: it normalizes topography according to the local relative heights found along the drainage network and, in this way, represents the topology of the relative soil gravitational potentials, or local draining potentials. It has been shown to correlate strongly with the depth of the water table. The Topographic Wetness Index (TWI) is a useful quantity to estimate where water will accumulate in an area with elevation differences. It is a function of slope and the upstream contributing area, i.e., flow accumulation; a sketch of the TWI computation is given below.
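This sketch computes the TWI directly from flow accumulation and slope (the cell size and the +1 offset are illustrative assumptions; flow accumulation and HAND themselves are typically computed with dedicated hydrological tools):

```python
import numpy as np

def topographic_wetness_index(flow_accumulation, slope_deg, cellsize=30.0):
    """Compute the Topographic Wetness Index, TWI = ln(a / tan(beta)).

    flow_accumulation : 2-D array, number of upstream cells draining into each cell
    slope_deg         : 2-D array, terrain slope in degrees (e.g. derived from GLO-30)
    cellsize          : DEM resolution [m]
    """
    # specific catchment area: upslope area per unit contour length
    a = (flow_accumulation + 1) * cellsize
    # avoid division by zero on perfectly flat cells
    tan_beta = np.tan(np.deg2rad(np.maximum(slope_deg, 0.01)))
    return np.log(a / tan_beta)
```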
We distinguish two scenarios for which the reference data is created differently, while the input data preparation is unchanged. The first case is the segmentation of permanent waters, for which the reference data is directly extracted from OpenStreetMap (OSM). The second is the case of real floods, where flood experts manually label the flood extent.
The study uses a combination of two popular neural network architectures to achieve two different purposes. Most importantly, a U-Net architecture is set up to address the image segmentation task. U-Net is, especially in remote sensing, a very popular architecture for this task. Initially the input goes through a sequential series of convolution blocks, consisting of repeated convolutions followed by ReLU layers and downsampling (max pooling), comparable to conventional LeNets. At the end of these iterations, the operations are reverted via deconvolutions and upsampling, while the corresponding convolutional layers are additionally concatenated. This is repeated until the original image shape is recovered, and optimization is performed to minimize the loss over the entire scene. We extend this architecture by inserting a Squeeze-and-Excitation block prior to the U-Net block. This block has the purpose of deriving importance weights for the input channels, e.g. the Sentinel-2 bands as well as the DEM and its derivative bands, which are then used to estimate the importance of sensors via their contribution to the output. The squeezing works by condensing the previous feature map (or, in our case, the input data) per channel into a single element via a global max pooling operation. A series of a fully connected layer, a ReLU, and another fully connected layer (with the number of channels as output size), followed by a sigmoid, is then used in the excitation part to multiply, and in effect weight, the input features. This vector of weights can be interpreted as a measure of feature importance, aside from its positive effects on model accuracy. We hence propose a measure to validate the importance of different input datasets, which can also be visualized and correlated with different landscapes or surface features.
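The following is a minimal PyTorch sketch of such a channel-attention block as described above (the reduction factor and exact layer sizes are assumptions; this is not the FloodSENS implementation):

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel-attention block applied to the stacked input bands before the U-Net.

    Squeezes each input channel (Sentinel-2 bands, DEM and its derivatives) to a single
    value via global max pooling, then derives per-channel weights with two fully
    connected layers and a sigmoid, as described in the text.
    """
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                      # x: (batch, channels, height, width)
        s = torch.amax(x, dim=(2, 3))          # squeeze: global max pooling per channel
        w = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))  # excitation: weights in (0, 1)
        return x * w[:, :, None, None], w      # re-weighted input and interpretable channel weights
```

The returned weight vector is what can be inspected, per scene or per landscape, as the channel-importance measure mentioned above.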
Our entire pipeline is set up within the Microsoft Azure cloud to provide scalability and computational efficiency. We have created two pipelines, one for model training and validation, which also serves to enable retraining and future transferability; and a second pipeline to conduct inference.
Our work focuses on two study sites, Lake Balaton in Hungary and Thrace in Greece. The study site in Hungary contains rivers, lakes and urban areas, which represent a good diversity of features to be expected in a flood scene. Only permanent waters are mapped in the Balaton case. The Greek case consists of a river flood that took place on 27 March 2018. The test set is created with manual labels from the Greek case, while the Balaton OSM data is used for additional training data and a preliminary study on purely permanent water scenarios.
Within the FloodSENS project we have long-term goals of global operability. For this reason our training and test datasets, associated with different AOIs, are organized to enable the traceable creation of various models, e.g. to fulfill global or regional operability. Our data structure is organized in a modular fashion to facilitate all this, yet at the current stage we provide accuracy metrics at the level of the distinct case studies introduced above. That is, models are trained to specifically optimize the outcome based on training and test data from these AOIs. The proposed network yields meaningful accuracies for the separation of water and non-water areas, while in general the separation of permanent and non-permanent (flood) waters, without the assistance of auxiliary data, remains challenging.
Our current investigations into the weighting produced by the SENet blocks offer clear indications and patterns, depending on landscape, of which sensors play a role under which terrain conditions. We quantify the significant advantage of Sentinel-2 over the DEM-based products, at least within a cloud-free scenario. We can further showcase the relevance at the level of bands and channels, giving an indication of the usefulness of deriving different DEM metrics, such as slope, terrain roughness etc., in terms of assisting the flood mapping effort.
The FLOodwater Mapping PYthon toolbox (FLOMPY) is an automatic, free and open-source Python toolbox for the mapping of floodwater. An enhancement of FLOMPY related to the mapping of flood-damaged agricultural regions is presented. FLOMPY requires only a specified time of interest related to the flood event and geographical boundaries. The products of FLOMPY consist of a) a binary mask of floodwater, b) delineated agricultural fields and c) damaged cultivated agricultural fields.
For the production of the binary floodwater mask, the toolbox exploits the high spatial (10 m) and temporal (6 days per orbit over Europe) resolution of Sentinel-1 GRD data. The delineation of the crop fields is based on an automated extraction algorithm using pre-flood Sentinel-2 multitemporal (optical) data. Sentinel-2 data were chosen due to their high spatial (10 m) and temporal (~5 days) resolution. In order to extract the damaged cultivated agricultural field information, vegetation and soil moisture information was used. In particular, for each delineated crop field, multitemporal vegetation and soil moisture indices were calculated from the Sentinel-2 data. Then, according to the temporal behaviour of the indices, each crop field was classified as “cultivated” or “not-cultivated”; a simplified sketch of this step is given below.
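In this sketch, a field whose pre-flood NDVI time series never exceeds a vegetation threshold is flagged as not cultivated (the index choice, threshold value and decision rule are illustrative assumptions, not FLOMPY's exact criteria):

```python
import numpy as np

def classify_fields(field_ndvi_series, ndvi_threshold=0.4):
    """Classify each delineated field as 'cultivated' or 'not-cultivated'.

    field_ndvi_series : dict mapping field id -> 1-D array of pre-flood NDVI values
    ndvi_threshold    : NDVI level above which a field is considered actively cultivated (assumed)
    """
    labels = {}
    for field_id, ndvi in field_ndvi_series.items():
        labels[field_id] = "cultivated" if np.nanmax(ndvi) >= ndvi_threshold else "not-cultivated"
    return labels
```

In this simplified view, the damaged cultivated fields would then follow as the intersection of the "cultivated" fields with the floodwater mask.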
In this study, we present a case study related to the “Ianos” Mediterranean tropical-like cyclone over an agricultural area in central Greece. The “Ianos” cyclone took place from 14 to 19 September 2020 and caused extensive damage in several places in central Greece. We focus on an agricultural area of 325 km² near Palamas, where many casualties were reported. The binary floodwater mask is extracted by exploiting Sentinel-1 intensity time series using FLOMPY's functionalities. Delineated agricultural fields are extracted using a 3-month pre-flood Sentinel-2 dataset. The detection of flood-affected cultivated agricultural fields yielded satisfactory results based on a validation procedure using visual interpretation.
Floodwater, agricultural field, and flood-affected cultivated cropland maps can support a number of organizations related to agricultural insurance, food security, agricultural/water planning, natural disaster assessment and recovery planning. Overall, the end-user community can benefit by exploiting the proposed methodological pipeline by using the provided open-source toolbox.
In August 2020, McKenzie Intelligence Services was awarded EUR 685,000 in co-funding by the European Space Agency (ESA) Space Solutions to build and deliver a digital platform, the Global Events Observer (GEO) for the insurance industry, enabling the further collection and use of highly accurate, geotagged external data from a range of sources to provide early warnings of loss events.
Launched to the market in 2021, GEO directly addresses the needs of the re/insurance sector, providing a digital solution which delivers better real-time intelligence and analysis of damage after catastrophic events. By automating the collection and analysis of combined space- and ground-based sensing capabilities, the collaboration with ESA is intended to enable the global tracking of catastrophe timelines and the delivery of user-specific reports for Exposure, Claims management and Claims Reinsurance users in a scalable way. Essentially, the market is looking for very early data from catastrophes and other loss events, delivered at far higher quality and speed than it has been able to access before, through the intelligent application and fusion of insurance and event data.
At the Living Planet Symposium 2022, McKenzie Intelligence Services Founder and CEO Forbes McKenzie will update the market on the next steps for EO and risk transfer, in particular the paradigm shift in the way the EO service offer has evolved, the technologies being deployed and the applications in insurance, including the development and mainstreaming of pioneering parametric insurance policies that are targeted at closing the global protection gap.
Via GEO, MIS is leading the paradigm shift in harnessing geospatial data for the insurance use case, enabling decision-as-a-service on demand and at scale.
GEO amalgamates highly accurate geotagged data from a range of sources to identify and track damage to property and infrastructure caused by catastrophic events such as natural disasters, allowing insurers to better serve their clients in their time of need.
From 1970 to 2019, weather, climate and water hazards accounted for 50% of all disasters and 74% of all reported economic losses according to the World Meteorological Organization.
MIS understands the need for accurate, near real-time reactive data and is working on improving GEO for our clients daily.
Agricultural insurance focuses on modeling the risk of crop yield damage due to natural or other disasters such as hail, heavy rain, flooding, extreme temperatures, wind storms, droughts, etc. Geo Insurance (GI - https://app.geo-insurance.com/#!/login) focuses on modeling the risk of crop yield loss with deep learning, using Earth observation and meteorological data bundled with blockchain technology to ensure the transparency of the whole process from data to client information and to reduce administrative costs and processes through automated verification.
Crop insurance is the farmer’s most important risk management tool. With uncertainty in crop production constantly looming, insurance is something secure; a safety net for the unpredictable. Profit margins for agri-insurance companies are becoming narrower over time because insurers need to react faster and more precisely to the volatility and advancements in agriculture. Agri-tech is continually evolving and it is essential that crop insurers evolve with it to remain reliable and progressive. Providing clients with advanced data and views of their fields is no longer a luxury; it is an expectation that insurers use technology to help manage risk. Satellite imagery combined with meteorological datasets and models can help by increasing operational efficiency, managing exposure to risk and providing substantiated validation. With climate change under way and food security becoming a global threat, the need was identified to create a platform that delivers information that helps create a sustainable, transparent, efficient and scalable agri-insurance market.
Insurers are always looking for ways to increase efficiency and lower operating costs, especially since farmers constantly rely on the output. Satellite technology provides both historical and current perspectives of local conditions, instead of needing to send team members out to scout every area. Imagine that, rather than having a new field to scout with no reliable background, you have access to 30 years of reputable historical field data. With this information, insurers can prioritize the necessary field visits and simplify the claims management process. The data also enable insurers to quickly and confidently validate loss adjustment claims with reliable third-party data.
In its recent history, the EU Common Agricultural Policy (CAP) has undergone several reforms towards greater market orientation, shifting from production support to mainly decoupled payments and less public intervention. It must be noted that crop insurance is obligatory for farmers receiving EU subsidies. This shift has resulted in public-private partnerships being created all over Europe in the agricultural insurance industry. This model is similar to that of the US and the rest of the world, hence there is a great pool of targeted customers that includes well-established insurance providers from both the public and private domain, underwriting agents and brokers.
Satellite imagery provides advanced levels of insight so that insurance companies are able to plan for and manage financial risk. Satellite technology, including remote sensing images and meteorological data, allows insurers to receive near real-time updates of anything occurring in their clients’ fields, providing the ability to monitor severe weather conditions and properly manage cash flow to pay eventual indemnities, rather than waiting to react to situations. Satellite imagery is also able to cover large areas in less time to provide a complete overview of one’s fields. There are many factors affecting yield and plant growth and many ways to measure them; however, the best information comes from the plants themselves: think of them as tens of thousands of sensors per hectare. Satellite imagery is used to give the user an accurate and unbiased view of a crop’s status and potential, and therefore of the business potential, by analyzing information directly from the plants. Satellite imagery can thus provide a broader view of the agri-business and of what is occurring in the client’s fields.
GEO University has developed a platform, GI, that delivers information helping to create a sustainable, transparent, efficient and scalable agro-insurance market. Taking into consideration the current situation in the agro-insurance domain, GI focuses on delivering insights and data that assist insurance companies and underwriters in a more efficient evaluation of claims, as well as in risk mitigation. This is achieved by providing meteorological information, alerts and historical insights to insurance companies and underwriters.
A key part of the crop insurance market is the proper underwriting procedure an insurance company undertakes in order to accurately estimate the risks involved in the insurance contracts. Crop insurance contracts involve many parameters and insure a wide range of objects (agricultural machinery, farm infrastructure), people (health and accidents) and crops (yield production loss). The main problem GI is addressing focuses on the crop aspect of the insurance contracts. Using satellite data (remote sensing and meteorological) and models, bundled with artificial intelligence, GI produces accurate and localized underwriting information for specific natural disasters that insurance companies cover: overheating, frost, extreme precipitation, windstorms, snow coverage, floods and droughts. Apart from these risks, GI also provides analytical historical climate variables and indices utilizing Copernicus datasets, which are downscaled to local level in order to provide information at parcel level.
The second part of the problem GI addresses concerns the damage verification process. After a disaster happens, claims from farmers start flooding into the insurance companies. The insurance company faces a big challenge at that time: it needs to prioritize the claims according to their validity and perform damage assessment for the contracts. Thus the insurance company must, within a short period of time, examine hundreds or thousands of contracts (depending on its client base and spatial distribution). GI helps insurance companies initially to prioritize the claims, i.e. to monitor with satellite technologies which specific parcels are affected by the disaster. This removes a huge administrative burden from the company. For specific cases like floods, GI can also produce an estimate of the damage, information that can optimize the way claims are handled by the insurance company.
The GI platform is an online platform for insurance companies and farmers. Insurance companies can a) estimate the risk of insuring new customers and their crops, b) understand the risk of and generate scenarios for their existing clients, and c) prioritise claims and assess damages remotely while minimizing field inspections. Farmers can a) get the risk of their cultivated crops at farm level, b) get alerts for potential natural disasters to plan their actions, and c) generate damage reports with objective measures. The value generated by GI is to disrupt the way insurance companies understand and value their risks, and to create a fair, transparent and affordable system to calculate crop insurance fees.
In the summer of 2021, a long-lasting stationary rain event struck parts of western Germany, leading to massive flooding – especially in the valley of the Ahr approximately 20 km south of Bonn. Such long-term stationary weather conditions are becoming more and more frequent and can lead to prolonged extreme heat or massive continuous rainfall, as shown in a 2021 study of the Potsdam-Institut für Klimafolgenforschung (PIK).
The flood of the Ahr revealed that the existing modelling of flood probabilities is not sufficient. Possible causes may be the comparatively short observation period of the underlying measurements, missing historical data, or the fact that the dynamics of climate change are not taken into account. For this reason, our approach is based on simulations of individually adapted worst-case scenarios to derive possible effects of heavy rainfall more generally and over a wide area.
In recent years we developed a methodology for the classification of strong rain danger depending only on the terrain. We calculated strong rain danger maps covering the whole of Germany and Austria, estimating a worst-case scenario by not taking into account local drains, since those are mostly blocked by leaves and branches during such sudden events. But these maps are only based on the influence of the direct surroundings in strong rain events and do not consider water coming from other areas. We therefore developed an additional component to include water run-off from upstream areas.
In the presented study we calculate the maximum run-off for a whole water catchment area, assuming a massive strong rain event and the following flash flood. For each position in the run-off map, a local height profile perpendicular to the flow direction is calculated and filled up with the maximum estimated water volume at this position. In this way, cross sections along a river in a valley are derived, giving the maximum water level for the maximum possible run-off of a given strong rain event; a simplified sketch of this fill-up step is given below.
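In this sketch, the water level is taken as the lowest level whose wetted cross-sectional area reaches a target area; how that target area is derived from the maximum run-off (e.g. via an assumed flow velocity) is itself an assumption here and not part of the sketch:

```python
import numpy as np

def fill_cross_section(profile_z, cell_width, target_area):
    """Find the water level that fills a valley cross section with a given flow area.

    profile_z   : 1-D array of terrain heights along a profile perpendicular to the flow [m]
    cell_width  : horizontal spacing of the profile samples [m]
    target_area : cross-sectional area to be filled [m^2]
    """
    z_min, z_max = profile_z.min(), profile_z.max()
    for level in np.linspace(z_min, z_max, 1000):
        depth = np.clip(level - profile_z, 0.0, None)
        if np.sum(depth) * cell_width >= target_area:
            return level                  # first level whose wetted area reaches the target
    return z_max                          # target not reached within the profile
```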
Since part of the rain will drain away and not contribute to the run-off, this is also a worst-case estimation. The results are compared to aerial imagery acquired on 2021-07-16 – two days after the flooding struck the Ahr valley –, flood masks derived from Sentinel-1 imagery and Copernicus damage assessment maps. Based on this imagery and on measurements and estimations of water gauge levels, we calculate the effective rain height of the catchment, and the simulation is calibrated and adapted to the observed water levels. Based on these results we can also derive an estimation of the flooding situation in the whole catchment area, including tributary valleys.
Information about ice sheet subsurface properties is crucial for understanding and reducing related uncertainties in mass balance estimations. Key parameters like the firn density, stratigraphy, and the amount of refrozen melt water are conventionally derived from in situ measurements or airborne radar sounders. Both types of measurements provide a great amount of detail, but are very limited in their spatial and temporal coverage and resolution. Synthetic Aperture Radars (SAR) can overcome these limitations due to their capability to provide day-and-night, all-weather acquisitions with resolutions on the order of meters and swath widths of hundreds of kilometers. Long-wavelength SAR systems (e.g. at L- and P-band) are promising tools to investigate the subsurface properties of glaciers and ice sheets due to the signal penetration of up to several tens of meters into dry snow, firn, and ice. Understanding the relationship between geophysical subsurface properties and the backscattered signals measured by a SAR is ongoing research.
Two different lines of research were addressed in recent years. The first is based on Polarimetric SAR (PolSAR), which provides not only information about the scattering mechanisms, but also has the uniqueness of being sensitive to anisotropic signal propagation in snow and firn. The second is related to the use of interferometric SAR (InSAR) to retrieve the 3D location of scatterers within the subsurface. Particularly multi-baseline InSAR allows for tomographic imaging (TomoSAR) of the 3D subsurface scattering structure.
So far, the potential of the different SAR techniques was only assessed separately. In the field of PolSAR, modeling efforts have been dedicated to establish a link between co-polarization (HH-VV) phase differences (CPDs) and the structural properties of firn [1]. CPDs have then been interpreted as the result of birefringence due to the dielectric anisotropy of firn originating from temperature gradient metamorphism. Moreover, the relation between the anisotropic signal propagation and measured CPDs depends on the vertical distribution of backscattering in the subsurface, e.g. generated by ice layers and lenses, which defines how the CPD contributions are integrated along depth. Up to now, assumptions of density, firn anisotropy, and the vertical backscattering distribution were necessary to invert the model, e.g. for the estimation of firn thickness [2]. However, the need for such assumptions can be overcome by integrating InSAR/TomoSAR techniques.
In the fields of InSAR and TomoSAR for the investigation of the ice sheet subsurface, recent studies are mainly concerned with the estimation of the vertical backscatter distribution, either model-based or through tomographic imaging techniques. InSAR models exploit the dependence of the interferometric volume decorrelation on the vertical distribution of backscattering. By modeling the subsurface as a homogeneous, lossy, and infinitely deep scattering volume, a relation between InSAR coherence and the constant extinction coefficient of the microwave signals in the subsurface of ice sheets was established in [3]. This approach approximates the vertical backscattering distribution as an exponential function and allows the estimation of the signal extinction, which is a first, yet simplified, indicator of subsurface properties. Recent improvements in subsurface scattering modeling [4], [5] showed the potential to account for refrozen melt layers and variable extinctions, which could provide information about melt-refreeze processes and subsurface density. With TomoSAR, the imaging of subsurface features in glaciers [6], and ice sheets [5][7][8] was demonstrated. Depending on the study, the effect of subsurface layers, different ice types, firn bodies, crevasses, and the bed rock (of alpine glaciers) was recognized in the tomograms. This verified that the subsurface structure of glaciers and ice sheets can result in more complex backscattering structures than what is accounted for in current InSAR models. SAR tomography does not rely on model assumptions and can, therefore, provide more realistic estimates of subsurface scattering distributions.
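For reference, the uniform-volume model of [3] relates the volume coherence to a single penetration depth; in a common formulation (the sign depends on the interferometric convention) it reads:

```latex
% Volume decorrelation of a homogeneous, lossy, infinitely deep volume (after [3])
\gamma_{\mathrm{vol}} = \frac{1}{1 - \mathrm{i}\,\kappa_z d_p}, \qquad
d_p = \frac{\cos\theta}{2\kappa_e}, \qquad
|\gamma_{\mathrm{vol}}| = \frac{1}{\sqrt{1 + (\kappa_z d_p)^2}}
```

where κ_z is the vertical interferometric wavenumber, κ_e the one-way power extinction coefficient, θ the incidence angle and d_p the penetration depth; inverting the measured coherence for κ_e yields the simplified extinction estimate mentioned above.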
This study will address a promising line for future research, which is the combination of PolSAR and InSAR/TomoSAR approaches to fully exploit their complementarity and mitigate their weaknesses. As described above, on the one hand, PolSAR is sensitive to the anisotropic signal propagation in snow and firn, even in the absence of scattering, but provides no vertical information. On the other hand, InSAR (models) and TomoSAR allow assessing the 3-D distribution of scatterers in the subsurface, but provide no information on the propagation through the non-scattering parts of firn.
In a first step, an estimation of firn density was achieved by integrating TomoSAR vertical scattering profiles into the depth-integral of the PolSAR CPD model [9]. This approach is in an early experimental stage with certain limitations. The density inversion can only provide a bulk value for the depth range of the signal penetration and measurements at several incidence angles are required to achieve a non-ambiguous solution. Furthermore, multi-baseline SAR data for TomoSAR are currently only available from a few experimental airborne campaigns. Finally, the density estimates have to be interpreted carefully, since the underlying models are (strong) approximations of the real firn structure. This could be addressed in the future by an integration with firn densification models.
Nevertheless, this combination of polarimetric and interferometric SAR techniques provides a direct link to ice sheet subsurface density, without parameter assumptions or a priori knowledge, and the first density inversion results show a promising agreement with ice core data [9].
This contribution will present first results of the density inversion, discuss its limitations and will show investigations towards a more robust and wider applicability. One aspect will be the use of InSAR model-based vertical scattering profiles instead of TomoSAR profiles, which reduces the requirements on the observation space and increases the (theoretical) feasibility with upcoming spaceborne SAR missions.
[1] G. Parrella, I. Hajnsek and K. P. Papathanassiou, "On the Interpretation of Polarimetric Phase Differences in SAR Data Over Land Ice," in IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 2, pp. 192-196, 2016.
[2] G. Parrella, I. Hajnsek, and K. P. Papathanassiou, “Retrieval of Firn Thickness by Means of Polarisation Phase Differences in L-Band SAR Data,” Remote Sensing, vol. 13, no. 21, p. 4448, Nov. 2021, doi: 10.3390/rs13214448.
[3] E. W. Hoen and H. Zebker, “Penetration depths inferred from interferometric volume decorrelation observed over the Greenland ice sheet,” IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 6, pp. 2571–2583, 2000.
[4] G. Fischer, K. P. Papathanassiou and I. Hajnsek, "Modeling Multifrequency Pol-InSAR Data from the Percolation Zone of the Greenland Ice Sheet," IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 4, pp. 1963-1976, 2019.
[5] G. Fischer, M. Jäger, K. P. Papathanassiou and I. Hajnsek, "Modeling the Vertical Backscattering Distribution in the Percolation Zone of the Greenland Ice Sheet with SAR Tomography," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 11, pp. 4389-4405, 2019.
[6] S. Tebaldini, T. Nagler, H. Rott, and A. Heilig, “Imaging the Internal Structure of an Alpine Glacier via L-Band Airborne SAR Tomography,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 12, pp. 7197–7209, 2016.
[7] F. Banda, J. Dall, and S. Tebaldini, “Single and Multipolarimetric P-Band SAR Tomography of Subsurface Ice Structure,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, pp. 2832–2845, 2016.
[8] M. Pardini, G. Parrella, G. Fischer, and K. Papathanassiou, “A Multi-Frequency SAR Tomographic Characterization of Sub-Surface Ice Volumes,” in Proceedings of EUSAR, Hamburg, Germany, 2016.
[9] G. Fischer, K. Papathanassiou, I. Hajnsek, and G. Parrella, “Combining PolSAR, Pol-InSAR and TomoSAR for Snow and Ice Subsurface Characterization,” presented at the ESA POLinSAR Workshop, Online, Apr. 2021.
There is growing interest in surface water bodies across Antarctic ice shelves as they impact the ice shelf mass balance. The filling and draining of lakes have the potential to flex and fracture ice shelves, which may even lead to their catastrophic break-up. The study of ice shelf surface lakes typically uses optical satellite imagery to delineate their areas and a parameterised, physically based light attenuation algorithm to calculate their depths. This approach has been used to calculate ponded water volumes and their changes over seasonal and inter-annual timescales. The approach has been developed and validated using various in-situ data sets collected on the Greenland Ice Sheet, but so far it has not been validated for Antarctic ice shelves. Here we use simultaneous field measurements of lake water depths made using water pressure sensors, and surface spectral properties made with fixed four-channel radiometers (red, blue, green, panchromatic), to parameterise the light attenuation algorithm for use during the filling and draining of shallow surface lakes on the McMurdo Ice Shelf, Ross Sea Sector, Antarctica, during the 2016/17 summer. We then apply the approach to calculate lake areas, depths and volumes across several surface water bodies observed in high resolution WorldView imagery and their changes over time. These calculations are used, in turn, to help validate the approach to calculating water volumes across the entire ice shelf using Sentinel-2 and Landsat 8 imagery. Results suggest that using parameter values relevant to the Greenland Ice Sheet may bias the calculation of water volumes when applied to Antarctic ice shelves, and we offer values that may be more appropriate. Furthermore, calculations of lake volume using Sentinel-2 and Landsat 8 imagery may be underestimated when compared to the higher resolution WorldView imagery. The findings have implications for the calculation of water volumes across other ice shelves.
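The light attenuation approach referred to here is commonly written in the following single-band form (a Philpot-type formulation; the exact parameterisation used in the study is not reproduced here):

```latex
% Lake depth from single-band reflectance (Philpot-type attenuation model)
z = \frac{\ln\left(A_d - R_\infty\right) - \ln\left(R_{\mathrm{pix}} - R_\infty\right)}{g}
```

where z is the water depth, A_d the lake-bed albedo, R_∞ the reflectance of optically deep water, R_pix the observed pixel reflectance and g the two-way attenuation coefficient; it is parameters such as g and A_d that the in-situ pressure-sensor and radiometer measurements allow to be re-parameterised for Antarctic conditions.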
Arctic land ice is responding to anthropogenic climate heating through increased surface ablation and less well constrained dynamical flow processes. Nevertheless, the magnitude of committed loss and the lower bound of future Sea Level Rise (SLR) remains unresolved.
Here, we apply a well-founded theory to determine Arctic ice committed mass loss and SLR contribution. The approach translates observed ice mass balance fluctuations into area and volume changes that satisfy an equilibrium state. Ice flow dynamics are accounted implicitly via volume to area scaling. For our application, the key data requirements are met with 2017 to 2021 (5 year) inventories of regional Arctic: 1) mass balance from GRACE gravimetry; 2) the Accumulation Area Ratio (AAR) defined as the area with net mass gain divided by the total glacierized area, retrieved from Sentinel-3 optical imagery.
For seasonally ablating grounded ice masses, the maximum snowline altitude reached at the end of each melt season marks the transition between the lower bare ice and the upper snow accumulation areas. This equilibrium line conveniently integrates the competing effects of mass loss from meltwater runoff and gain from snow accumulation. Crucially, the regression property where mass balance is zero defines the time- and area-independent Equilibrium Accumulation Area Ratio (AAR0). The ratio AAR / AAR0 yields the fractional imbalance (α) that quantifies the area change required for the ice mass to equilibrate its shape to the climate that produced the observed AAR0. The resulting derivation for the adjustments in ice volume (ΔV) and committed eustatic SLR follow from glaciological area-volume scaling theory. The approach exploits how surface mass balance perturbations are at least an order of magnitude faster than the associated dynamic adjustment. Whilst the theoretical basis and derivation of ice area-volume scaling analysis applies equally to all terrestrial ice masses, independent of size, it has previously not been applied to determine all Arctic ice disequilibria.
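In compact form, the scaling logic described above can be written as follows (the exponent values given are indicative of standard volume-area scaling theory and are assumptions here, not necessarily those adopted in the study):

```latex
% Committed area and volume adjustment from the accumulation-area ratio
\alpha = \frac{\mathrm{AAR}}{\mathrm{AAR}_0}, \qquad
\frac{\Delta A}{A} = \alpha - 1, \qquad
V = c\,A^{\gamma} \;\Rightarrow\; \frac{\Delta V}{V} = \alpha^{\gamma} - 1
```

with γ ≈ 1.25 for ice caps and ≈ 1.375 for glaciers in standard scaling theory; the committed eustatic sea-level contribution then follows from ΔV spread over the ocean area.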
The considered regions are Greenland, Arctic Canada North, Svalbard, Iceland and the Russian High Arctic Islands.
The Antarctic Ice Sheet is losing mass and contributing to global sea level rise at an accelerated pace. The grounding line plays a critical role in this process, as it represents the location where the ice detaches from the bedrock and floats in the ocean, which is important for the accurate determination of ice discharge from the grounded ice sheet. Accelerated mass loss from the Antarctic Ice Sheet is, in part, due to grounding line retreat. Therefore, accurate knowledge of the grounding line location and its migration over time is valuable in understanding the processes controlling mass balance, ice sheet stability and sea level contributions from Antarctica.
As a subglacial feature, the grounding line location is difficult to survey directly. However, satellite observable grounding zone features can be used as a proxy for the grounding line. Multiple Earth Observation techniques have been used to map the Antarctic grounding zone, including the Differential Synthetic Aperture Radar Interferometry (DInSAR), ICESat laser altimetry repeat-track analysis, CryoSat-2 radar altimetry crossover analysis, and brightness-based break-in-slope mapping from optical images. These methods, however, are limited by either spatial-temporal coverage or accuracy. The high-resolution laser altimetry satellite ICESat-2 has the potential to map the Antarctic grounding zone with improved coverage and accuracy. This provides a new opportunity to investigate grounding line changes and their relationship to ice dynamics at a finer resolution in both space and time.
Here we first present a new methodological framework for mapping three grounding zone features automatically from ICESat-2: the landward limit of tidal flexure Point F, the inshore limit of hydrostatic equilibrium Point H and the break-in-slope Point I_b. We then present a new high-resolution grounding zone product by applying this method to the whole Antarctic Ice Sheet. We discuss the sensitivity and accuracy of our approach by comparing with historic and contemporaneous grounding zone products. Based on this new ICESat-2-derived grounding zone product, we investigate grounding zone migration behaviour in key regions of the Antarctic Ice Sheet.
The Greenland and Antarctic ice sheets are major and increasingly important contributors to global sea level rise through the melting of their ice masses. Thus, monitoring and understanding their evolution is more important than ever. However, the understanding of ice sheet melt is hindered by limitations in current observational melt products. Traditional observational products from satellite microwave sensors report only surface melt in the top layer and do not convey information on deeper melt/refreeze processes, due to the relatively high frequencies used in the retrievals. The solution is to use multi-frequency observations from L-band (1.4 GHz) to Ka-band (37 GHz), available from spaceborne microwave radiometers, which allow for the retrieval of melt water profiles: the emission at higher frequencies originates from shallow surface layers, while the emission at lower frequencies originates from greater depths and is consequently influenced by seasonal melt water in a thicker surface layer.
We simulated brightness temperatures at 1.4, 6.9, 10, 19 and 37 GHz with the MEMLS (Microwave Emission Model of Layered Snowpacks) emission model, using liquid water content (LWC) profiles modeled for the DYE-2 experimental site in Greenland with an energy balance model calibrated with in situ temperature and snow wetness profiles. MEMLS was run using the same snow density and temperature profiles as the energy balance model, but some of the snow structural parameters were adjusted so that the simulated TB values corresponded to the values measured by the SMAP (1.4 GHz) and AMSR2 (6.9, 10, 19 and 37 GHz) microwave radiometers during frozen conditions. LWC and temperature profile time series predicted by the energy balance model during the melt season were then used in MEMLS to predict brightness temperature time series over the same period. Simulated and measured brightness temperatures show reasonable agreement, demonstrating that the observations carry information on the melt evolution at different depths. The results also show that TB measurements can be inverted into LWC profiles. The inversion process can be applied to the twice-daily, continent-scale measurements available from satellite instruments to map LWC profiles and track melt evolution in different layers of the ice sheet. We present the most recent results of this analysis and opportunities for continued research and applications. The results are particularly relevant in light of the development of the Copernicus Imaging Microwave Radiometer (CIMR), which will make measurements at these same frequencies.
In the past few decades, the Greenland and Antarctic Ice Sheets have been major contributors to global sea level rise and, with accelerated ice loss rates, they correspond to the worst-case global warming scenario of the latest IPCC reports. As per the reports, the predicted sea level rise lies in the range of 15 to 23 cm by the end of the century, which clearly indicates the important need to track the ice loss, as its projections affect millions of people currently living in coastal areas.
This study fits into the work being carried out to better project the global sea level contributions of the ice sheets on different timescales. The aim of this study is to isolate the signal in satellite altimetry records that is attributable to changes in ice flow. Since the 1990s, satellite altimetry missions have helped in monitoring the changes in the shape of ice sheets. Two main processes account for these changes: surface mass balance changes (accounting for precipitation and ablation) and changes in ice flow (accounting for ice discharge at glacier termini), the latter of which is also referred to as ice dynamical imbalance. The surface mass balance estimates are modelled using a regional climate model with the help of meteorological records. By combining these modelled estimates with the altimetry records, it is possible to separate the ice dynamical imbalance. To obtain a detailed pattern of the dynamical imbalance across both ice sheets, we will use this approach and further refine it, for example by accounting for variability in snow and ice densities and their impact on measured ice thickness. In the end, this study will help us track changes in the glaciology of these regions and their evolution in the changing climate, allowing events on different timescales to be attributed to, and quantified as, dynamical imbalance of the ice sheets, which in turn could be useful for ice sheet modelling efforts and for robust sea level projections.
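One common way to express the separation described above is in terms of surface-elevation rates (a schematic statement only; the firn compaction term and the density treatment are assumptions about how the refinement might be formulated):

```latex
% Schematic separation of the ice-dynamical elevation-change signal
\left.\frac{\partial h}{\partial t}\right|_{\mathrm{dyn}}
= \left.\frac{\partial h}{\partial t}\right|_{\mathrm{alt}}
- \left.\frac{\partial h}{\partial t}\right|_{\mathrm{SMB}}
- \left.\frac{\partial h}{\partial t}\right|_{\mathrm{firn}}
```

where the altimetric rate is observed, the SMB term comes from the regional climate model (converted to an elevation rate with an appropriate snow/ice density), and the firn term absorbs density-driven elevation change that carries no mass signal.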
The disintegration of the eastern Antarctic Peninsula’s Larsen A and B ice shelves has been attributed to regional-scale atmosphere and ocean warming, and increased mass-losses from the glaciers once restrained by these ice shelves have increased Antarctica’s total contribution to sea-level rise. Abrupt recessions in ice-shelf frontal position presaged the break-up of Larsen A and B, yet, in the ~20 years since these events, documented knowledge of frontal change along the entire ~1,400 km-long eastern Antarctic Peninsula is limited. Here, we show that 85% of the seaward ice-shelf perimeter fringing this coastline underwent uninterrupted advance between the early 2000s and 2019, in contrast to the two previous decades. These observations are derived from a detailed synthesis of historical (including DMSP OLS, ERS-1/2, Landsat 1-7, ENVISAT) and new, high temporal repeat-pass (Landsat 8, Sentinel-1a/b, Sentinel-2a/b) satellite records. By comparing our observations with a suite of state-of-the-art ocean reanalysis products, we attribute this advance to enhanced ocean-wave dampening, ice-shelf buttressing and the absence of sea-surface slope-induced ice-shelf flow, all of which were enabled by increased near-shore sea ice driven by a Weddell Sea-wide intensification of cyclonic near-surface winds since c. 2002. Collectively, our observations demonstrate that sea-ice change can either safeguard from, or set in motion, the final rifting and calving of even large Antarctic ice shelves.
Three decades of routine Earth Observation have revealed the progressive demise of the Antarctic Ice Sheet, evinced by accelerated rates of ice thinning, retreat and flow. These phenomena, and those pertaining to ice-flow acceleration, especially, are predominantly constrained from temporally limited observations acquired over inter-annual timescales or longer. Whereas ice-flow variability over intra-annual timescales is now well documented across, for example, the Greenland Ice Sheet, little-to-no information exists surrounding seasonal ice-flow variability in Antarctica. Such information is critical towards understanding short-term glacier dynamics and, ultimately, the ongoing and future imbalance of the Antarctic Ice Sheet in a changing climate.
Here, we use high spatial- and temporal- (6/12-daily) resolution Copernicus Sentinel-1a/b synthetic aperture radar (SAR) observations spanning 2014 to 2020 to provide evidence for seasonal flow variability of land ice feeding the climatically vulnerable George VI Ice Shelf (GVIIS), Antarctic Peninsula. Between 2014 and 2020, the flow of glaciers draining to GVIIS from Palmer Land and Alexander Island increased during the austral summer (December – February) by ~0.06 m d⁻¹ (22 m yr⁻¹). These observations exceed prescribed (root median square) error limits totalling ~0.02 m d⁻¹ (7.5 m yr⁻¹). This variability is corroborated by independent observations of ice flow as imaged by the Landsat 8 Operational Land Imager that are not impacted by firn penetration and other effects known to potentially bias SAR-derived velocity retrievals over monthly timescales or shorter. Alongside an anomalous reduction in summertime surface temperatures across the Antarctic Peninsula since c.2000, differences in the timing of ice-flow speedup we observe between the Palmer Land and Alexander Island glaciers implicate oceanic forcing as the primary control on this seasonal signal.
Here, we present early results from a new approach to mapping the grounding lines (GLs) of the Greenland ice sheet's (GrIS) floating ice tongues, using high-resolution digital elevation models (DEMs).
Greenland's floating ice tongues represent a key interface through which the ice sheet interacts with its surrounding oceanic and atmospheric environment. The grounding line, which is defined as the juncture between grounded and floating ice, is a key parameter in ice sheet research, and an essential component of multiple previous studies which have focused on ice tongue supraglacial lake dynamics, sediment transport, and vulnerability to climate change. Reliable and precise knowledge of the GL location is fundamental to understanding the geometry and evolution of these sensitive components of the ice sheet, yet is notoriously difficult to accurately measure.
In previous research, GLs have been estimated using techniques such as terrestrial radar interferometry, interferometric synthetic aperture radar, and digital elevation modelling. Compared to recent datasets and techniques, the spatial resolution and temporal sampling of these methods are relatively low, with most exhibiting a spatial resolution of > 25 metres and infrequent return periods. These factors limit the precision with which the GL can be estimated and introduce uncertainty relating to the stability of present-day ice tongues. As a result, current knowledge and research is often reliant upon GLs that have been delineated decades earlier, despite the wide understanding that GLs have the potential to rapidly migrate during the intervening period.
This research, which is associated with ESA's Polar+ 4DGreenland study, aims to exploit a new generation of high-resolution DEMs to improve the spatial precision and temporal record of GL evolution for all GrIS ice tongues, thereby improving our understanding of GL migration. In this presentation we will provide an overview of the method, early results, and expected avenues for further research.
The area extent and duration of surface melt on ice sheets are important parameters for climate and cryosphere research and key indicators of climate change. Surface melting has a significant impact on the surface energy budget of snow areas, as wet and refrozen snow typically have a relatively low albedo in the visible and near-infrared spectral regions. Moreover, enhanced surface meltwater production may drain to the bed and raise the subglacial water pressure, which can have a strong impact on glacier motion. Surface melt also plays an important role for the stability of ice shelves, as shown by the intensification of surface melting as a precursor to the break-up of ice shelves on the Antarctic Peninsula.
Passive and active microwave satellite sensors are the main data sources for products on melt extent over Greenland and Antarctica. In particular, passive microwave data has been widely used to map and monitor melt extent on ice sheets. C-band SAR has several advantages over passive microwave radiometry, including the ability to detect wet snow below a frozen surface and greater sensitivity to the melting state of the snow volume. The better sensitivity of C-band to the physical properties of internal snow and firn layers on ice sheets and glaciers is of relevance for the modelling of meltwater production and energy fluxes in the snow volume. The limited availability of SAR data over the ice sheets that existed in the past has been overcome with the launch of the Copernicus Sentinel-1 (S-1) mission. S-1 SAR data are now regularly acquired every 6 to 12 days, allowing for detailed time series analysis at high resolution.
To evaluate snowmelt dynamics and melting/refreezing processes in Greenland and Antarctica, we have developed and implemented an algorithm for generating maps of snowmelt extent based on multitemporal S-1 SAR and METOP-A/B/C ASCAT scatterometer data. The detection of melt relies on the strong absorption of the radar signal by liquid water. The dense backscatter time series yields a unique temporal signature that is used, in combination with backscatter forward modelling, to identify the different stages of the melt/freeze cycle and to estimate the melting intensity of the surface snowpack. The high-resolution S-1 SAR data are complemented by daily lower resolution backscatter maps acquired with ASCAT to cover the complete time period from 2007 onwards. The melt maps form the main input for deriving value-added products on annual melt onset, ending and duration. Intercomparisons with in-situ weather station data and melt products derived from regional climate models (RCMs) and passive microwave radiometers confirm the ability of the algorithm to detect short-lived and longer melt events.
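A minimal sketch of threshold-based wet-snow flagging in a backscatter time series is shown below (the 3 dB drop relative to a frozen reference is the classical wet-snow criterion and an assumption here; the operational algorithm additionally uses backscatter forward modelling and multi-stage classification of the melt/freeze cycle):

```python
import numpy as np

def detect_melt(sigma0_db, acquisition_dates, frozen_reference_db, threshold_db=3.0):
    """Flag melt occurrences in the backscatter time series of one pixel.

    sigma0_db           : 1-D array of Sentinel-1/ASCAT backscatter values [dB]
    acquisition_dates   : matching sequence of acquisition dates
    frozen_reference_db : backscatter of the same pixel under frozen winter conditions [dB]
    threshold_db        : drop below the frozen reference interpreted as wet snow (assumed)
    """
    melt = sigma0_db < (frozen_reference_db - threshold_db)   # strong absorption by liquid water
    melt_days = [d for d, m in zip(acquisition_dates, melt) if m]
    return melt, melt_days
```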
Our results demonstrate the excellent capability of the S-1 mission in combination with ASCAT for operational monitoring of snowmelt areas in order to produce a consistent climate data record on the presence of liquid water and snow properties in Greenland and Antarctica for studying surface melt processes.
Sea level rise is among the most pressing environmental, social and economic challenges facing humanity, and requires timely and reliable information for adaptation and mitigation. Narrow ice sheet outlet glaciers, such as those draining many marine sectors of the Antarctic and Greenland Ice Sheets, can make rapid contributions to sea level rise, and are sensitive to climate change with marked spatiotemporal variability in recent decades. However, estimating surface elevation and volume changes of these small, and often complex, glaciers has been notoriously challenging, thus limiting our ability to accurately constrain their mass balance. Satellite radar altimetry has proven useful in tracking variations in elevation across large parts of the ice sheets and offers higher spatial resolution and temporal sampling. However, this technique suffers from incomplete measurements and larger uncertainties over narrow and rugged outlet glaciers.
In response to the increasing need to derive reliable elevation and volume changes of narrow and complex glaciers, this study aims to explore new approaches to retrieving elevation measurements from radar altimetry, using methods that originate from the field of hydrology. The proposed approach consists of testing improved altimeter footprint selection over narrow targets, multi-peak waveform retracking, and off-nadir correction methods that are suited for small glaciers. New high-resolution elevation measurements (e.g., NASA's ICESat-2 (Ice, Cloud and land Elevation Satellite-2)) and/or Digital Elevation Models (DEMs) will also be exploited to provide a priori information for enhanced altimeter retrievals. Within the study, these processing techniques will be applied in several test cases comprising ice sheet outlet glaciers surrounded by complex topography. If successful, the developed framework has the potential to further extend the capability of satellite radar altimetry over complex glaciological targets, and to improve the accuracy and coverage of the measurements needed to understand the extent, magnitude, and timescales of glacier change across these regions.
A key component of the Greenland ice sheet surface mass balance is the occurrence of extreme precipitation (snowfall) events, during which warm air masses bring moist air onto the Greenland ice sheet and deposit massive amounts of snow in the affected area. These events are common in the southeastern parts of the ice sheet but are also observed in other places such as the northwest. In October 2016, the extra-tropical cyclones Matthew and Nicole hit Greenland over a two-week period near the town of Tasiilaq. Matthew gave record-high rainfall at Tasiilaq, whereas the precipitation from Nicole fell predominantly over the ice sheet as snow. Results from the high-resolution numerical weather prediction (NWP) model HARMONIE-AROME (displayed at Polarportal.dk), used to drive a surface mass budget (SMB) model, show a peak in Greenland surface mass balance of 12 Gt/day during this event, mainly driven by the snowfall on the Greenland east coast. Another, less observed, event occurred in October 2019 near Thule in the northwestern part of the Greenland ice sheet. Here, the nearby meteorological station at Qaanaaq does not measure precipitation but did measure increased relative humidity, which gives an indication of a large precipitation event on the ice sheet. The NWP model here estimates a deposition of about 4 Gt/day of snow in the area during the event.
The occurrence of extreme precipitation events is a difficult phenomenon to model at the typical scales of existing regional climate models (RCM), and the limited in-situ observations of these events on ice sheets make it even harder to improve model estimates of accumulation in space, time, and quantity. These problems are an order of magnitude bigger in Antarctica, where extreme precipitation events also contribute disproportionately to the ice sheet mass budget. Luckily, we are now in a golden era of satellite radar altimetry, with multiple satellites measuring elevation change at different radar frequencies. With the difference in frequency comes also a difference in the ratio between volume and surface scattering observed by the individual missions. In addition to multiple satellite radar altimeters, we also have a massive lidar dataset available from ICESat-2. In Greenland, we are fortunate also to have the high-quality PROMICE weather station data sets that allow us to calibrate and evaluate both satellite and model outputs in some specific areas.
Hence, it is time to unify this wealth of satellite data to provide a new source of observations to shed insight on the occurrence of extreme precipitation events and thereby improve the predictive capabilities of NWPs. As satellite altimeters are so diverse in their instrumental setup and sensing capabilities, we first divide our efforts along three parallel lines of work:
(1) Conventional radar altimeter (elevation retrieval) investigations. Raw elevation measurements from Ku- and Ka-band radar altimetry are affected differently by changes in surface properties. Initial studies have shown how the range to the Greenland ice sheet changes differently at the two frequencies, which may be related to surface conditions varying through time. This difference is used to map the first-order surface behavior during the extreme precipitation events.
(2) Enhanced radar altimeter (surface power modeling) investigations. The strength with which radar waves are reflected is affected by several physical factors, including the contrast in electromagnetic properties across the surface interface as well as the roughness of that interface. Both are expected to change during the extreme precipitation events and may serve as a secondary proxy for precipitated snow.
(3) Laser altimetry (ICESat-2). The multiple returns of the lidar photons allow for further investigations into individual snow regimes before, during and after the occurrence of an extreme precipitation event. Examining the photons reflected off subsurface snow, surface snow and/or blowing snow thereby provides further insight into the nature of the events.
Finally, combining all three pieces of the puzzle provided by satellite altimetry into a common view of the extreme precipitation events will provide the observational constraints needed to improve the predictive capabilities of climate models in the future and truly make use of this current golden era of satellite altimetry. We apply this analysis in the first instance to evaluate high-magnitude precipitation events over the Greenland ice sheet in the newly released Copernicus Arctic Regional Reanalysis. The reanalysis is run at unprecedentedly high resolution with 3D variational data assimilation and a state-of-the-art numerical weather prediction model.
The Getz region is a large, marine-terminating sector of West Antarctica, which is losing ice at an increasing rate; however, the forcing mechanisms behind these changes remain unclear. Despite the area of the Getz Ice Shelf remaining relatively stable over the last three decades, strong ice shelf thinning has been observed since the 1990s. The region is one of the largest sources of freshwater input to the Southern Ocean, more than double that of the neighbouring Amundsen Sea ice shelves. In this study we use satellite observations, including Sentinel-1, and the BISICLES ice sheet model to measure ice speed and mass balance of Getz over the last 25 years. Our observations show a mean speedup of 23.8 % between 1994 and 2018, with three glaciers speeding up by over 44 %. The observed speedup is linear and directly correlated with ice sheet thinning, confirming the presence of dynamic imbalance in this region. The Getz region has lost 315 Gt of ice since 1994, contributing 0.9 ± 0.6 mm to global mean sea level, with an increased rate of ice loss since 2010 caused by a reduction in snowfall. On all glaciers, the speed increase coincides with regions of high surface lowering, where a ~50% speedup corresponds to a ~5% reduction in ice thickness. The pattern of ice speedup indicates a localised response on individual glaciers, demonstrating the value of high spatial resolution satellite observations that resolve the detailed pattern of dynamic imbalance across the Getz drainage basin. Partitioning the influence of both surface mass and ice dynamic signals in Antarctica is key to understanding the atmospheric and oceanic forcing mechanisms driving recent change. Dynamic imbalance accounts for two thirds of the mass loss from Getz over the last 25 years, with a longer-term response to ocean forcing the likely driving mechanism. Consistent and temporally extensive sampling of both ocean temperatures and ice speed will help further our understanding of dynamic imbalance in remote areas of Antarctica in the future. Following this work, 9 of the 14 glaciers in the region have recently been named after the locations of major climate conferences, treaties and reports, celebrating the importance of international collaboration on science and climate policy action.
Global sea level rise and associated flood and coastal change pose the greatest climate change risk to low-lying coastal communities. Over the past century, global sea level has risen 1.7 ± 0.3 mm per year on average, although this rate increased to 3.7 ± 0.5 mm per year between 2006 and 2018 (IPCC AR6), and models predict that this acceleration in global sea level rise is only set to continue. Earth’s ice sheets present a large uncertainty in the global sea level budget; it is therefore vital to monitor ice flow in Antarctica in order to quantify the size and timing of the ice sheet’s contributions to global sea level rise.
Satellite observations have shown that the West Antarctic Ice Sheet is dynamically imbalanced, as ice mass loss from the flow of outlet glaciers is larger than the mass gained via snow accumulation. In contrast, East Antarctica is thought to have been in either equilibrium or positive mass balance over the last 20 years (Shepherd et al., 2012), although some regions of localised thinning have been observed (McMillan et al., 2014). Although East Antarctica has contributed 7.4 ± 2.4 mm of sea level rise since 1992 (IPCC AR6), the accuracy and thus significance of this ice loss with regards to sea level rise over the last 30 years is uncertain (Rignot et al., 2019). The Lambert Glacier–Amery Ice Shelf drainage basin is one of the largest in East Antarctica, and is therefore important in assessing Antarctica’s present and future sea level contribution.
In this study we present ice velocity measurements from late 2014 to the present day, using intensity feature tracking of Synthetic Aperture Radar (SAR) image pairs acquired predominantly by the Copernicus Sentinel-1 mission. We use 6-day repeat-pass Single Look Complex (SLC) SAR images acquired in Interferometric Wide (IW) swath mode from both the Sentinel-1A and Sentinel-1B satellites to investigate ice velocity changes on a weekly timescale. Focusing initially on Lambert Glacier in East Antarctica, these ice velocity results are combined with surface and bed topography measurements to determine ice flux, and then converted to mass balance using the input-output method, to assess ice mass change over time in East Antarctica.
Ice loss from Antarctica and Greenland has caused global mean sea level to rise by more than 1.8 cm since the 1990s, and observations of mass loss are currently tracking the IPCC AR5’s worst-case model scenarios (Slater et al., 2020). Satellite observations have shown that ice loss in Antarctica is dominated by ice dynamic processes, where mass loss occurs on ice streams that speed up and subsequently thin, such as in the Amundsen Sea Embayment in West Antarctica. Here this thinning, and the related retreat of ice sheet grounding lines, has been recorded since the 1940s, and is driven by the advance of warm modified Circumpolar Deep Water onto the continental shelf, which melts the base of the floating ice shelves. This incursion is linked to atmospheric forcing driven by the El Niño-Southern Oscillation (ENSO). Ice velocity observations can be used in conjunction with measurements of thickness and surface mass balance to determine ice sheet mass balance. This is essential as the ice sheet contribution to the global sea level budget remains the greatest uncertainty in future projections of sea level rise (Robel et al., 2019), driven in part by positive feedbacks such as the Marine Ice Sheet Instability (MISI). Both long-term and emerging new dynamic signals must be accurately measured to better understand how ice sheets will change in the future, and consistent records from satellite platforms are required to separate natural variability from anthropogenic signals (Hogg et al., 2021).
In this study we present measurements of ice stream velocity in the Amundsen Sea sector of West Antarctica. Our results cover the whole operational period of Sentinel-1, from 2014 onwards, and are determined using intensity feature tracking on pairs of Level 1 Interferometric Wide (IW) swath mode Single Look Complex (SLC) Synthetic Aperture Radar (SAR) images from both the Sentinel-1A and Sentinel-1B satellites. We show that during the study period ice speeds have changed on a number of glaciers in the study region, including Pine Island Glacier, demonstrating the critical importance of continuous, near-real-time monitoring from satellites.
Slater, T., Hogg, A.E. & Mottram, R. (2020) Ice-sheet losses track high-end sea-level rise projections. Nat. Clim. Chang. 10, 879–881; DOI: 10.1038/s41558-020-0893-y
Robel, A.A., Seroussi, H. & Roe, G.H. (2019) Marine ice sheet instability amplifies and skews uncertainty in projections of future sea-level rise. P.N.A.S. 116 (30); DOI: 10.1073/pnas.1904822116
Hogg, A.E., Gilbert, L., Shepherd, A., Muir, A.S. & McMillan, M. (2021) Extending the record of Antarctic ice shelf thickness change, from 1992 to 2017. A.S.R.; DOI: 10.1016/j.asr.2020.05.030
Satellite and tower-based SAR observations of boreal forests were investigated to study the influence of temperature changes on the SAR backscatter of the ground surface and forest canopy during winter. Soil freezing increases the penetration of microwave radiation into the soil, thus reducing the observed backscatter over a wide range of microwave frequencies. Recent studies show that decreasing winter air temperatures, causing gradual freezing of the tree canopy, increase the canopy transmissivity (i.e., reduce the optical depth) for microwaves in the L- to W-band frequency range (Li et al., 2019; Schwank et al., 2021). Similarly, radar backscatter from vegetation has been observed to decrease due to freezing at P- to L-bands (Monteith and Ulander, 2018). However, the backscatter observed over forest canopies with the Sentinel-1 C-band SAR increased in very cold winter conditions following canopy freezing (Cohen et al., 2019). The structure and the freezing process affecting the microwave signature of boreal forest canopies are complex. The influence of decreasing air temperature and the consequent canopy freezing on the SAR backscatter has not yet been deeply investigated. Understanding the effect of a freezing canopy on the backscatter at below-zero temperatures is important, for instance, in satellite SAR based retrieval of the freeze/thaw (F/T) state of the soil, as well as in the detection of other surface parameters.
In this study, we analyzed more than 50 ALOS-2 L-band and a similar number of Sentinel-1 C-band SAR satellite acquisitions acquired during winters 2019-2020 and 2020-2021 from Northern Finland. We also performed continuous tower-based SAR measurements in L-, S-, C- and X-bands during the same time periods over a test plot of boreal forest located in the Sodankylä Arctic Space Centre, Northern Finland. A simple water cloud model (Attema and Ulaby, 1978) was applied to simulate the SAR observations of the different frequencies, for retrieving the components affecting the total observed backscatter, such as the ground and canopy backscatter and the canopy transmissivity, in various winter conditions. Special attention was given to the influence of below-zero air temperature changes on the backscatter of the forest canopy, and the implications on satellite SAR based detection of the soil F/T state in the boreal forest environment.
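For reference, the zeroth-order water cloud model expresses the total backscatter as canopy volume scattering plus ground scattering attenuated by the two-way canopy transmissivity. The sketch below is a minimal Python illustration of that model; the parameter values A, B and the vegetation descriptor V are placeholders, not the values retrieved in this study.

import numpy as np

def water_cloud_backscatter(sigma0_ground_db, V, theta_deg, A, B):
    """Water cloud model (Attema and Ulaby, 1978); linear power units internally.

    sigma0_ground_db : bare-ground backscatter [dB]
    V                : vegetation descriptor (e.g. canopy water content); placeholder
    theta_deg        : incidence angle [degrees]
    A, B             : empirical canopy parameters (frequency dependent); placeholders
    """
    theta = np.deg2rad(theta_deg)
    gamma2 = np.exp(-2.0 * B * V / np.cos(theta))           # two-way canopy transmissivity
    sigma0_canopy = A * V * np.cos(theta) * (1.0 - gamma2)  # canopy volume scattering
    sigma0_ground = 10.0 ** (sigma0_ground_db / 10.0)       # dB -> linear
    total = sigma0_canopy + gamma2 * sigma0_ground          # attenuated ground + canopy
    return 10.0 * np.log10(total), gamma2                   # back to dB

# Illustrative call only; a frozen canopy is mimicked here by a lower effective V:
print(water_cloud_backscatter(-12.0, V=3.0, theta_deg=40.0, A=0.01, B=0.1))
print(water_cloud_backscatter(-12.0, V=1.0, theta_deg=40.0, A=0.01, B=0.1))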
Our preliminary results show that for all analyzed frequencies canopy freezing increases the transmissivity of the forest canopy, when comparing reflecting targets set beneath the forest canopy to reference targets in open areas. On the other hand, for the same forests, the changes in the backscatter observed over the forest canopy caused by very cold winter air temperatures were opposite for the high (C, X) and low (L, S) frequencies. As observed previously for Sentinel-1, freezing of the canopy increased the backscatter observed over the forest canopy for C-band SAR. For the higher-frequency X-band, the increase in canopy backscatter following canopy freezing was even more prominent. However, for the lower-frequency S- and L-bands, canopy freezing led to reduced overall backscatter over forest canopies. Concerning satellite-based soil F/T detection with L-band SAR, these results are encouraging, as the freezing of both soil and canopy leads to lower observed backscatter over boreal forests. In contrast, for C-band, the freezing of soil decreases the backscatter from the ground, but canopy freezing increases the backscatter observed over the canopy, adding complexity to satellite-based soil F/T detection. Additional research regarding the relation between canopy transmissivity and canopy backscatter following air temperature changes is required in order to gain a better understanding of the overall behavior of the forest canopy in SAR remote sensing.
Attema E. P. W. and Ulaby F. T., (1978). Vegetation modelled as a water cloud. Radio Science, vol. 13, no. 2, pp. 357-364.
Cohen J., Rautiainen K., Ikonen J., Lemmetyinen J., Smolander T., Vehviläinen J., and Pulliainen J., (2019). A Modeling-Based Approach for Soil Frost Detection in the Northern Boreal Forest Region With C-Band SAR, IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 2, pp. 1069-1083.
Monteith A., and Ulander L., (2018). Temporal Survey of P- and L-Band Polarimetric Backscatter in Boreal Forests. IEEE JSTARS, vol 11, no. 10, pp. 3564-3577.
Schwank M., Kontu A., Mialon A., Naderpour R., Houtz D., Lemmetyinen J., Rautiainen K., Li Q., Richaume P., Kerr Y, and Mätzler C., (2021). Temperature effects on L-band vegetation optical depth of a boreal forest, Remote Sensing of Environment, vol. 263.
Pronounced climatic changes have been observed at the Antarctic Peninsula within the past decades, and its glaciers and ice caps have been identified as a significant contributor to global sea level rise. Dynamic thinning and speed-up have been reported for various tidewater glaciers on the western Antarctic Peninsula. On the east coast, several ice shelves have disintegrated since 1995. Consequently, former tributary glaciers showed increased flow velocities due to the missing buttressing, leading to substantial ice mass loss. Various studies have been carried out to quantify the ice mass loss and ice discharge to the ocean at the Antarctic Peninsula using different approaches. However, the results are still subject to substantial uncertainties, in particular for the northern section of the Antarctic Peninsula (< 70°S).
Thus, the aim of this project is to carry out an enhanced analysis of glacier mass balances and ice dynamics throughout the Antarctic Peninsula (< 70°S) using various remote sensing data, in-situ measurements and model output. By analyzing bistatic SAR satellite acquisitions, a spatially detailed coverage of surface elevation change over the study area will be achieved to compute geodetic glacier mass balances on regional and glacier scales. Information on ice dynamics will be derived from multi-mission SAR acquisitions using offset tracking techniques. In combination with the latest ice thickness data sets, the spatiotemporal variability of the ice discharge to the ocean will be evaluated. By including information from in-situ measurements and model output of atmospheric and oceanic parameters, the driving factors of the obtained change patterns will be assessed to enhance the understanding of the ongoing change processes.
In the polar regions, the state of the surface is essential to understanding and predicting the surface energy and mass budgets, which are two key snow-meteorological variables for the study of the climate and of the ice sheets' contribution to sea level rise. The inter-annual variations in melt duration and extent are valuable indicators of the summer climate in the coastal regions of the ice sheets, especially on ice shelves where meltwater contributes to hydrofracturing and destabilisation.
Liquid water has a significant impact on the microwave emissivity of the surface, and several studies have exploited brightness temperature time series at 1.4, 19 and 37 GHz to provide binary melt indicators (Torinesi et al., 2003, Picard et al., 2006, Leduc-Leballeur et al., 2020). However, these indicators show differences, which reflect differences in the depth down to which the presence of water can be detected at the different frequencies. For example, comparisons between the melt seasons obtained from 1.4 GHz observations with the Soil Moisture and Ocean Salinity (SMOS) satellite and 19 GHz observations with the Special Sensor Microwave Imager (SSM/I) showed that the large penetration depth at 1.4 GHz allows wet snow to be detected at depth, contrary to 19 GHz, which is limited to the upper centimetres below the surface. As a consequence, the duration of the melt season (onset, freeze-up) observed at the different frequencies can also vary. This highlights the potential of a multi-frequency combination to provide complementary information.
In the framework of the ESA 4D-Antarctica project, we propose to combine the binary melt indicators from the single frequencies to provide enhanced insight into the melt process. We focus on the 36 GHz and 19 GHz observations from the Advanced Microwave Scanning Radiometer 2 (AMSR2) and the 1.4 GHz observations from SMOS. A detailed theoretical analysis has been performed to explore the sensitivity of these frequencies to wet snow. In particular, we noted the potential of 36 GHz to distinguish different stages of near-surface melting, while 1.4 GHz identifies the most intense period of melt during the summer. Moreover, AMSR2 provides observations in the afternoon (ascending pass) and in the night (descending pass). This allows the possible presence of a refrozen surface layer to be detected based on 19 GHz and 36 GHz. The final combined indicator is composed of seven melt statuses, each matching a particular physical description of the snowpack. It allows determining whether a melt event was limited to the surface of the snowpack or was intense enough to inject significant amounts of water at depth, and whether refreezing happens during the night. This new product provides a clear and synthetic description of the melt status along the season. It also opens a promising perspective for use with the Copernicus Imaging Microwave Radiometer (CIMR).
Our understanding of the Antarctic Ice Sheet’s response to climate change is limited. Quantifying the processes that drive changes in ice mass or ice sheet elevation is needed to improve it. So far, signals related to surface mass balance (SMB) and firn compaction are poorly constrained, especially on a sub basin scale. We analyze these signals by distinguishing between fluctuations at decadal to monthly time scales (‘weather effects’) and long-term trends due to past and ongoing climate change (‘climate effects’). We use surface elevation changes (SEC) from multi-mission satellite altimetry and firn thickness variations from SMB and firn modelling results over the time period 1993 to 2016. Dominant temporal patterns are identified by the model output. They capture the occurrence of events affecting firn thickness and characterize the weather effects. We fit these patterns to the temporal variations of altimetric SEC by estimating the related amplitudes and spatial patterns. First results indicate stronger amplitudes of the weather effects observed by altimetry than by the model results with an increase of this difference in amplitudes towards the ice sheet margins. By means of our approach, it is possible to characterize in a statistical sense and to quantify in a deterministic sense the weather-induced fluctuations in firn thickness at a local scale and explore unexplained signals in the altimetric SEC that may imply SMB-related climate effects apart from effects induced by changing ice flow dynamics. A better understanding of the ice sheet processes can then contribute to improvements in SMB and firn modelling.
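As an illustration of the fitting step described above, the sketch below assumes the dominant temporal patterns have already been extracted from the modelled firn-thickness variations (for example as leading principal components) and estimates, per grid cell, the amplitudes that best reproduce the altimetric SEC time series in a least-squares sense. The function name and the use of numpy.linalg.lstsq are illustrative assumptions, not the project's actual implementation.

import numpy as np

def fit_pattern_amplitudes(sec_cell, temporal_patterns, t_years):
    """Fit model-derived temporal patterns (plus a linear trend) to one SEC time series.

    sec_cell          : (n_epochs,) altimetric surface elevation change at one grid cell [m]
    temporal_patterns : (n_epochs, n_patterns) dominant patterns from SMB/firn modelling
    t_years           : (n_epochs,) epoch times in years
    Returns estimated amplitudes ("weather effects"), trend ("climate effect"), residual.
    """
    design = np.column_stack([temporal_patterns,
                              t_years,                   # long-term trend
                              np.ones_like(t_years)])    # offset
    coeffs, *_ = np.linalg.lstsq(design, sec_cell, rcond=None)
    amplitudes, trend = coeffs[:-2], coeffs[-2]
    residual = sec_cell - design @ coeffs                # unexplained altimetric signal
    return amplitudes, trend, residual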
Regional climate models (RCM) compute ice sheet surface mass balance (SMB) over Antarctica using reanalysis data to obtain the best estimate of present-day SMB. Estimates of the SMB vary between RCMs due to differences such as the dynamical core, physical parameterizations, model set-up (resolution and nudging), and topography, as well as the ice mask. The ice mask in a model defines the surface covered by glacier ice where the glacier surface scheme needs to be applied. Here we show that, as different models use slightly different ice masks, there is a small but important difference in the area covered by ice that leads to substantial differences in SMB when integrated over the continent. To circumvent this area-dependent bias, intercomparison studies of modelled SMB use a common ice mask (Mottram et al., 2021). The SMB in areas outside the common ice mask, which are typically coastal and high-precipitation regions, is discarded. By comparing the native ice masks with the common ice mask used in Mottram et al. 2021 we find differences in integrated SMB of between 40.5 and 140.6 Gt (gigatonnes) per year over the ice sheet including ice shelves, and between 20.1 and 102.4 Gt per year over the grounded part of the Antarctic ice sheet, when compared to the ensemble mean from Mottram et al. 2021. These differences are nearly equivalent to the entire Antarctic ice sheet mass imbalance identified in the IMBIE study.
SMB is particularly important when estimating the total mass balance of an ice sheet via the input-output method, in which ice discharge is subtracted from the SMB to derive the mass change. We use the RCM HIRHAM5 to simulate the Antarctic climate and force an offline subsurface firn model, to simulate the Antarctic SMB from 1980 to 2017. We use discharge estimates from two previously published studies to calculate the regional-scale mass budget. To validate the results from the input-output method, we compared them to the gravimetry-derived mass balance from the GRACE/GRACE-FO mass loss time series, computed for the period 2002–2020. We find good agreement between the two input-output results and GRACE in West Antarctica; however, there are large disagreements between the two input-output methods in East Antarctica and over the Antarctic Peninsula. Over the entire grounded ice sheet, GRACE detects a mass loss of 900 Gt for the period 2002-2017, whereas the two input-output results show a mass gain of 500 Gt and a mass loss of 4000 Gt, depending on which discharge dataset is used. These results are integrated over the native HIRHAM5 ice mask. If we instead integrate over the common ice mask from Mottram et al. 2021, the results change from a mass gain of 500 Gt to a mass loss of 500 Gt, and from a mass loss of 4000 Gt to a mass loss of 5000 Gt, over the grounded ice sheet for this period. While the differences in ice discharge remain the largest source of uncertainty in the Antarctic ice sheet mass budget, our analysis shows that even a small area bias in the modelled ice mask can have a large impact on SMB estimates in high-precipitation areas. We conclude there is a pressing need for a common ice mask protocol, to create an accurate, harmonized and updated ice mask.
The grounding line marks the transition between ice grounded on the bedrock and the floating ice shelf. Its location is required for estimating ice sheet mass balance [Rignot & Thomas, 2002], modelling of ice sheet dynamics and glaciers [Schoof 2007], [Vieli & Payne, 2005] and evaluating ice shelf stability [Thomas et al., 2004], which merits its long-term monitoring. The line migrates both due to short-term influences such as ocean tides and atmospheric pressure, and long-term effects such as changes of ice thickness, slope of bedrock and variations in sea level [Adhikari et al., 2014].
The grounding line is one of four parameters of the Antarctic Ice Sheet (AIS) Essential Climate Variable within ESA’s Climate Change Initiative (CCI) programme. The grounding line location (GLL) geophysical product was designed within AIS_CCI and has been derived through the double-difference InSAR technique from ERS-1/2 SAR, TerraSAR-X and Sentinel-1 data over major ice streams and outlet glaciers around Antarctica. In the current stage of the CCI project, we have interferometrically processed dense time series throughout the year from the Sentinel-1 A/B constellation, aiming at monitoring the short-term migration of the DInSAR fringe belt with respect to different tidal and atmospheric conditions. Whereas the processing chain runs automatically from data download to interferogram generation, the grounding line is manually digitized on the double-difference interferograms. Inconsistencies are introduced due to varying interpretation among operators, and the task becomes more challenging when using low-coherence interferograms. On a large scale this final stage of processing is time consuming, hence urging the need for automation.
An attempt in this direction was made in the study of [Mohajerani et al., 2021], where a fully convolutional neural network (FCN) was used to delineate grounding lines on Sentinel-1 interferograms. In a similar vein, the performance of deep learning paradigms for glacier calving front detection [Cheng et al., 2021], [Baumhoer et al., 2019] showcases the strengths of using machine learning for such tasks. However, unlike grounding lines, calving fronts are visible both in optical and SAR imagery, which makes a greater amount of training data available. The visibility of the calving front also enables the use of classical image processing techniques [Krieger & Floricioiu, 2017]. Additionally, the complexity of InSAR processing and wrapped phases is absent.
This study further investigates the feasibility of automating the grounding line digitization process using machine learning. The training data consist of double-difference interferograms and corresponding manually delineated AIS_CCI GLLs derived from SAR acquisitions between 1996 and 2020 over Antarctica. In addition to these, features such as ice velocity, elevation information, tidal displacement, noise estimates from phase and atmospheric pressure are analyzed as potential inputs to the machine learning network. The delineation is modelled both as a semantic segmentation problem and as a boundary detection problem, exploring popular existing architectures such as U-Net [Ronneberger et al., 2015], SegNet [Badrinarayanan et al., 2017] and Holistically-Nested Edge Detection [Xie & Tu, 2015]. The resulting grounding line predictions will be examined with respect to their usability in the detection of short-term variations of the grounding line as well as the potential separation of a signal of long-term migration. The detection accuracy will be compared to that achieved by human interpreters.
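To make the segmentation formulation concrete, the sketch below shows a deliberately tiny encoder-decoder in PyTorch trained against rasterised grounding-line labels. It is an illustrative stand-in only, not the U-Net, SegNet or HED architectures evaluated in the study; the choice of four input channels, the class weight and the tensor sizes are assumptions.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for per-pixel grounding-zone classification (illustrative).

    The input channels could stack the double-difference interferogram (e.g. as phase
    or real/imaginary parts) with auxiliary features such as ice velocity and elevation.
    """
    def __init__(self, in_channels=4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),          # logits for the grounding-line class
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# Training would minimise a class-weighted loss against manually digitized GLLs
# rasterised onto the interferogram grid (dummy tensors below, assumed shapes):
model = TinySegNet(in_channels=4)
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(50.0))  # line pixels are rare
x = torch.randn(1, 4, 256, 256)
y = torch.zeros(1, 1, 256, 256)
loss = loss_fn(model(x), y)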
References
Adhikari, S., Ivins, E. R., Larour, E., Seroussi, H., Morlighem, M., and Nowicki, S. (2014). Future Antarctic bed topography and its implications for ice sheet dynamics, Solid Earth, 5, 569–584
Baumhoer, C. A., Dietz, A. J., Kneisel, C., & Kuenzer, C. (2019). Automated extraction of antarctic glacier and ice shelf fronts from sentinel-1 imagery using deep learning. Remote Sensing, 11(21), 2529
Badrinarayanan, V., Kendall, A., Cipolla, R., (2017). Segnet: A deep convolutional encoder-decoder architecture for scene segmentation. IEEE transactions on pattern analysis and machine intelligence.
Cheng, D., Hayes, W., Larour, E., Mohajerani, Y., Wood, M., Velicogna, I., & Rignot, E. (2021). Calving Front Machine (CALFIN): glacial termini dataset and automated deep learning extraction method for Greenland, 1972–2019. The Cryosphere, 15(3), 1663-1675
Krieger, L., & Floricioiu, D. (2017). Automatic calving front delineation on TerraSAR-X and Sentinel-1 SAR imagery. In 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)
Mohajerani, Y., Jeong, S., Scheuchl, B., Velicogna, I., Rignot, E., & Milillo, P. (2021). Automatic delineation of glacier grounding lines in differential interferometric synthetic-aperture radar data using deep learning. Scientific reports, 11(1), 1-10.
Rignot, E., & Thomas, R. H. (2002). Mass balance of polar ice sheets. Science, 297(5586), 1502-1506.
Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. ISBN 978-3-319-24573-7
Schoof, C. (2007). Ice sheet grounding line dynamics: Steady states, stability, and hysteresis, J. Geophys. Res., 112, F03S28, doi:10.1029/2006JF000664.
Xie, S., Tu, Z., 2015. Holistically-nested edge detection. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1395-1403
Thomas, R., Rignot, E., Casassa, G., Kanagaratnam, P., Acuña, C., Akins, Brecher, H., Frederick, E., Gogineni, P., Krabill, W., Manizade, S., Ramamoorthy, H., Rivera, A., Russell, R., Sonntag, J., Swift, R., Yungel, J., & Zwally, J., (2004). Accelerated sea-level rise from West Antarctica. Science, 306(5694), 255-258.
Vieli, A., & Payne, A. J. (2005). Assessing the ability of numerical ice sheet models to simulate grounding line migration, J. Geophys. Res., 110, F01003, doi:10.1029/2004JF000202
Surface elevation measurements of the ice sheets are the primary component of mass balance studies, and ESA’s CryoSat-2 mission provides the most complete record and coverage of ice sheet change since its launch in 2010. For mass balance studies it is essential that this 12+ year record of elevation measurements is made available to users from a consistent, state-of-the-art, and validated radar altimetry processing baseline, otherwise there is a high likelihood of introducing steps in the measurement time series, leading to incorrect mass balance results. Due to the complexity of and restrictions on the complete ESA mission ground segment, standard operational CryoSat-2 L2 products undergo a full mission reprocessing to incorporate new evolutions approximately every 2.5 years, and during the intervening period a mix of two baselines is often present, resulting in a potentially inconsistent measurement time series. This is a serious issue for scientists interested in mass balance research, and significantly restricts usage to radar altimetry experts with in-depth technical knowledge of the product differences.
The ESA Cryo-TEMPO project aims to solve this problem by developing a new, agile, full-mission data set of thematic CryoSat products (for land ice, sea ice, polar oceans, coastal oceans and inland waters), released on an annual basis. The thematic products are developed using dedicated state-of-the-art processing algorithms for each thematic domain, driven by and aligned with the needs of both expert and non-expert users, and for the first time include traceable and transparent measurement uncertainties. The products are validated by a group of thematic users, thus ensuring optimal relevance and impact for the intended target communities.
Here, we present validation results from the first full mission release of Cryo-TEMPO land ice products, providing details of the new products, and the processing evolutions, which will benefit all users requiring land ice elevation and associated essential measurements for mass balance studies. We also show details of the new Cryo-TEMPO portal allowing users to explore the Cryo-TEMPO land ice product maps, measurement statistics, and monthly reports over Greenland, Antarctica and sub-regions of interest for the full mission period.
For several decades, Synthetic Aperture Radar (SAR) satellites have been applied in the measurement of the velocity of glaciers and ice sheets. Compared to earlier missions, the Sentinel-1 satellites, with a wide-swath acquisition mode and a 6-day repeat-pass period, provide a polar data archive of unprecedented size, allowing for frequent revisit of outlet glaciers and the interior Greenland ice sheet. Amplitude-based tracking methods are routinely applied to generate average ice velocity measurements on time scales ranging from a month to multiple years. On shorter time scales, noise levels in tracking-based measurements approach tens of m/y [1, 2], which is close to the signal level in the ice sheet interior and upstream parts of glaciers. Conversely, Differential SAR Interferometry (DInSAR), which is based on the radar phase signal, allows for velocity measurements with a significantly lower noise level ( < 0.5 m/y [3]) and higher resolution. Consequently, averaging of multiple acquisitions is generally not necessary to achieve measurements of high accuracy, even in slow-moving regions, and hence high quality velocity measurements can be made every six days.
A limitation of DInSAR is that it is only applicable in areas where interferometric coherence is retained, meaning that the very fast-flowing parts of outlet glaciers cannot be measured, due to phase aliasing. Hence, an obvious synergy exists between the tracking- and phase-based methods, which has been exploited for past SAR missions [1, 2]. For Sentinel-1, however, DInSAR has not been routinely applied in the retrieval of ice velocity, owing to additional challenges caused by a coupling between the differential phase and azimuth motion introduced by the TOPS acquisition mode. Recently, a solution to these challenges has been proposed [3, 4], unlocking the possibility for an improved exploitation of the Sentinel-1 archive.
The Northeast Greenland Ice Stream (NEGIS) is the only major dynamic feature of Greenland that extends continuously into the interior of the ice sheet near Greenland’s summit. Zachariae Isstrøm, Nioghalvfjerdsfjorden glacier and Storstrømmen, which form the NEGIS, drain an area representing more than 16% of the Greenland ice sheet. While Nioghalvfjerdsfjorden and Storstrømmen are still close to mass balance, Zachariae Isstrøm has begun a rapid retreat after detaching from a stabilizing sill in the late 1990s. Since 1999, the glacier flow has almost doubled and its acceleration has increased significantly after 2012, resulting in significant mass loss of this sector of Greenland [5]. Destabilization of this marine sector could increase sea level rise from the Greenland ice sheet for decades to come. While these changes in ice mass and motion are well documented near the ice margin, it remains to be established how the interior of the ice sheet responds to the change in stress balance that occurs at its margin. In other words, the extent to which multi-year and seasonal changes in dynamics due to variations in the calving front position propagate upstream of the glaciers is still unclear.
In this work, we apply Sentinel-1 DInSAR to generate a long, densely sampled time series of ice velocity measurements for the NEGIS. All available Sentinel-1 acquisitions are used, meaning that the temporal sampling is 6 days (12 days prior to the launch of Sentinel-1B) and the spatial sampling is 50x50 m. The goal is to investigate any long- and/or short-term changes in velocity as well as seasonal effects on the NEGIS. Similar studies have previously been carried out with tracking-based methods, typically focusing on the downstream parts of glaciers, where changes in velocity exceed the amplitude-tracking noise levels. In this study, we focus on velocity changes in the slower-moving upstream parts, where the higher accuracy and spatial/temporal resolution of DInSAR allows for significantly improved results.
Finally, we discuss the benefits and challenges of SAR interferometry compared to tracking methods in monitoring dynamical changes and conclude on the amplitude and extent of the current flow acceleration of the NEGIS due to the recent retreat of Zachariae Isstrøm.
References
[1] I. Joughin, B. E. Smith, and I. M. Howat, “A complete map of Greenland ice velocity derived from satellite data collected over 20 years,” Journal of Glaciology, vol. 64, no. 243, pp. 1–11, (2018)
[2] J. Mouginot, E. Rignot, and B. Scheuchl, “Continent-wide, interferometric SAR phase, mapping of Antarctic ice velocity,” Geophysical Research Letters, vol. 46, pp. 9710–9718, (2019)
[3] J. Andersen, A. Kusk, J. Boncori, C. Hvidberg, and A. Grinsted, “Improved ice velocity measurements with Sentinel-1 TOPS interferometry,” Remote Sensing, vol. 12, no. 12, (2020)
[4] A. Kusk, J. K. Andersen, and J. P. M. Boncori, “Burst overlap coregistration for Sentinel-1 TOPS DInSAR ice velocity measurements,” IEEE Geoscience and Remote Sensing Letters, (2021)
[5] J. Mouginot, E. Rignot, B. Scheuchl, I. Fenty, A. Khazendar, M. Morlighem, A. Buzzi, and J. Paden, "Fast retreat of Zachariæ Isstrøm, northeast Greenland", Science, vol. 350, no. 6266, (2015)
The volume of freshwater (solid and liquid) exported from glaciers to the ocean is important for the global climate system, as an increase in the freshwater content can slow down the large-scale thermohaline circulation and change the mass balance of the glaciers. The solid part of the freshwater is exported as icebergs that break off the marine-terminating glaciers. Because of this, the iceberg density around Greenland is linked to the glacial surface velocities. As a direct consequence of the climate response, Arctic sea ice has experienced a rapid reduction in extent and thickness in recent decades, opening up the opportunity for increased shipping activities in the Arctic and adjacent seas. The increase in shipping is expected to continue as the ice-free season becomes longer and new routes open up. Icebergs are a large hazard to ships, especially in the near-coastal areas of Greenland. Thus, detection of icebergs of all sizes, even growlers and bergy bits, is important.
This presentation aims at linking the observed outflow of glacial ice to the total iceberg volume and, based on this, predicting the iceberg density and the solid freshwater contribution from the glaciers. We estimate the glacial outflow from observations of ice surface velocities through a defined fluxgate near Upernavik, in north-western Greenland, and compare it to the iceberg volumes estimated from Copernicus Sentinel-1 SAR images using the Danish Meteorological Institute iceberg detection algorithm CFAR, and from high-resolution SPOT images using a semi-automatic classification algorithm.
The flux of glacial ice is calculated using a processor developed within the Polar Thematic Exploitation Platform (P-TEP). It estimates time series of glacial ice fluxes through a pre-defined fluxgate at a selected glacier, given a user-defined input of surface velocity (vsurf). The solid ice discharge through a flux gate (F) of length (L) and ice thickness (H) is given by F = f * vproj * H * L, where f = 0.93 is the mean ratio of depth-averaged to surface velocity and vproj is vsurf projected onto the gate-perpendicular direction. We use the Morlighem bedrock model to estimate the ice thickness at the fluxgate, and PROMICE and MEaSUREs glacial surface velocities based on Copernicus Sentinel-1 SAR images.
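As a minimal illustration of the flux-gate calculation above (a sketch only: the gate geometry, velocities, thicknesses and the assumed ice density of 917 kg/m³ are placeholder values, not those of the P-TEP processor):

import numpy as np

def solid_ice_discharge(v_surf, angle_to_gate_normal_deg, thickness, segment_length,
                        rho_ice=917.0, f=0.93):
    """Solid ice discharge through a flux gate, F = f * vproj * H * L, summed over segments.

    v_surf                   : (n,) surface speed at each gate segment [m/yr]
    angle_to_gate_normal_deg : (n,) angle between flow direction and gate normal [deg]
    thickness                : (n,) ice thickness H at each segment [m]
    segment_length           : (n,) segment length L [m]
    rho_ice                  : assumed ice density [kg/m^3]
    f                        : ratio of depth-averaged to surface velocity (0.93)
    Returns the discharge in Gt/yr.
    """
    v_proj = v_surf * np.cos(np.deg2rad(angle_to_gate_normal_deg))  # gate-perpendicular speed
    volume_flux = f * v_proj * thickness * segment_length           # m^3/yr per segment
    return np.sum(volume_flux) * rho_ice / 1e12                     # kg/yr -> Gt/yr

# Illustrative two-segment gate:
print(solid_ice_discharge(np.array([800.0, 600.0]), np.array([10.0, 5.0]),
                          np.array([500.0, 450.0]), np.array([1000.0, 1000.0])))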
The iceberg detection method applied to the Copernicus Sentinel-1 SAR images is the so-called CFAR (Constant False Alarm Rate) algorithm. It assumes a background intensity defined by the backscatter of the surroundings and detects targets (icebergs) as pixels whose backscatter lies significantly above this background. The CFAR algorithm examines each pixel in the SAR imagery using a “sliding window”: the pixel in question is the pixel in the centre of the window, and the background is represented by the window’s outer edge of pixels. The statistical distribution (probability density function, PDF) of the edge pixels is derived, and if the pixel in question is extremely unlikely to belong to the background distribution it is classified as a target pixel.
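The sketch below illustrates the sliding-window principle with a simple Gaussian background assumption; the operational DMI CFAR detector uses its own background statistics, window sizes and thresholds, so the window dimensions, the Gaussian PDF and the false-alarm rate used here are illustrative assumptions only.

import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import norm

def cfar_detect(intensity, outer=21, guard=9, pfa=1e-6):
    """Two-parameter (Gaussian) CFAR target detector, illustrative only.

    intensity : 2D SAR backscatter intensity image (linear units)
    outer     : full width of the background window [pixels] (assumed value)
    guard     : full width of the guard window excluded around the pixel under test
    pfa       : design probability of false alarm, converted to a z-score threshold
    Returns a boolean mask of detected target (iceberg) pixels.
    """
    n_out, n_in = outer**2, guard**2
    # Local sums over the outer window and the inner (guard) window
    sum_out = uniform_filter(intensity, outer) * n_out
    sum_in = uniform_filter(intensity, guard) * n_in
    sumsq_out = uniform_filter(intensity**2, outer) * n_out
    sumsq_in = uniform_filter(intensity**2, guard) * n_in
    # Background statistics from the ring between guard and outer windows
    n_ring = n_out - n_in
    mean_bg = (sum_out - sum_in) / n_ring
    var_bg = (sumsq_out - sumsq_in) / n_ring - mean_bg**2
    std_bg = np.sqrt(np.clip(var_bg, 0.0, None))
    threshold = norm.isf(pfa)                        # z-score for the chosen false alarm rate
    return intensity > mean_bg + threshold * std_bg  # pixels unlikely to be background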
The iceberg detection algorithm is known to be challenged in near-coastal areas for several reasons. It overestimates the iceberg area and volume, in particular for smaller icebergs, due to the relatively coarse resolution of the SAR images. At the same time, small growlers and bergy bits are not captured by the CFAR algorithm. Finally, in near-coastal areas where the volume and number of iceberg-covered pixels are large, the background intensity will include icebergs, which then remain undetected.
For this reason, we will validate the CFAR algorithm with high-resolution SPOT images, and quantify the exported freshwater based on both methods. This will be useful not only for this study but also for estimates of the volume of ice that is not detected by the CFAR algorithm.
To summarize, this presentation will:
1/ Provide estimates of the solid freshwater discharge based on a fluxgate near the Upernavik glacier outflow.
2/ Correlate the high-resolution SPOT images with the CFAR detections of icebergs near Upernavik.
3/ Compare volumes of ice based on the freshwater discharge to iceberg volumes based on estimates from both SPOT images and the CFAR algorithm.
Ice sheets store vast amounts of frozen water, capable of raising sea levels by over 60 m if fully melted. Meltwater runoff can also affect a range of glaciological and climatic processes including ocean driven melting, fjord dynamics and large-scale ocean circulation. As global temperatures continue to increase, accurately estimating the mass balance of ice sheets is vital to understanding contemporary and future sea-level changes.
Satellite altimetry measurements can provide us with continental-scale observations of surface elevation change (SEC). Once combined with firn densification models, this data record allows us to make estimates of ice mass losses to the oceans. Time series of surface elevation change, produced using altimetric data, are commonly generated in a simplistic manner, such as through averaging measurements in time. Here, we will explore the potential of employing more advanced statistical methods of time series analysis to improve the generation and interpretation of time series. One of these techniques is singular spectrum analysis (SSA), a model-free spectral estimation method for decomposing time series into the sum of different signal components. This method allows us to separate the unstructured residual components from the long-term trend and dominant oscillatory modes, such as seasonal cycles.
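For readers unfamiliar with the technique, a minimal SSA decomposition of a single time series can be sketched as follows (basic embedding, SVD and diagonal-averaging steps only; the grouping of components into trend, seasonal and residual parts is study-specific and not shown):

import numpy as np

def ssa_components(y, window):
    """Basic singular spectrum analysis: return the elementary reconstructed components.

    y      : 1D time series (e.g. SEC at one location), length N
    window : embedding window length L (1 < L < N)
    """
    n = len(y)
    k = n - window + 1
    # 1) Embedding: build the trajectory (Hankel) matrix, shape (window, k)
    traj = np.column_stack([y[i:i + window] for i in range(k)])
    # 2) Decomposition: singular value decomposition of the trajectory matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    components = []
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])
        # 3) Diagonal averaging (Hankelisation) back to a series of length N
        rc = np.array([np.mean(elem[::-1].diagonal(j)) for j in range(-window + 1, k)])
        components.append(rc)
    return np.array(components)   # summing all rows recovers the original series y

Grouping the leading component(s) as the long-term trend and pairs of near-equal singular values as oscillatory (e.g. seasonal) modes, with the remainder treated as unstructured residual, yields the separation described above.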
In this presentation, we will present two case studies that investigate how SSA can be applied to surface elevation change time series derived from satellite altimetry, which have formed part of methodological development undertaken within ESA’s Polar+ SMB feasibility study. (1) SSA shall be employed to remove noise from decade long CryoSat-2 radar altimetry SEC time series for areas of the Greenland Ice Sheet to improve their quality. The smoothed time series shall be validated against in situ and airborne datasets. (2) We will apply SSA to the long-term altimetry record for Antarctica to identify dominant periodicities longer than 2 years. This will aid our interpretation and allow us to investigate links to ocean and atmospheric circulations.
Here, we investigate the feasibility of directly measuring the variability of accumulation over the interior of the Greenland Ice Sheet using satellite radar altimetry. The principal driver of mass loss from the Greenland Ice Sheet since the early 2000s has been the decline in net surface mass balance (SMB). Traditionally, information on SMB has come from climate model simulations alone, or from sparse in situ field sites. However, seasonal elevation changes that can be measured directly using satellite altimetry are representative of different SMB processes, with net ablation observed as a drop in the surface elevation confined to the summer months, and net accumulation observed as an elevation gain. A recent study has shown it is possible to quantify ice sheet ablation and subsequent runoff using CryoSat-2. Using satellite observations in this way allows SMB parameters to be measured in near real-time, at scale, across the ice sheet and provides an independent dataset for monitoring SMB processes.
With this study we aim to quantify a second parameter of SMB, ice sheet accumulation, by exploiting the high temporal sampling of radar altimeter missions to observe seasonal elevation changes. We present a method to produce seasonal rates of elevation change that can be applied to all radar altimeter missions with high (< 35 day) repeat-track sampling. As a demonstration, we apply this method to data acquired by the Sentinel-3 radar altimeter, which has a 27-day repeat cycle, and produce estimates of elevation change at monthly temporal resolution to investigate rates of accumulation in the ice sheet interior between 2017 and 2021. Concurrently we produce time series of seasonal elevation changes from CryoSat-2. Both time series, from Sentinel-3 and CryoSat-2, are validated against in situ data at Greenland Summit. This work is a contribution to ESA’s Polar+ Earth Observation for Surface Mass Balance (EO4SMB) study.
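One common way to turn repeat-track radar altimetry into monthly elevation anomalies is a model fit that separates the local topography from time-varying elevation within each grid cell; the sketch below is a generic illustration of that idea under simple assumptions (a planar topography term and one offset per month), not the specific processing chain used in this study.

import numpy as np

def monthly_elevation_anomalies(x, y, t_month, h):
    """Generic repeat-track model fit: h = local plane(x, y) + monthly elevation offset.

    x, y    : (n,) along/across-track coordinates relative to the cell centre [m]
    t_month : (n,) integer month index of each measurement
    h       : (n,) retracked surface elevations [m]
    Returns the months present in the cell and one elevation anomaly per month.
    """
    months, month_idx = np.unique(t_month, return_inverse=True)
    design = np.zeros((len(h), 2 + len(months)))
    design[:, 0], design[:, 1] = x, y                       # local topographic slope terms
    design[np.arange(len(h)), 2 + month_idx] = 1.0          # one offset per month
    coeffs, *_ = np.linalg.lstsq(design, h, rcond=None)
    anomalies = coeffs[2:] - np.mean(coeffs[2:])            # monthly elevations about the cell mean
    return months, anomalies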
Melt of the Greenland ice sheet is a key essential climate variable of global significance, due to its impact on sea level rise and the risk of future changes to the global ocean circulation from increased freshwater output. Satellite altimetry missions such as CryoSat, ICESat-2 and AltiKa have given new insight into the sources of rapid changes, and along with the launch of Sentinel-1, -2 and -3 have generated even more spectacular results, especially for the routine mapping of ice sheet flow velocities by feature tracking and SAR interferometry. On top of this, GRACE and GRACE-FO have delivered reliable mass changes of ice sheet drainage basins, spectacularly illustrating the highly variable ice sheet melt behaviour, with record melt events in 2012 and 2019. All of these data are available through the ESA Climate Change Initiative as validated grids and time series, readily usable for more detailed investigations and research.
We illustrate the consistency of the data by performing a joint inversion of several CCI data sets, augmented with independent airborne and GNSS uplift data sets. Results across all the data sources show how the melt regions are located at the ice sheet margins and major outlet glaciers, and also how the most actively changing regions shift over time, as a function of regional changes in summer temperatures and ice dynamics. The changing ice sheet melt is compared to meteorological models of surface mass balance, further confirming the strong link between ice sheet melt and regional weather conditions.
Arctic glaciers and ice caps are currently major contributors to global sea level rise. The monitoring of smaller land-ice masses is challenging due to the high temporal and spatial resolution required to constrain their response to climate forcing. This dynamic response of land-ice to climate forcing constitutes the main uncertainty in global sea level projections for the next century. The relative significance of these forcings is currently unknown with most recent categorisations focusing on separating loss caused by internal dynamics versus surface mass balance changes, with only initial investigations into processes instigating these changes.
This leaves the specific roles of processes in the atmosphere, ocean and sea ice unconstrained. This knowledge is key to improving our projections of how these smaller land-ice masses will respond to future climate forcing and by extension their contribution to future sea level rise.
This study uses CryoSat-2 swath interferometric radar altimetry to provide high spatial and temporal resolution observations and produce elevation time series for the land-ice masses of the Svalbard Archipelago. It also utilises the regional atmospheric model MAR to obtain time series of surface mass balance. These are combined with climate datasets and, by separating the land-ice masses into land-terminating versus marine-terminating, are used to quantify the effects of different processes. Additionally, in order to observe the relative impact of atmospheric versus oceanic forcing, an ocean thermal forcing model, previously used to study Greenland’s outlet glaciers, has been initialised.
The aim of this case study is to develop a framework that will quantify the connections and processes linking loss of land-ice to processes in the ocean, atmosphere and sea ice across the Arctic region.
The grounding line positions of Antarctic glaciers are needed as an important parameter to assess ice dynamics and mass balance, in order to record the effects of climate change on the ice sheets as well as to identify the driving mechanisms behind them. To address this need, ESA’s Climate Change Initiative (CCI) produced interferometric grounding line positions as an ECV for the Antarctic Ice Sheet (AIS) in key areas. Additionally, DLR’s Polar Monitor project focuses on the generation of a near-complete circum-Antarctic grounding line. Until now these datasets have been derived from interferometric acquisitions of ERS, TerraSAR-X and Sentinel-1. Especially for some of the faster glaciers, the only available InSAR observations of the grounding line were acquired during the ERS Tandem phases (1991/92, 1994 and 1995/96).
In May 2021, a joint DLR-INTA Scientific Announcement of Opportunity was released, which offers the possibility of a joint scientific evaluation of SAR acquisitions from the German TerraSAR-X/TanDEM-X and the Spanish PAZ satellite missions. These satellites are almost identical and are operated together in a constellation, therefore offering the possibility of combining their acquisitions into SAR interferograms.
The present study will harness the interferometric capability of joint TSX and PAZ acquisitions in order to reduce the temporal decorrelation between acquisitions. The revisit times are reduced from 6 days (Sentinel-1 A/B) or 11 days (TSX) to 4 days (TSX-PAZ). Together, the higher spatial resolution compared to Sentinel-1 and the reduced temporal baseline should allow imaging the grounding line at important glaciers and ice streams where the fast ice flow causes strong deformation. These are often the glaciers where substantial grounding line migration has taken place or is suspected (e.g. Amundsen Sea Sector), but where currently available SAR constellations cannot preserve enough interferometric coherence to image the grounding line. The potential of short temporal baselines was already shown with data from the ERS Tandem phases in the AIS_cci GLL product and, more recently but only in dedicated areas, with the COSMO-SkyMed constellation [Brancato et al., 2020; Milillo et al., 2019]. In some fast-flowing regions, InSAR grounding lines could not be updated since.
For the derivation of the InSAR grounding line, two interferograms (PAZ-TSX) with a temporal baseline of 4 days will be formed. It is not necessary that the acquisitions for the two interferograms fall in consecutive cycles, but it is advantageous to acquire the data with limited overall temporal separation in order to be able to assume constant ice velocity. The ice streams where potential GLLs should be generated were identified with a focus on glaciers in the Amundsen Sea Sector (e.g. Thwaites Glacier, Pine Island Glacier) but also glaciers in East Antarctica (e.g. Totten, Lambert, Denman). Besides filling spatial or temporal gaps in the circum-Antarctic grounding line, the resulting interferograms will also be used for sensor cross-comparison with Sentinel-1-based grounding lines in areas where both constellations preserve sufficient coherence.
Brancato, V., E. Rignot, P. Milillo, M. Morlighem, J. Mouginot, L. An, B. Scheuchl, et al., Grounding Line Retreat of Denman Glacier, East Antarctica, Measured With COSMO-SkyMed Radar Interferometry Data, Geophysical Research Letters, 47(7), e2019GL086291, 2020, doi: 10.1029/2019GL086291. URL https://doi.org/10.1029/2019GL086291
Milillo, P., E. Rignot, P. Rizzoli, B. Scheuchl, J. Mouginot, J. Bueso-Bello and P. Prats-Iraola, Heterogeneous Retreat and Ice Melt of Thwaites Glacier, West Antarctica, Science Advances, 5(1), eaau3433, 2019, doi: 10.1126/sciadv.aau3433. URL https://doi.org/10.1126/sciadv.aau3433
The Northeast Greenland Ice Stream (NEGIS) extends around 600 km upstream from the coast to its onset near the ice divide in interior Greenland. Several maps of surface velocity and topography of interior Greenland exist, but their accuracy is not well constrained by in situ observations, limiting detailed studies of flow structures and shear margins near the onset of NEGIS. Here we present an assessment of a suite of satellite-based surface velocity products against GPS in an area located approximately 150 km from the ice divide near the East Greenland Ice-core Project (EastGRIP) deep drilling site (75°38’ N, 36°00’ W). For the evaluation of the satellite-based ice velocity products, we use data from a GPS mapping of surface velocity over the years 2015-2019. The GPS network consists of 63 poles and covers an area of 35 km along NEGIS and 40 km across NEGIS, including both shear margins. The GPS observations show that the ice flows with a uniform surface speed of approximately 55 m a⁻¹ within a >10 km wide central flow band, which is clearly separated from the slow-moving ice outside NEGIS by 10-20 m deep shear margins. The GPS-derived velocities cover a range between 6 m a⁻¹ and 55 m a⁻¹, with strain rates on the order of 10⁻³ a⁻¹ in the shear margins. We compare the GPS results to the Arctic Digital Elevation Model (ArcticDEM) and to 165 published and experimental remote sensing velocity products from the NASA MEaSUREs program, the ESA Climate Change Initiative, the PROMICE project and three experimental products based on data from the ESA Sentinel-1, the DLR TerraSAR-X and the USGS Landsat satellites. For each velocity product, we determine the bias and precision of the velocity compared to the GPS observations, as well as the smoothing of the velocity products needed to obtain optimal precision. The best products have a bias and a precision of ~0.5 m a⁻¹. We combine the GPS results with satellite-based products and show that ice velocity changes in the interior of NEGIS are generally below the accuracy of the satellite products. However, it is possible to detect changes in large-scale patterns of ice velocity in interior northeastern Greenland using satellite-based data that are smoothed spatially and cover very long observational periods of decades, suggesting dynamical changes in the upstream legs of the NEGIS outlets. This underlines the need for long satellite-based data records to monitor the interior part of the ice sheet and its response to climate change.
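As a rough illustration of this type of product assessment (a minimal sketch under simplifying assumptions, not the evaluation code used in this study), the snippet below computes bias and precision of a gridded speed product against GPS speeds, with an optional box-filter smoothing applied before sampling; the nearest-neighbour sampling at the pole positions and the filter choice are assumptions made for brevity.

```python
# Minimal sketch: bias and precision of a gridded velocity product vs. GPS.
import numpy as np
from scipy.ndimage import uniform_filter

def bias_and_precision(product_speed, rows, cols, gps_speed, smooth_px=0):
    """product_speed: 2-D gridded ice speed [m/a]; rows, cols: grid indices of
    the GPS poles; gps_speed: GPS-derived speeds [m/a] at those poles.
    Returns (bias, precision) as mean and standard deviation of the residuals,
    optionally after smoothing the product with a smooth_px x smooth_px box."""
    if smooth_px > 0:
        product_speed = uniform_filter(product_speed, size=smooth_px)
    residuals = product_speed[rows, cols] - gps_speed
    return np.nanmean(residuals), np.nanstd(residuals)
```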
Sentinel-3 is an Earth observation satellite series developed by the European Space Agency as part of the Copernicus Programme. It currently consists of two satellites, Sentinel-3A and Sentinel-3B, launched on 16 February 2016 and 25 April 2018, respectively. Among the on-board instruments, the satellites carry a radar altimeter to provide operational topography measurements of the Earth’s surface. Over land ice, the main objective of the Sentinel-3 constellation is to provide accurate measurements of the polar ice sheets’ topography, in particular to support ice sheet mass balance studies. Compared to many previous missions that carried conventional pulse-limited altimeters, Sentinel-3 measures the surface topography with an enhanced spatial resolution thanks to the on-board SAR Radar ALtimeter (SRAL), which exploits delay-Doppler capabilities.
To further improve the performance of the Sentinel-3 Altimetry LAND products, ESA is developing dedicated and specialised delay-Doppler and Level-2 processing chains over (1) Inland Waters (HY), (2) Sea Ice (SI) and (3) Land Ice (LI) areas. These so-called Thematic Instrument Processing Facilities (T-IPF) are currently under development, with an intended deployment by mid-2022. Over land ice the T-IPF will include new algorithms, in particular a dedicated delay-Doppler processing with an extended window. This processing allows the recovery of a greater number of measurements over ice sheets, especially over the complex topography found across the ice sheet margins.
To ensure the mission requirements are met, ESA has set up the S3 Land Mission Performance Cluster (MPC), a consortium in charge of the assessment and monitoring of the instrument and core product performance. In this poster, the Expert Support Laboratory (ESL) of the MPC presents a first performance assessment of the T-IPF Level-2 products over land ice. In particular, the benefit of the extended window processing for better monitoring the ice sheet margins is evaluated. The performance of the Sentinel-3 topography measurements is also assessed by comparison to Operation IceBridge airborne data and to other sensors such as ICESat-2 and CryoSat-2. Once the dedicated processing chain is in place for the land ice acquisitions, the Sentinel-3 STM Level-2 products will evolve and improve more efficiently over time to continuously satisfy new requirements from the Copernicus Services and the land ice community.
The grounding line location (GLL) is a geophysical product of the Antarctic Ice Sheet Climate Change Initiative (AIS_cci) project and has been derived over major ice streams and glaciers around the continent using the InSAR technique. Currently the AIS_cci GLLs span the period 1994-2020, from the ERS-1/2 era to Sentinel-1 A/B.
The position of the grounding line is not constant in time. There are different processes in the grounding zone causing shifts:
• On short time scales, the GLL moves back and forth with the vertical motion of the floating ice induced by ocean tides; the tide amplitude depends on location and atmospheric conditions.
• On long time scales, GLL migration in one direction can occur. Usually a GLL retreat is expected due to ice thinning; this phenomenon is a climate change indicator.
The multitemporal AIS_cci GLLs from the ERS Tandem and Sentinel-1 epochs show both the short-term and the long-term migration of the grounding line. These two effects must be separated before grounding line retreat observed over long time periods can be interpreted.
The regular Sentinel-1 acquisitions over Antarctica’s margins allow quantification of the short-term GLL migration at locations with preserved coherence. Time series of individual GLLs can be processed from Sentinel-1 SAR triplets acquired at various dates within the ocean tide cycle. The associated tide levels are given by models (e.g. CATS2008) at points on the ice shelf. The short-term displacements of the grounding line need a reference against which they are calculated. We build a concave hull around the GLLs and, with the support of a medial axis and lines normal to it, we define the positions of the points belonging to the averaged GLL. The displacement of the individual GLLs from this average is quantified by a polygon comparison procedure: a buffer around the reference GLL is increased until the individual GLL is completely contained within it, and the overlap and histogram statistics give the final distance (see the sketch below).
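The buffer-growing comparison can be illustrated with a short sketch, assuming the reference and individual GLLs are available as shapely LineStrings in a projected, metric coordinate system; the 50 m step size and the names are illustrative, not the actual implementation.

```python
# Minimal sketch of the buffer-based displacement between an individual GLL
# and the averaged reference GLL: the buffer around the reference is widened
# until the individual GLL is completely contained, and the resulting width
# approximates the maximum horizontal displacement.
from shapely.geometry import LineString

def gll_displacement(reference: LineString, individual: LineString,
                     step: float = 50.0, max_buffer: float = 10000.0) -> float:
    distance = step
    while distance <= max_buffer:
        if individual.within(reference.buffer(distance)):
            return distance
        distance += step
    raise ValueError("individual GLL not contained within the maximum buffer")

# Illustrative usage with synthetic lines (coordinates in metres):
ref = LineString([(0, 0), (1000, 50), (2000, 0)])
ind = LineString([(0, 120), (1000, 180), (2000, 140)])
print(gll_displacement(ref, ind))  # 150.0: first 50 m step above the ~140 m offset
```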
The resulting short-term horizontal displacements give interesting insights into the possible range of the tidally induced grounding line migration and the site-specific factors influencing its magnitude over one tidal cycle. Short time series computed over an entire year could reveal seasonal GLL variations due to the influx of ocean water under the ice shelf.
The averaging of the GLLs over one short period can further be used to investigate the long-term changes of the grounding line. The averaging is mainly feasible in the Sentinel-1 epoch, where dense GLL time series can be derived, and is less appropriate for earlier times when only single GLLs could be derived within a period short enough to exclude additional effects of ice thinning or acceleration. From single ERS and averaged Sentinel-1 GLLs we want to investigate possible grounding line retreats over the last 2.5 decades at key areas around Antarctica as signs of ice shelf instability. The surface slope, subglacial topography, ice velocity and thickness are additional parameters considered to explain why large migration occurs. The short-period average and the long-term grounding line retreat are valuable measurements contributing to the estimation of ice shelf area and area change parameters within ESA’s Polar+ Ice Shelf project.
Better understanding of the global (e.g. ice mass balance, ice motion) and local (e.g. fissures and calving processes, basal melting, sea ice interactions) dynamics of tidewater Antarctic outlet glaciers is of paramount importance for simulating the ice sheet response to global warming. The Astrolabe Glacier is located in Terre Adélie (140°E, 67°S) near the Dumont d'Urville French research station. In January 2019, a large fissure of around 3 km was observed on the western shore of the glacier, which could lead to a calving of ca. 28 km². The fissure progressively grew until November 2021, when an iceberg of 20 km² was released by the glacier outlet.
The location of the glacier outlet in the proximity of the Dumont d’Urville French research station is an asset for collecting in-situ measurements such as GNSS surveys and seismic monitoring. Satellite optical imagery also provides numerous acquisitions from the early 1990s until the end of 2021 thanks to the Landsat and Sentinel-2 missions.
We used two monitoring techniques, optical remote sensing and seismology, to analyse changes in the activity of the glacier outlet. We computed the displacement of the ice surface with the MPIC-OPT-ICE service available on the ESA Geohazards Exploitation Platform (GEP) and derived velocity and strain rates from the archive of multispectral Sentinel-2 imagery from 2017 to the end of 2021. The images of the Landsat mission are used to map the limit of the ice front and to retrieve the calving cycle of the Astrolabe. We observe that the ice front has advanced significantly toward the sea (4 km) since September 2016; such an extension is not observed in the previous years (back to 2006), although minor calving episodes occurred. The joint analysis of the seismological data and the velocity and strain maps is discussed in relation to the recent evolution of the glacier outlet. The strain maps show complex patterns of extension and compression areas with a seasonal increase during the summer months. The number of calving events detected in the seismological dataset increased significantly during 2016-2021 in comparison with the period 2012-2016. Since the beginning of 2021, both datasets show an acceleration. The number of calving events increased exponentially from June 2021 to the rupture in November 2021, and the velocity of the ice surface accelerated from 1 m day⁻¹ to 4 m day⁻¹ in the part of the glacier that detached afterwards. This calving event is the first of this magnitude documented at the Astrolabe.
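For reference, strain rates can be obtained from a gridded velocity field by differentiating the velocity components; the following minimal sketch (plain finite differences on a regular grid, with illustrative names, not the MPIC-OPT-ICE implementation) shows the principle.

```python
# Minimal sketch: strain-rate components from a gridded velocity field.
# vx, vy: velocity components [m/day] on a regular grid (axis 0 = y, axis 1 = x);
# dx: grid spacing [m]. Returns strain-rate components in day^-1.
import numpy as np

def strain_rates(vx: np.ndarray, vy: np.ndarray, dx: float):
    dvx_dy, dvx_dx = np.gradient(vx, dx)
    dvy_dy, dvy_dx = np.gradient(vy, dx)
    exx = dvx_dx                    # normal strain rate along x
    eyy = dvy_dy                    # normal strain rate along y
    exy = 0.5 * (dvx_dy + dvy_dx)   # shear strain rate
    return exx, eyy, exy
```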
In the climate system, heat is transferred between the poles and the equator through both atmospheric and oceanic circulation. One key component in transferring heat is the freshwater exchange in the Arctic, which is moderated by several elements, one of them being the export of freshwater as sea ice. To describe the heat transfer and possible temporal changes, it is vital to have accurate mapping of freshwater fluxes and their changes over time. Using Earth observation data, the volume of sea ice has been estimated from gridded sea ice thickness, sea ice concentration and ice drift velocity products, and the outflow of sea ice has been estimated through designated flux gates. However, various sea ice thickness products exist, based on a range of different methodologies and auxiliary products, all of which introduces differences in the estimated fluxes.
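Conceptually, the sea ice volume flux through a gate combines thickness, concentration and the drift component normal to the gate; the following minimal sketch (illustrative names and units, not the exact scheme of any particular study) shows the principle.

```python
# Minimal sketch: instantaneous ice volume flux through a flux gate.
# thickness [m], concentration [0-1] and drift_normal [m/s] are sampled at
# points along the gate; segment_length [m] is the along-gate spacing.
import numpy as np

def gate_volume_flux(thickness, concentration, drift_normal, segment_length):
    """Returns the volume flux through the gate in m^3/s."""
    return np.sum(thickness * concentration * drift_normal * segment_length)
```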
This study aims at estimating the impact that different retrieval methodologies and snow products have on the pan-Arctic sea ice thickness distribution and, consequently, on the derived sea ice outflow when different gridded sea ice thickness products are used as input in the sea ice outflow computations. We utilise three different radar freeboard products derived from CryoSat-2 observations: the Threshold First Maximum Retracker Algorithm with a threshold of 50% (TFMRA50), the Log-normal Altimeter Retracker Model (LARM), and Synthetic Aperture Radar (SAR) Altimetry MOde Studies and Applications over ocean (SAMOSA+). These are used to compute sea ice thickness estimates in combination with five different snow depth products: modified Warren 1999 (mW99), W99 fused with the Advanced Microwave Scanning Radiometer 2 (W99/AMSR2), SnowModel, the NASA Eulerian Snow On Sea Ice Model (NESOSIM) and the Altimetric Snow Depth (ASD), during the winter (November-April) of 2014-2015, resulting in 15 different sea ice thickness products for each month.
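The freeboard-to-thickness conversion behind such products follows hydrostatic equilibrium; the sketch below uses typical density values and a simple snow wave-speed correction, which are assumptions rather than the exact parameters of each product.

```python
# Minimal sketch: sea ice thickness from radar freeboard and snow depth under
# hydrostatic equilibrium. Density values are typical assumptions.
import numpy as np

RHO_WATER = 1024.0  # kg m-3, sea water (assumed)
RHO_ICE = 917.0     # kg m-3, sea ice (assumed)
RHO_SNOW = 320.0    # kg m-3, snow (assumed)

def sea_ice_thickness(radar_freeboard, snow_depth):
    """radar_freeboard, snow_depth in metres (scalars or numpy arrays).
    The radar freeboard is first corrected for the slower radar wave
    propagation in the snow layer (~0.22 * snow depth); hydrostatic
    equilibrium then gives the ice thickness."""
    ice_freeboard = radar_freeboard + 0.22 * snow_depth
    return (RHO_WATER * ice_freeboard + RHO_SNOW * snow_depth) / (RHO_WATER - RHO_ICE)
```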
We compare the derived sea ice thickness products to investigate the differences that retrieval methodologies and snow depth products introduce in the sea ice thickness distribution. Furthermore, we investigate the impact that these differences have on sea ice volume fluxes, which are further compared with outflow estimates from previous studies. We also discuss how different sea ice drift estimates (based on either high-resolution SAR or passive-microwave low-resolution drift observations) and the selection of the flux gate can impact the estimated volume fluxes. Finally, we derive the related freshwater fluxes and compare how the choice of retrieval method and auxiliary data products affects the results.