FORUM will be ESA's 9th Earth Explorer mission. It will measure from space the Earth's upwelling spectral radiance in the 100–1600 cm⁻¹ range (corresponding to wavelengths from 100 µm down to 6.25 µm). One of FORUM's main targets is the retrieval of surface spectral emissivity in the Far-InfraRed (FIR) region from measurements made in dry atmospheres, specifically at high latitudes and, possibly, over deserts. The measurements will contribute to improving the current emissivity models and to constraining climate models in Polar regions. In clear-sky conditions a joint retrieval of surface temperature, surface emissivity and vertical profiles of temperature and water vapor can be attempted.
The sensitivity of the measured spectral radiance to surface emissivity depends on the water vapor content. In dry atmospheres the sensitivity of the FIR region adds to that of the atmospheric windows in the Thermal InfraRed. The joint retrieval of surface temperature and surface emissivity, however, raises an issue: the dependence of the spectrum on the two variables is very similar, since the Planck function depends almost linearly on temperature for small increments. This effect can also be seen in the correlation matrix of the retrieval solution. In the figure, the correlation between the surface temperature and the emissivity parameters is plotted as a function of the emissivity wave numbers. Two cases are presented: the desert case (purple line, Case 1.1) and the water case (green line, Case 2.1).
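As an illustration of how such a diagnostic can be obtained, the following minimal sketch (not the actual FORUM Level-2 code) converts an a-posteriori covariance matrix of the retrieved parameters into the correlation matrix discussed above; the parameter ordering is hypothetical.

```python
import numpy as np

def correlation_matrix(S):
    """Convert a covariance matrix S into the corresponding correlation matrix."""
    sigma = np.sqrt(np.diag(S))          # 1-sigma errors of the retrieved parameters
    return S / np.outer(sigma, sigma)    # C_ij = S_ij / (sigma_i * sigma_j)

# Example: with surface temperature as the first state element (hypothetical
# ordering), its correlations with the emissivity grid points are read from
# the first row/column of the returned matrix.
```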
To avoid compensation errors between surface temperature and emissivity, the emissivity retrieval grid should be optimized, balancing random and smoothing errors. Moreover, within the Bayesian framework of the optimal estimation approach, the a-priori state should take into account the geolocation of the measurement while allowing for the natural atmospheric variability through a sufficiently large a-priori error. Also, to avoid data contamination at the edges of the spectral windows that are sensitive to the surface state, correlation lengths (i.e. off-diagonal terms in the a-priori error covariance matrix) should be kept minimal, if not avoided altogether. Larger a-priori errors and the lack of correlation between adjacent a-priori values imply that the regularizing effect of the a-priori is very weak. Thus, while this approach reduces the bias towards the a-priori, the retrieved profile may show oscillations due to the ill-conditioning of the inversion of the radiative transfer equation. Therefore, an a-posteriori regularization technique should be applied to the retrieved spectral emissivity.
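A minimal sketch of how an a-priori error covariance matrix with an exponential correlation length could be constructed; the grid, errors and length scale below are illustrative placeholders, not the values adopted in the FORUM algorithm.

```python
import numpy as np

def apriori_covariance(wavenumbers, sigma_apriori, corr_length=0.0):
    """Build an a-priori error covariance matrix for the emissivity grid.

    corr_length = 0 gives a purely diagonal matrix (no inter-channel
    correlation), i.e. the weakly regularizing choice discussed above.
    """
    nu = np.asarray(wavenumbers, dtype=float)
    sig = np.asarray(sigma_apriori, dtype=float)
    if corr_length <= 0.0:
        return np.diag(sig**2)
    corr = np.exp(-np.abs(nu[:, None] - nu[None, :]) / corr_length)
    return np.outer(sig, sig) * corr

# Illustrative emissivity grid (cm^-1) with a 0.02 a-priori error per grid point
grid = np.arange(300.0, 1000.0, 50.0)
S_a = apriori_covariance(grid, 0.02 * np.ones_like(grid), corr_length=0.0)
```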
Based on simulated observations, in this work we illustrate the optimized constraints implemented in our FORUM inversion algorithm and assess the performance of the retrieval products.
Developed in the framework of several projects supporting the IASI mission [1], σ-IASI is a forward model designed for the fast calculation of the radiance of nadir-looking hyperspectral instruments and of its derivatives with respect to atmospheric and spectroscopic parameters [2]. The σ-IASI module is a monochromatic radiative transfer model based on a look-up table of optical depths parametrized as a polynomial in the atmospheric temperature and constituent concentrations. The look-up table is built from the current version of LBLRTM (Line-By-Line Radiative Transfer Model) [3], but the model can use other line-by-line models and different spectroscopic parameters. This strategy enables fast, accurate calculations of radiance and analytical derivatives while preserving the flexibility that allows the model to be applied straightforwardly to any hyperspectral instrument operating in the Thermal Infrared.
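The look-up-table idea can be illustrated with a toy sketch; the polynomial order, predictors and coefficients below are invented for illustration only, while the actual σ-IASI parameterization is described in [2].

```python
import numpy as np

def layer_optical_depth(coeffs, T, q):
    """Toy monochromatic layer optical depth: pre-computed polynomial
    coefficients evaluated in temperature T and scaled with absorber amount q.
    Illustrative only; not the sigma-IASI formulation."""
    return q * np.polyval(coeffs, T)

# Illustrative use: second-order polynomial in temperature for one channel/layer
tau = layer_optical_depth(np.array([1.0e-7, -4.0e-5, 6.0e-3]), T=260.0, q=1.2)
```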
Thanks to this flexibility with respect to both the instrument characteristics and the spectroscopic parameters, the forward model has been applied successfully to the retrieval of atmospheric [4] and surface [5] parameters and to spectroscopy validation [6], using IASI measurements as well as other interferometers and radiometers such as AIRS [7] (the NASA Atmospheric InfraRed Sounder), NAST-I [8] (the NPOESS Aircraft Sounder Testbed-Interferometer), IMG [9] (the Japanese Interferometric Monitor for Greenhouse gases), and REFIR [10] (the Radiation Explorer in the Far InfraRed).
In this work, we present the extension of the forward model to the far-infrared. With this extension, the spectral coverage goes from 5 to 3000 cm⁻¹, making the model suitable for IASI-NG [11] and for FORUM [12] (Far-infrared Outgoing Radiation Understanding and Monitoring), the instrument of the 9th ESA Earth Explorer mission. FORUM is expected to cover the range 100 to 1600 cm⁻¹, which includes the far-infrared spectral region of the Earth emission spectrum, of paramount interest for the water vapour and cirrus cloud processes affecting climate and global warming.
The present version of the model accounts for ice and water clouds and aerosols by representing their multiple scattering and absorption properties with an improved, analytical parameterization of the so-called Chou approximation. Thanks to this original parameterization [13], the new version of σ-IASI is the only fast forward model capable of computing analytical Jacobian derivatives with respect to ice and water content concentrations and to the effective radius. Thus, the new σ-IASI model enables the retrieval of cloud microphysical properties.
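As a reference for the scaling idea, here is a minimal sketch of the textbook Chou-type similarity scaling; the improved analytical form actually used in the new σ-IASI version is the one described in [13], not this snippet.

```python
def chou_scaled_optical_depth(tau_ext, ssa, backscatter_fraction):
    """Scale the extinction optical depth of a scattering layer so that an
    absorption-only radiative transfer solver approximates multiple scattering
    (Chou-type similarity scaling).

    tau_ext : extinction optical depth of the cloud/aerosol layer
    ssa     : single scattering albedo (omega)
    backscatter_fraction : fraction b of the scattered radiation redirected backwards
    """
    return tau_ext * (1.0 - ssa * (1.0 - backscatter_fraction))
```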
References
[1] F. Hilton et al., "Hyperspectral Earth Observation from IASI: Five Years of Accomplishments", Bull. Am. Meteorol. Soc., vol. 93, no. 3, pp. 347–370, Mar. 2012, doi: 10.1175/BAMS-D-11-00027.1.
[2] U. Amato, G. Masiello, C. Serio, and M. Viggiano, "The σ-IASI code for the calculation of infrared atmospheric radiance and its derivatives", Environ. Model. Softw., vol. 17, no. 7, pp. 651–667, Nov. 2002, doi: 10.1016/S1364-8152(02)00027-0.
[3] S. A. Clough et al., "Atmospheric radiative transfer modeling: a summary of the AER codes", J. Quant. Spectrosc. Radiat. Transf., vol. 91, no. 2, pp. 233–244, Mar. 2005, doi: 10.1016/j.jqsrt.2004.05.058.
[4] C. Serio, G. Masiello, and G. Liuzzi, "Demonstration of random projections applied to the retrieval problem of geophysical parameters from hyper-spectral infrared observations", Appl. Opt., vol. 55, no. 24, p. 6576, Aug. 2016, doi: 10.1364/AO.55.006576.
[5] G. Masiello, C. Serio, S. Venafra, G. Liuzzi, L. Poutier, and F.-M. Göttsche, "Physical Retrieval of Land Surface Emissivity Spectra from Hyper-Spectral Infrared Observations and Validation with In Situ Measurements", Remote Sens., vol. 10, no. 6, p. 976, Jun. 2018, doi: 10.3390/rs10060976.
[6] G. Liuzzi, G. Masiello, C. Serio, S. Venafra, and C. Camy-Peyret, "Physical inversion of the full IASI spectra: Assessment of atmospheric parameters retrievals, consistency of spectroscopy and forward modelling", J. Quant. Spectrosc. Radiat. Transf., vol. 182, pp. 128–157, Oct. 2016, doi: 10.1016/j.jqsrt.2016.05.022.
[7] R. Saunders et al., "A comparison of radiative transfer models for simulating Atmospheric Infrared Sounder (AIRS) radiances", J. Geophys. Res., vol. 112, no. D1, p. D01S90, Jan. 2007, doi: 10.1029/2006JD007088.
[8] G. Grieco, G. Masiello, M. Matricardi, C. Serio, D. Summa, and V. Cuomo, "Demonstration and validation of the φ-IASI inversion scheme with NAST-I data", Q. J. R. Meteorol. Soc., vol. 133, no. S3, pp. 217–232, Dec. 2007, doi: 10.1002/qj.162.
[9] A. M. Lubrano, G. Masiello, M. Matricardi, C. Serio, and V. Cuomo, "Retrieving N2O from nadir-viewing infrared spectrometers", Tellus B Chem. Phys. Meteorol., vol. 56, no. 3, pp. 249–261, Jan. 2004, doi: 10.3402/tellusb.v56i3.16418.
[10] C. Serio et al., "Retrieval of foreign-broadened water vapor continuum coefficients from emitted spectral radiance in the H2O rotational band from 240 to 590 cm⁻¹", Opt. Express, vol. 16, no. 20, p. 15816, Sep. 2008, doi: 10.1364/OE.16.015816.
[11] C. Crevoisier et al., "Towards IASI-New Generation (IASI-NG): impact of improved spectral resolution and radiometric noise on the retrieval of thermodynamic, chemistry and climate variables", Atmos. Meas. Tech., vol. 7, no. 12, pp. 4367–4385, Dec. 2014, doi: 10.5194/amt-7-4367-2014.
[12] M. Ridolfi et al., "FORUM Earth Explorer 9: Characteristics of Level 2 Products and Synergies with IASI-NG", Remote Sens., vol. 12, no. 9, p. 1496, May 2020, doi: 10.3390/rs12091496.
[13] M. Martinazzo, D. Magurno, W. Cossich, C. Serio, G. Masiello, and T. Maestri, "Assessment of the accuracy of scaling methods for radiance simulations at far and mid infrared wavelengths", J. Quant. Spectrosc. Radiat. Transf., vol. 271, p. 107739, Sep. 2021, doi: 10.1016/j.jqsrt.2021.107739.
Over the last three decades, the study and analysis of cloud microphysics have received increasing attention to better understand cloud feedbacks on climate. Today, satellite remote sensing is the leading approach to infer cloud properties [1]–[3]. Using optical instruments operating in the visible and infrared bands, a variety of tools has been developed to retrieve cloud microphysical properties such as the effective radius of water and ice clouds [4]–[6].
The forthcoming launch of new advanced high-spectral-resolution satellite sensors, such as the IASI-New Generation (IASI-NG) sounder [7], promises to provide more accurate estimates of cloud microphysical parameters. Because of the high dimensionality of the data space, innovative processing methods are needed, such as those based on Artificial Intelligence (AI) and related tools, which can run on large databases and handle hundreds of input variables without variable deletion.
In this work, we present an assessment of the performance of a statistical regression scheme for the effective radius of liquid- and ice-water clouds. The tool has been implemented using a random forest (RF) regressor [8], [9]. RFs have been applied in many research areas, including remote sensing of clouds [10], e.g. cloud detection and classification, because of their ability to use all potentially predictive features. The methodology has been trained and validated with a set of simulated IASI-NG L1C observations covering the global scale. ERA5-ECMWF atmospheric and surface variables are used as the "state vector", from which simulated IASI-NG observations are obtained with the state-of-the-art σ-IASI forward model. A regression framework in which principal component analysis (PCA) scores of the IASI-NG radiances feed the RF regressors has been implemented. The optimal number and ordering of the input principal components (scores) were selected by exploiting the RF methodology itself, which constructs a multitude of decision trees at training time and outputs the variables that are most important for the regression. Using this framework, the supervised learning of the liquid and ice cloud effective radii was carried out. In conclusion, the regression analysis shows good agreement between reference and retrieved effective radius, with 80% correlation and a root-mean-square error (RMSE) of 0.78 for the liquid and 1.15 for the ice cloud effective radius.
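A minimal sketch of the PCA-plus-random-forest regression chain described above; the file names, hyperparameters and number of retained components are illustrative, not those tuned for IASI-NG.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# radiances: (n_samples, n_channels) simulated IASI-NG L1C spectra
# r_eff:     (n_samples,) reference effective radii from the ERA5-based state vector
radiances = np.load("iasi_ng_simulated_radiances.npy")    # hypothetical file names
r_eff = np.load("effective_radius_reference.npy")

X_train, X_test, y_train, y_test = train_test_split(
    radiances, r_eff, test_size=0.2, random_state=0)

pca = PCA(n_components=50).fit(X_train)                    # compress spectra into scores
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(pca.transform(X_train), y_train)

# Feature importances indicate which principal components carry the most information
ranked_scores = np.argsort(rf.feature_importances_)[::-1]
rmse = np.sqrt(np.mean((rf.predict(pca.transform(X_test)) - y_test) ** 2))
```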
REFERENCES
[1] S. Fritz and J. S. Winston, "Synoptic use of radiation measurements from satellite TIROS II", Mon. Weather Rev., vol. 90, no. 1, pp. 1–9, Jan. 1962.
[2] W. L. Smith and C. M. R. Platt, "Comparison of Satellite-Deduced Cloud Heights with Indications from Radiosonde and Ground-Based Laser Measurements", J. Appl. Meteorol. Climatol., vol. 17, no. 12, pp. 1796–1802, Dec. 1978.
[3] W. P. Menzel, W. L. Smith, and T. R. Stewart, "Improved Cloud Motion Wind Vector and Altitude Assignment Using VAS", J. Appl. Meteorol. Climatol., vol. 22, no. 3, pp. 377–384, Mar. 1983.
[4] T. Nakajima and M. D. King, "Determination of the Optical Thickness and Effective Particle Radius of Clouds from Reflected Solar Radiation Measurements. Part I: Theory", J. Atmospheric Sci., vol. 47, no. 15, pp. 1878–1893, Aug. 1990.
[5] Q. Han, W. B. Rossow, and A. A. Lacis, "Near-Global Survey of Effective Droplet Radii in Liquid Water Clouds Using ISCCP Data", J. Clim., vol. 7, no. 4, pp. 465–497, Apr. 1994.
[6] H. Iwabuchi, M. Saito, Y. Tokoro, N. S. Putri, and M. Sekiguchi, "Retrieval of radiative and microphysical properties of clouds from multispectral infrared measurements", Prog. Earth Planet. Sci., vol. 3, no. 1, p. 32, Oct. 2016.
[7] C. Crevoisier et al., "Towards IASI-New Generation (IASI-NG): impact of improved spectral resolution and radiometric noise on the retrieval of thermodynamic, chemistry and climate variables", Atmos. Meas. Tech., vol. 7, no. 12, pp. 4367–4385, Dec. 2014.
[8] L. Breiman, "Random Forests", Mach. Learn., vol. 45, no. 1, pp. 5–32, Oct. 2001, doi: 10.1023/A:1010933404324.
[9] T. K. Ho, "Random decision forests", in Proceedings of the 3rd International Conference on Document Analysis and Recognition, Aug. 1995, vol. 1, pp. 278–282, doi: 10.1109/ICDAR.1995.598994.
[10] M. Belgiu and L. Drăguţ, "Random forest in remote sensing: A review of applications and future directions", ISPRS J. Photogramm. Remote Sens., vol. 114, pp. 24–31, Apr. 2016, doi: 10.1016/j.isprsjprs.2016.01.011.
The Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission, in preparation as ESA's 9th Earth Explorer (EE9), is scheduled for launch in 2027. For the first time, spectrally resolved radiance observations covering the Far InfraRed (FIR) band from 100 to 667 cm⁻¹, as well as the Middle InfraRed (MIR) from 667 to 1600 cm⁻¹, will be available with global coverage, high spectral resolution and high radiometric accuracy. The FORUM spectral region covers two emission bands of the water vapour isotopologue HDO (roughly 100–600 cm⁻¹ and 1000–1600 cm⁻¹), which can be used for the retrieval of its vertical profile.
A deep knowledge of HDO is necessary to assess and improve the representation of water vapour-related processes. In particular, by helping to couple the atmospheric circulation and precipitation components of climate models, knowledge of water vapour isotopologues can significantly improve numerical weather prediction. Indeed, it was shown [1] that the assimilation of water vapour isotopologues improves wind, humidity and temperature field predictions in the middle troposphere by more than 10%.
We carried out a feasibility study to evaluate the capability of FORUM measurements to provide information on the H₂¹⁶O and HDO profiles. Investigations have also targeted the δD parameter, which represents the fractional deviation (in ‰) of the HDO/H₂¹⁶O ratio from the standard reference ratio. Since FORUM will fly in loose formation with IASI-NG, the improvement achieved when matching measurements of the two sensors are exploited synergistically has also been assessed. For this study, both the synthetic observations and the retrievals were produced with the KLIMA code developed at IFAC-CNR in Florence (Italy). The study involves several atmospheric scenarios and exploits both individual and averaged observations. The quality of the retrieval is evaluated in terms of degrees of freedom, precision, and accuracy.
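For reference, the conventional definition of δD against the VSMOW standard can be written as follows (the numerical reference ratio quoted is the commonly used VSMOW HDO/H₂¹⁶O value; the exact convention adopted in this study may differ):

δD = (R / R_VSMOW − 1) × 1000 ‰,  with R = [HDO]/[H₂¹⁶O] and R_VSMOW ≈ 3.1152 × 10⁻⁴.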
Finally, we applied our analysis system to FORUM-like measurements, such as those acquired by REFIR-PAD (Radiation Explorer in the Far InfraRed-Prototype for Applications and Development) during a balloon campaign from Teresina (Brazil, June 2005), and those acquired from ground-based stations by FIRMOS (Far-Infrared Radiation Mobile Observation System) during the Zugspitze campaign (2019).
[1] K. Toride, K. Yoshimura, M. Tada, C. Diekmann, B. Ertl, F. Khosrawi, and M. Schneider, "Potential of Mid-tropospheric Water Vapor Isotopes to Improve Large-Scale Circulation and Weather Predictability", Geophys. Res. Lett., doi: 10.1029/2020GL091698, 2021.
The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) is an infrared imaging FTS with a 2-D detector operated on two high-flying research aircraft, the German HALO and the Russian Geophysica, and, most recently, on stratospheric balloons. It has flown on eight scientific campaigns and successfully collected two- and three-dimensional data of infrared radiance and, after (tomographic) level 2 processing, of temperature, water vapor, ozone, (H)CFCs, biomass burning tracers, cirrus clouds, etc., along more than 300 000 km of flight track. GLORIA also serves as a demonstrator for the Earth Explorer 11 candidate mission CAIRT.
This poster details our instrument calibration and characterization efforts, which rely almost exclusively on in-flight data. We present the framework of our new calibration scheme, which uses information from all three available calibration measurements (two temperature-controlled blackbodies and upward-pointing "deep space" measurements). Part of this scheme is a new correction algorithm that leverages spatial information from the 2-D images to correct for the erratically changing non-linearity of a subset of detector pixels and to identify the remaining bad pixels.
Using this new calibration, we derive a 1-σ bound of 1 % on the instrumental gain error and a bound of 30 nW cm⁻² sr⁻¹ cm on the instrumental offset error. We show how the noise and spectral accuracy can be examined for all measured atmospheric spectra and derive a spectral accuracy of 5 ppm on average. All these errors are compliant with the initial instrument requirements. We also discuss the pointing system of the GLORIA instrument. Combining laboratory calibration efforts with measurements of astronomical bodies during flight, we derive a pointing accuracy of 0.032°, which corresponds to one detector pixel.
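A minimal sketch of the underlying two-point radiometric calibration step; the real GLORIA scheme combines all three calibration sources and the nonlinearity correction described above, so this is only the textbook gain/offset part with generic variable names.

```python
import numpy as np

def two_point_calibration(spec_hot, spec_cold, planck_hot, planck_cold):
    """Derive gain and offset from two blackbody views.

    spec_hot, spec_cold     : uncalibrated spectra of the two blackbody views
    planck_hot, planck_cold : Planck radiances at the blackbody temperatures
    """
    gain = (spec_hot - spec_cold) / (planck_hot - planck_cold)
    offset = spec_hot / gain - planck_hot      # instrument self-emission term
    return gain, offset

def calibrate(spec_atmos, gain, offset):
    """Convert an uncalibrated atmospheric spectrum into calibrated radiance."""
    return spec_atmos / gain - offset
```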
We conclude with a brief study on how these newly characterized instrumental parameters affect temperature and ozone retrievals. We find that the pointing uncertainty introduces the largest error in the results, followed by the instrumental gain uncertainty.
Knowledge of the emissivity of the Earth's surface is vitally important for the prediction of future climate. The emissivity of important surface types, including ocean, ice, and desert, is well known in the mid-infrared (wavelengths of 8 to 15 microns); however, very few measurements of emissivity have been made in the far-infrared (wavelengths longer than 15 microns). Recent modelling studies carried out with theoretical emissivity values have indicated that surface emissivity in the far-infrared can have a significant impact on the surface temperature reported by global climate models and that including realistic emissivity values can reduce observed biases, particularly in polar regions. Far-infrared emissivity measurements will also be needed to validate emissivity retrievals made by the European Space Agency's Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission. FORUM is ESA's 9th Earth Explorer satellite and will make spectrally resolved measurements of the outgoing longwave radiation in the far-infrared for the first time.
To address this lack of measurements, we have combined a new front-end calibration and automated scene-view system with a commercial Bruker EM27 spectrometer extended beyond the mid-infrared to cover the spectral range 400–2000 cm⁻¹. The purpose of the instrument is to perform in-situ measurements of emissivity in the mid- and far-infrared, concentrating particularly on snow and ice surfaces. In this presentation, we will introduce the instrument and calibration system. We will show retrievals of mid-to-far-infrared emissivity for pure water derived from radiance measurements. We will also discuss the overall sensitivity and calibration uncertainties relative to theoretical models.
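A minimal sketch of the standard radiance-to-emissivity step for a ground-viewing instrument, assuming the simple surface radiative balance L_up = εB(T_s) + (1 − ε)L_down; the actual retrieval includes the full calibration chain and uncertainty propagation, and all symbols below are generic.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_wavenumber(nu_cm, T):
    """Planck spectral radiance in W/(m^2 sr cm^-1) at wavenumber nu_cm (cm^-1)."""
    nu = nu_cm * 100.0                                      # convert to m^-1
    B = 2.0 * H * C**2 * nu**3 / np.expm1(H * C * nu / (KB * T))
    return B * 100.0                                        # per cm^-1 instead of per m^-1

def emissivity(L_up, L_down, nu_cm, T_surface):
    """Spectral emissivity from measured upwelling and downwelling radiance."""
    B = planck_wavenumber(nu_cm, T_surface)
    return (L_up - L_down) / (B - L_down)
```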
Wildfires are a cause of growing concern for air quality and climate. They represent a large source of emissions of gas-phase and particulate species. As a consequence of global warming, the spatial extent, severity and duration of wildfires are likely to increase, with current indications suggesting that this may already be happening. Wildfires are a truly global phenomenon, with large-scale occurrences on every continent except Antarctica.
As an emission source, wildfires are highly complex, with a large variety of primary and secondary particulates and gases. Particulates are not expected to be long-lived, and so have an important, yet mostly short-term impact on climate and air quality. Conversely, the gas-phase species exhibit a range of lifetimes. Some are very short-lived such as furans, polyenes and other unsaturated species, which possess lifetimes of less than a day. Others could be long-lived, such as hydrogen cyanide (HCN), isocyanic acid (HNCO) and acetonitrile (CH3CN), which can persist for several years. Some species, such as the peroxyacyl nitrates, possess short lifetimes in the boundary layer, but may become long-lived as they are lofted.
The main factors that govern the persistence of these chemicals are their reaction rates with atmospheric oxidants, their photolysis rates and their wet deposition rates. The detection and quantification of these molecules using remote sensing techniques will depend on their spectral characteristics and the availability of infrared absorption cross sections. In each case, it is therefore necessary to obtain high-quality laboratory data on the chemical behaviour and spectroscopy of wildfire-related chemicals if we are to understand their impact and detect them in a quantitative way. Such data are not currently available in some cases, and the purpose of this presentation will be to assess the extent to which major classes of wildfire chemicals are covered by existing laboratory data, and where important gaps remain.
The UK Earth System Model (UKESM), the UK's contribution to CMIP6, is an advanced climate model that includes a range of coupled atmospheric, oceanic and land interactions, including aerosol processes (doi:10.1029/2019MS001739). The quality of the aerosol representation within a model is traditionally evaluated against aggregates of satellite and/or ground-based data (e.g. MODIS and AERONET), by which the UKESM has been shown to perform reasonably (doi:10.5194/gmd-13-6383-2020). The work of N. Schutgens and others has outlined methods of comparison that account for the fundamental difference between a model's grid-cell average and the average of observations that are affected by noise and contamination (summarised in doi:10.5194/acp-20-12431-2020), but this often requires high-resolution model output that is expensive to produce and manipulate.
This talk outlines a different method of comparison, intended to evaluate standard model outputs, that relies on a statistical analysis of a range of independent satellite datasets to quantify the distribution of aerosol optical depth at two wavelengths. The histogram-fitting method will be introduced through an examination of the evidence that aerosol optical depth is log-normally distributed (such that geometric means are more appropriate than arithmetic ones) and that the Angstrom exponent is normally distributed (though it can be skewed by unbalanced uncertainties in the underlying observations). Several of the most climatologically important aerosol regimes, such as the Saharan dust outflow, are found to require multiple aerosol modes to explain the long-term behaviour, and the proposed evaluation method captures that variability better than other analyses. The advantages of multi-wavelength evaluation, as opposed to the Angstrom exponent, will be outlined. The data ensemble generated will be made publicly available for use throughout the aerosol modelling community.
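A minimal sketch of the log-normal fitting and geometric-mean statistics described above; the file name and data are placeholders, not the actual ensemble produced in this work.

```python
import numpy as np
from scipy import stats

aod = np.load("aod_samples.npy")          # hypothetical set of valid AOD retrievals
aod = aod[aod > 0]                        # log-normal statistics require positive values

# Geometric mean and geometric standard deviation of AOD
log_aod = np.log(aod)
geo_mean = np.exp(log_aod.mean())
geo_std = np.exp(log_aod.std())

# Fit a log-normal distribution and test it against the empirical sample
shape, loc, scale = stats.lognorm.fit(aod, floc=0.0)
ks_stat, p_value = stats.kstest(aod, "lognorm", args=(shape, loc, scale))
```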
The presentation will conclude with an evaluation of aerosols in the UKESM, comparing results using the proposed method to traditional and representivity-aware evaluation techniques to highlight the nature of errors that each can identify and the areas that observational data cannot adequately constrain at the moment.
We compare top-of-atmosphere (TOA) clear-sky reflected shortwave (SW) fluxes observed by the Clouds and the Earth's Radiant Energy System (CERES) and simulated by nine AeroCom models participating in the phase III control experiment. We also compare aerosol optical depths (AOD) and land surface albedos from these models with satellite products to understand the causes of the SW flux bias. Radiative kernels of AOD and land surface albedo are used to quantify their corresponding contributions to the SW flux bias. Over ocean, AOD contributes about 25% to the 60°S–60°N mean SW flux bias for the multi-model mean (MMM) result. Over land, AOD and land surface albedo contribute about 40% and 30%, respectively, to the 60°S–60°N mean SW flux bias for the MMM result. Furthermore, the spatial patterns of the SW flux biases derived from the radiative kernels are very similar to those between models and the CERES observations, with correlation coefficients of 0.6 over ocean and 0.76 over land for the MMM using data from 2010. High correlations also exist for all models considered in this study, and the agreement with the CERES TOA SW flux improves for most models after accounting for the contributions of AOD and land surface albedo to the TOA SW flux biases. The satellite data used in this evaluation are derived independently from each other; the consistency of their bias patterns when compared with model simulations suggests that these patterns are robust. This highlights the importance of evaluating related variables in a synergistic manner to provide an unambiguous assessment of the models, as results from single-parameter assessments are often confounded by measurement uncertainty. Model biases in land surface albedos can and must be corrected to accurately calculate the TOA flux. We also compare the AOD trends from three models with their observation-based counterparts. These models reproduce all notable trends in AOD (i.e. the decreasing trend over the eastern United States and the increasing trend over India) except the decreasing trend over eastern China and the adjacent oceanic regions, due to limitations in the emission data set.
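A minimal sketch of how a kernel decomposition attributes the SW flux bias; the gridded fields and kernel files are placeholders, and the study's kernels are computed with a radiative transfer model rather than loaded from generic arrays.

```python
import numpy as np

# Hypothetical gridded fields (lat x lon), e.g. annual means for 2010
aod_model = np.load("aod_model.npy")
aod_obs = np.load("aod_satellite.npy")
albedo_model = np.load("albedo_model.npy")
albedo_obs = np.load("albedo_satellite.npy")

# Radiative kernels: sensitivity of TOA SW flux (W m^-2) per unit change of each variable
k_aod = np.load("kernel_aod.npy")
k_albedo = np.load("kernel_albedo.npy")

# Contribution of each variable's bias to the TOA SW flux bias
flux_bias_from_aod = k_aod * (aod_model - aod_obs)
flux_bias_from_albedo = k_albedo * (albedo_model - albedo_obs)
```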
Several studies have shown that aerosol retrieval from satellites is strongly affected by cloud contamination errors and cloud enhancement. In the transition zone between clouds and cloud-free air, cloud enhancement leads to an increase of the aerosol optical thickness (AOT) and to changes in the aerosol particle size. The choice of the cloud mask to be used in aerosol retrieval applications is thus critical.
The ESA Aerosol-CCI project showed the effect of applying a common cloud mask to different aerosol retrieval algorithms, as well as the impact of enlarging the "safety zone" around clouds to reduce possible enhancement effects (Holzer-Popp et al., 2013). In the merging exercise led by Larissa Sogacheva (Sogacheva et al., 2020), 16 monthly AOT products were combined, and the different cloud masks used by the selected algorithms were identified as one of the sources of discrepancies among the AOT products.
To avoid cloud contamination effects, the cloud mask used in aerosol retrieval applications is usually conservative, and the safety zone around clouds extends for a few kilometers. For instance, in one of the Aerosol-CCI experiments, pixels at less than 10 km distance from clouds are discarded by the aerosol retrieval algorithms. This results in poor AOT spatial coverage. Cloud effects on aerosol retrieval become more important at higher resolution, e.g. 1 km, where 3D effects can no longer be neglected.
Over the past few years, efforts have been made to reduce the high bias in AOT retrieval due to cloud enhancement while improving the aerosol spatial coverage. In 2017, Sogacheva et al. developed a cloud post-processing (CPP) method to remove residual clouds from the aerosol optical depth product. When the CPP is applied to the AATSR aerosol retrieval, the AOT spatial coverage increases by 10–15%. Nevertheless, the algorithm shows some limitations in discriminating high AOTs from clouds, especially for local aerosol events.
A new method has been developed to address some of the cloud-related issues in aerosol retrieval. The CISAR algorithm (Govaerts and Luffarelli, 2018; Luffarelli and Govaerts, 2019) has been extended to the retrieval of cloud optical properties to overcome the need for an external cloud mask. After a so-called training period, the CISAR algorithm processes all available satellite observations, i.e. both cloudy and cloud-free scenes. The new CISAR version, developed in the framework of the ESA SEOM CIRCAS and ESA Aerosol-CCI+ projects, has been applied to S3A/SLSTR observations aggregated at 10 km. Aerosols are retrieved in the vicinity of clouds as well as within optically thin clouds, ensuring a larger spatial coverage than traditional aerosol retrieval algorithms.
Several constraints are applied to the temporal, spatial and spectral variability of the state variables (surface reflectance model parameters, aerosol and cloud single scattering properties) to balance the information coming from the observations. The latter is quantitatively analysed through the Jacobian, i.e. the partial derivatives of the signal with respect to the state variables. In the case of S3A/SLSTR observations this analysis will show different results in the Northern and Southern Hemispheres, due to the viewing geometry of the oblique camera.
The aerosol product is evaluated against ground observations (AERONET) and cross-sensor measurements (MODIS). The results show good temporal and spatial agreement and no systematic bias. Case studies will be shown to highlight CISAR improved spatial coverage and its capability of discriminating aerosols from clouds.
Aerosols are fine particulates suspended in the atmosphere and are of great importance in regional and climate studies. Although Earth observation satellites have, in recent decades, provided a wealth of information for better understanding the spatiotemporal variability of aerosols, some disparities between studies still remain. In this context, the SYNergy AOD (SYN AOD) product from the Copernicus Sentinel-3 mission gains prominence, as it is produced by exploiting the spectral and angular information of the Level-1 data of the OLCI and SLSTR instruments. This is achieved by a dedicated processor that exploits the synergy of the OLCI Level-1 Full Resolution products and the SLSTR Level-1 Radiance products. In the current study, an attempt is made to understand the spatial variation of the SYN AOD product with respect to other satellite products (MODIS Collection 6.0) for observations over climatologically different parts of the globe. In addition, within the framework of the OPT-MPC set up by ESA, development activities are carried out to improve the SYN AOD processor, taking lessons from the MODIS AOD retrievals. The study presents the first results of the satellite AOD intercomparison exercise along with the results from the development activities. Finally, the future plans for the SYN AOD project will be presented.
In the 2020s, a number of satellites with Multi-Angle Polarimetric (MAP) instruments will be launched, such as METOP-SG from ESA/EUMETSAT with the 3MI polarimeter, the NASA PACE mission with the SPEXone and HARP-2 polarimeters on board, the ESA CO2M mission, and, in the late 2020s, the NASA ATMOS mission. MAP measurements provide the highest information content on aerosol properties from a passive remote sensing point of view and allow accurate retrieval of the aerosol optical properties (optical thickness, single scattering albedo, phase function) and microphysical properties (size distribution, refractive index, shape) needed for climate and air quality research. The expectation is therefore that the quality of aerosol remote sensing measurements will advance significantly in the coming years. To cope with the increased information content of MAP instrumentation, advanced retrieval algorithms need to be (further) developed. Full inversion approaches are needed that consider a continuous space of aerosol microphysical properties (size distribution, refractive index) instead of standard aerosol models, and that properly account for land or ocean reflection by retrieving land or ocean parameters simultaneously with the aerosol properties. Currently, two full inversion algorithms have proven capability at the global scale: the Generalized Retrieval of Aerosol and Surface Properties (GRASP) algorithm, developed at the University of Lille and the GRASP-SAS company, and the Remote Sensing of Trace gas and Aerosol Products (RemoTAP) algorithm, developed at SRON Netherlands Institute for Space Research. The ESA HARPOL (Harmonizing and advancing retrieval approaches for present and future polarimetric space-borne atmospheric missions) project is currently ongoing, with the following objectives: 1) identify strong and weak points of both GRASP and RemoTAP through intercomparison of aerosol/surface products and harmonized validation; 2) define the optimal set of aerosol properties to be retrieved from MAP measurements; 3) define the optimal aerosol, land surface and ocean reflection models to be used in MAP retrievals; 4) define the (possible) need for prior information; 5) understand the intrinsic limitations of aerosol retrieval from past and upcoming MAP measurements; 6) provide recommendations for future space-borne missions dedicated to atmospheric aerosol studies. Here, we report on the first results of the HARPOL project, based on both real measurements from POLDER-3/PARASOL and synthetic measurements.
Uncertainties in the global variation of aerosol hygroscopicity lead to uncertainties in model-predicted aerosol direct and indirect effects. Furthermore, biases in hygroscopicity may translate into biases in emissions estimated through model assimilation of observed aerosol optical depth. Water-soluble aerosols generally swell with increasing relative humidity according to their hygroscopicity, which may depend on their dry size, type and history. Furthermore, the fraction of aerosol that is insoluble varies substantially with, e.g., location and source. To provide observational constraints on aerosol hygroscopicity, here we present a remote sensing method to infer the water volume fraction in fine-mode aerosol, in addition to an estimate of the fraction of insoluble particles. The method makes use of multi-angle polarimetric observations that allow retrievals of aerosol optical depth, effective radius and variance, as well as the complex refractive index. Earlier studies indicate that the refractive index of soluble mixtures may be well approximated by the volume-weighted average of the respective refractive indices of water and the dry aerosol. Furthermore, the dry real parts of the refractive indices of common aerosol species are observed to be rather similar. Hence, the retrieved refractive index of aerosol at ambient conditions may be used to infer the volume fraction of absorbed water. To evaluate this remote sensing concept, we apply it to observations of the airborne Research Scanning Polarimeter (RSP) obtained during various field campaigns and compare the results to collocated in situ measurements of aerosol water fraction. The inferred water fraction generally agrees with the in situ observations within the estimated uncertainties. Furthermore, the retrieved fine-mode effective radius scales with water fraction as expected. In addition, the effective variance of the size distribution is generally observed to increase with water fraction, which indicates the presence of external mixtures of insoluble particles. We show that the relative rates of increase of effective variance and effective radius with water fraction within an observed region or time period may be used to estimate the fraction of insoluble aerosol. Finally, the retrieved wet size distributions and water fractions allow inference of the dry aerosol size distributions, which are shown to compare favorably with collocated in situ measurements. Further evaluation on simulated data will also be discussed. The upcoming accurate multi-angle space-borne polarimeters, such as SPEXone on NASA's PACE satellite, the MAP on CO2M (ESA) and 3MI on METOP-SG (ESA), provide the exciting prospect of applying the proposed method on a global scale to infer the aerosol water fraction, soluble fraction, and wet and dry size distributions, in addition to aerosol optical depth, number concentration and layer height.
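A minimal sketch of the volume-mixing inference described above; the dry and water real refractive indices are illustrative values, and the actual analysis works with the full complex refractive index and its retrieval uncertainties.

```python
def water_volume_fraction(n_retrieved, n_dry=1.50, n_water=1.33):
    """Infer the aerosol water volume fraction from the retrieved ambient real
    refractive index, assuming volume-weighted mixing:
        n_retrieved = f_w * n_water + (1 - f_w) * n_dry
    """
    f_w = (n_dry - n_retrieved) / (n_dry - n_water)
    return min(max(f_w, 0.0), 1.0)   # clip to the physical range

# Example: an ambient real refractive index of 1.40 implies ~59% water by volume
print(water_volume_fraction(1.40))
```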
The Polar Multi-sensor Aerosol product (PMAp) provides satellite-derived measurements of aerosol optical depth (AOD) at 550 nm globally over ocean and continents on a daily basis. Furthermore, the aerosol type and other related parameters, e.g. cloud optical thickness (COT), are provided in this L2 product. PMAp is derived from the synergistic use of three instruments: GOME-2, AVHRR, and IASI on board the Metop series of satellites of the EUMETSAT Polar System (EPS). The synergistic approach exploits the rich spectral (UV-TIR) content provided by the three sensors, the high spatial resolution of AVHRR, and the measurements of unpolarized and polarized reflectance by GOME-2 [1]. This combination, together with a computationally efficient collocation algorithm, allows refined detection of clouds and sub-pixel clouds and retrieval of aerosol properties in near real time (NRT), which is a unique feature for a multi-sensor product.
PMAp is developed at EUMETSAT and uses the operational ground-segment infrastructure to retrieve the aerosol properties in near real time; the product is disseminated to users, e.g. the Copernicus Atmosphere Monitoring Service (CAMS), at most three hours after sensing time. The first version of PMAp (over ocean) was released by EUMETSAT in April 2014. The AOD retrieval over land was added to the product in April 2016. After several updates of the retrieval algorithm, a new version of PMAp (v2.2) was released in May 2021.
The new version of PMAp brings the following enhancements: i) a significant improvement of the retrieval over land, and ii) an improvement of the consistency between Metop-A, B, and C over ocean. These changes specifically include: 1) a dust detection scheme in the PMAp pre-classification step exploiting IASI measurements, 2) a solution for the 'hotspot' issue in the retrieved AOD (resulting from clouds misidentified as aerosol), 3) an updated surface reflectance database compatible with the LER database derived from both Metop-A and Metop-B, 4) implementation of the angular dependency of the LER, 5) integration of a degradation correction procedure for the GOME-2 PMD Level 1b data of Metop-A, B and C to account for the aging of the sensor, and 6) an additional radiometric adjustment of the GOME-2 PMD-P radiances for Metop-A, B and C. More details can be found in the PMAp documents publicly available at https://www.eumetsat.int/new-version-metop-pmap-product-released-soon.
The validation studies using ground-based measurements, the comparison with other satellite AOD products (e.g. MODIS, VIIRS, S3), and the assimilation of PMAp AOD in the CAMS forecast system (alongside MODIS) all indicate an overall good performance of PMAp, which proves that the concept behind PMAp can be used as a baseline for the development of the new generation of synergy AOD products from EPS-SG [2]. The Multi-sensor Aerosol Product (MAP) will be the follow-on product of PMAp, extended to the remarkable capabilities of the platform: a hyper-instrument can be created from EPS-SG using the synergy of the 3MI, METimage, Sentinel-5, and IASI-NG instruments. This significantly richer dataset from EPS-SG offers a unique opportunity to improve the performance of the retrieval algorithm and to derive more parameters characterising aerosols.
References:
[1] M. Grzegorski, G. Poli, A. Cacciari, S. Jafariserajehlou, A. Holdak, R. Lang, M. Vazquez-Navarro, R. Munro, and B. Fougnie, "Multi-sensor Retrieval of Aerosol Optical Properties for Near-Real-Time Applications Using the Metop Series of Satellites: Concept, Detailed Description and First Validation", submitted to Remote Sensing, 2021.
[2] P. Schlüssel and G. Kayal, "Introduction to the next generation EUMETSAT Polar System (EPS-SG) observation missions", in Proc. SPIE 10423, Sensors, Systems, and Next-Generation Satellites XXI, pp. 1–16, 2017.
Atmospheric aerosols are an important part of the complex physical-chemical processes that impact Earth's climate. They are crucial to the Earth system but, because of the huge variability in their properties, their effects are not well understood. An upcoming French-US collaborative satellite mission called AOS (Atmosphere Observing System), planned by NASA and CNES, is expected to carry next-generation lidars and polarimeters. The spaceborne lidars on board AOS, and their synergy with the polarimeter, will potentially offer unprecedented capabilities to observe aerosols from space. These active instruments will probe aerosols at two wavelengths, measuring aerosol backscatter and depolarization profiles and, for one of the instruments, also offering high spectral resolution.
In this study, we have assessed the new capabilities to observe aerosols with the lidars on board AOS. A full nature-run experiment is implemented. The synthetic lidar measurements are generated using the MOCAGE chemistry-transport model as a pseudo-reality describing the global 3D distributions of several aerosol species. The aerosol distributions are sampled along a typical polar orbit to obtain transects of aerosol vertical profiles. We then add natural variability to the microphysical and optical properties of each aerosol species; the magnitude of this variability is similar to that observed in real AERONET ground-based sun-photometer retrievals at different locations. A forward model then simulates the aerosol vertical profiles measured by two lidars and a polarimeter. The two lidar configurations are lidar05 (2β+1α+2δ) and lidar09 (2βT2+2δ), where β, α and δ denote the backscatter coefficient, the extinction coefficient and the depolarization ratio, respectively. To bring the simulations for these lidar configurations close to reality, we add random noise that depends on the measuring conditions (night, day, spatial resolution). Finally, the synthetic lidar measurements are used as input to the retrieval based on the GRASP approach. This method derives the vertical profiles of the abundance of each aerosol type along the transects measured by the spaceborne lidar, and also allows bulk estimation of the aerosol profiles of concentration, size and optical properties.
The results show that the lidar with high spectral resolution performs far better than the backscatter lidar. This instrument enables the quantification of the vertical profiles of several aerosol types. On the other hand, the lidar without high spectral resolution is unable to discriminate the abundance of each aerosol type, except for desert dust. As expected, daytime noise slightly degrades the performance of the retrievals, but a similar relative performance of the lidars is obtained.
We also assess the gain in performance from the synergy between lidar and polarimeter. The two AOS lidars are used in combination with polarimeter measurements, and the comparison is carried out between the lidar-only retrieval and the lidar-polarimeter synergy.
Through their direct effects as well as their indirect effects on clouds, aerosols are key players in the radiative budget of the atmosphere, especially in the upper troposphere and stratosphere. The aerosol content of this altitude range is affected by natural as well as anthropogenic influences, such as volcanic eruptions, biomass burning with pyro-convection, convection in the monsoons lifting pollution upward from the boundary layer, and even possible human interventions through climate engineering measures or experiments.
To better understand aerosol processes at these altitudes it is important to monitor, in addition to the aerosol optical depth, further properties such as composition and volume density, and to quantify aerosol precursor gases with adequate horizontal and vertical resolution.
In recent years it has been shown that infrared limb-emission spectral observations allow the derivation of vertically resolved profiles of different kinds of secondary aerosols together with their precursor gases.
Here we will provide an overview of observations obtained from two space-based instruments, the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS/Envisat) and the Cryogenic Infrared Spectrometers and Telescopes for the Atmosphere (CRISTA), as well as airborne observations with GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere), the demonstrator of the EE11 candidate CAIRT (The Changing-Atmosphere InfraRed Tomography Explorer).
In the stratosphere, we will concentrate on the observations of sulfuric acid aerosols as well as their precursors, SO2 and OCS. In the upper troposphere, through IR spectroscopy with GLORIA, it was even possible to identify and quantify an aerosol species that was not expected to exist there at all: solid ammonium nitrate particles as a major component of the Asian Tropopause Aerosol Layer (ATAL). These particles have been shown to serve as very effective ice nuclei and thus have the capability to influence cirrus clouds. The aerosol measurements have been complemented by GLORIA observations of unprecedented concentrations of the ammonium nitrate precursor ammonia (NH3) in the upper troposphere, a gas which, at these altitudes, can so far only be measured by spectroscopic infrared remote sensing. In combination with IASI observations it was possible to trace the high-altitude observations of NH3 down to the ground sources.
Finally, we will present first assessments of the envisaged capabilities of CAIRT with respect to aerosol and precursor species. Due to its strongly increased vertical and horizontal sampling compared to its predecessors and flying in formation with IASI-NG, CAIRT is expected to gain much deeper insight into aerosol processes from the mid-troposphere to the upper stratosphere.
A local ensemble transform Kalman smoother (LETKS) is used to estimate aerosol emissions in the global climate/aerosol model ECHAM-HAM by assimilating retrievals from the multi-angle polarimeter POLDER. The assimilated observations (aerosol optical depth, Angstrom exponent and single scattering albedo) provide a wealth of information with which to correct the aerosol amount, size and composition simultaneously. The emissions are estimated per species (dust, sea salt, organic carbon, black carbon, sulfates and sulfate precursor gases), per sector (biomass burning and fossil fuel) and per size mode (Aitken, accumulation and coarse).
Evaluating ECHAM-HAM and other global climate models (AeroCom III and CMIP6) against ERA5 reveals that the relative humidity (RH) is overestimated in the lower troposphere. Based on ECHAM-HAM sensitivity studies, the overestimated relative humidity inflates water uptake and increases the aerosol optical depth by 19%. In order to quantify the effect of the overestimated relative humidity on the emission estimation, we conduct two sets of experiments: the first simulates water uptake according to the ECHAM-HAM relative humidity, while the second is based on the ERA5 relative humidity. Our data assimilation results reveal that sea salt and especially sulfate precursor emissions should be significantly higher when water uptake is modelled according to the ERA5 relative humidity. Furthermore, this increase coincides with densely populated areas (China, Europe, North America) where the anthropogenic emissions of sulfur dioxide are high, indicating that the anthropogenic emissions in current emission inventories are underestimated.
Estimation of fuel load is the step that introduces the greatest uncertainty in fire emission inventories. Most fire emission models adopt a 'bottom-up' approach, in which estimates of burned biomass are generated from remote observations of burned area (BA), active fire (AF) counts and/or fire radiative power (FRP). These burned biomass estimates are multiplied by biome-specific emission factors to convert each kilogram of burned dry matter into the amount of a trace gas or aerosol released into the atmosphere. Both the Global Fire Assimilation System (GFAS) and the Global Fire Emissions Database (GFED) use bottom-up approaches, but with some important differences. GFED relies on post-event BA detection and on a fuel load estimated through a vegetation growth model, and is not available in real time. GFAS uses real-time observations of FRP converted into combusted matter through static coefficients, and is used in real time to provide the fire-emission boundary conditions to the widely used atmospheric composition configuration of the Integrated Forecast System (IFS-COMPO). The two systems are not independent of each other, since the GFAS conversion coefficients are calibrated so that total annual emissions match the output of GFED, which is considered the benchmark.
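A minimal sketch of the bottom-up chain from fire radiative energy to species emissions; the conversion coefficient and emission factor used below are placeholders for illustration, not the GFAS/GFED values.

```python
def dry_matter_from_fre(fre_mj, conversion_kg_per_mj=0.37):
    """Convert fire radiative energy (MJ) into combusted dry matter (kg).
    The conversion coefficient here is a placeholder; GFAS-type systems use
    calibrated, biome-dependent coefficients."""
    return fre_mj * conversion_kg_per_mj

def species_emission(dry_matter_kg, emission_factor_g_per_kg):
    """Emission of a trace gas or aerosol species (g) from combusted dry matter (kg)."""
    return dry_matter_kg * emission_factor_g_per_kg

# Example: 1e6 MJ of FRE with a hypothetical organic-carbon emission factor of 4.7 g/kg
dm = dry_matter_from_fre(1.0e6)
oc_emission = species_emission(dm, 4.7)
```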
Kaiser et al. (2012) found that, despite the ad-hoc calibration performed, the global budget of fuel burnt (often referred to as dry matter, DM) estimated by the GFAS system diverged substantially from what is independently observed by MODIS (Moderate Resolution Imaging Spectroradiometer) when converted into aerosol optical depth (AOD). Since AOD measurements are operationally assimilated by IFS-COMPO, some of the GFAS fire emission components, such as black carbon or organic matter, are inflated by factors ranging from 3.4 to 6.12 to avoid rejection of valid observations in the assimilation cycle. The large errors introduced by the inaccuracies in the dry-matter estimation are known, and the use of inflation factors is broadly accepted in other operational systems. It is also recognized that in regions of small undetected fires the required multiplicative factors could be much larger.
Recently, to overcome these limitations, top-down methods have been proposed that eliminate the need for explicit knowledge of fuel loads to derive fire emissions. Mota and Wooster (2018) showed that satellite-derived fire radiative energy (FRE) can be used to quantify AOD directly, thereby short-cutting the need for dry-matter estimation. The benefit of the top-down approach is that it provides a framework to use FRE observations directly in the assimilation systems of composition models. However, top-down methods forgo the potential benefit of an explicit representation of vegetation processes in Earth system models. Information about the fuel available for burning is crucial in fire forecasting and prevention systems: fuel, weather and local topography are the only controlling factors of fire intensity and spread. Thus, methods to properly estimate the fuel load and the amount of dry matter released during burning are important not only for fire emissions but also for fire behaviour models, fire management practices and early warning systems.
One of the main problems is that daily fuel monitoring at a spatial scale useful for operational forecasts (of the order of 10–50 km) is still a challenge globally. Global applications must rely on remote sensing observations, either through image classification and photo interpretation or through indirect mapping, which derives fuel information from vegetation optical signals, from active or passive microwave measurements and from lidar sensors. In the last 10 years, two L-band microwave sensors have been performing systematic observations: the Soil Moisture and Ocean Salinity (SMOS) satellite, launched by ESA in November 2009, and the Soil Moisture Active Passive (SMAP) satellite, launched by NASA in January 2015. In particular, the full-polarization and multi-angular capabilities of SMOS allow the simultaneous retrieval of the soil moisture content and the L-band vegetation optical depth (L-VOD). Recently, Rodríguez-Fernández et al. (2018) found a quasi-linear relationship between L-VOD and benchmark estimates of above-ground biomass (AGB) derived through complex data fusion of in-situ inventory plots, lidar observations and optical and microwave imagery. The logistic relationship found opens a way to obtain real-time AGB estimates from L-VOD with future missions with L-band capabilities (e.g., L-ROSE, SMAR), also providing potential avenues for real-time estimations.
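A minimal sketch of how a logistic L-VOD-to-AGB relationship could be applied as a fuel-load proxy; the coefficients below are purely hypothetical, and the fitted values are those published by Rodríguez-Fernández et al. (2018).

```python
import numpy as np

def agb_from_lvod(l_vod, agb_max=300.0, steepness=4.0, midpoint=0.6):
    """Map L-band vegetation optical depth to above-ground biomass (t/ha)
    through a logistic curve; all parameters here are hypothetical."""
    return agb_max / (1.0 + np.exp(-steepness * (l_vod - midpoint)))

# Example: convert a transect of L-VOD retrievals into an AGB proxy for fuel load
agb = agb_from_lvod(np.array([0.2, 0.5, 0.9]))
```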
Here we explore the benefits of directly employing AGB estimates as a proxy for fuel load, by deriving estimates of the dry matter released into the atmosphere and comparing them with those derived through FRP conversion. The fuel load is transformed into combusted dry matter by employing two different BA datasets and an assumption on combustion completeness. To validate the quality of the dry-matter estimates, these are then converted into biomass burning aerosols and compared with independent AOD measurements. The direct use of AGB, while maintaining a bottom-up approach, could replace the vegetation-growth modelling used by GFED and could provide an estimate of dry matter that is independent of the fire-emitted energy used in the GFAS system. By exploiting independent observations to derive dry-matter estimates, this approach would limit the need to apply inflation factors to the final AOD estimates.
Aerosol scattering influences the retrieval of the column-averaged dry-air mole fraction of CO2 (XCO2) because of its effect on the path of solar radiation. Accurate CO2 retrievals depend, in part, on separation of aerosol and surface contributions to the measured radiance. However, for any given wavelength and aerosol type, there is a value of the surface albedo, referred to as critical albedo, where the top of atmosphere radiance in non-absorbing channels is completely insensitive to the aerosol loading. This is the situation for dust aerosol over deserts in the O2 A-band spectral region. The Orbiting Carbon Observatory-2 (OCO-2) measures XCO2 from space using three near-infrared bands, including the O2 A-band for aerosol estimation. However, the aerosol optical depth (AOD) retrieved by the OCO-2 full physics retrieval algorithm correlates poorly with ground-based AERONET estimates. This is especially true over deserts. Indeed, spurious XCO2 plumes have been observed over places such as Riyadh, Saudi Arabia.
We use a spectral sorting approach (Zeng et al., 2018, 2020) to train a two-step neural network that retrieves both aerosol loading (AOD) and vertical distribution (aerosol layer height; ALH), along with their uncertainties. The neural network prediction has a very good correlation with CALIPSO measurements. This information is used as a priori for retrieving XCO2, in order to correct for the bias in the OCO-2 Level-2 standard retrieval results. In addition, we use synthetic simulations to show that the correction using the improved a priori is physically realistic. Our results show that this correction is significantly better than the globally fitted bias correction in the OCO-2 LITE products. In particular, the LITE products have a bias of up to 3 ppm for the scenarios studied over Riyadh. The neural network methodology could have important implications for CO2 flux inversions.
© 2021 California Institute of Technology. All rights reserved.
This work presents the application of the OCRA/ROCINN algorithms for the retrieval of cloud macrophysical properties from deep-space ultraviolet (UV), visible (VIS) and near-infrared (NIR) measurements acquired by the EPIC/DSCOVR instrument.
EPIC (Earth Polychromatic Imaging Camera) on board DSCOVR (Deep Space Climate Observatory) is an instrument with 10 channels measuring from the UV to the NIR. Due to the location of DSCOVR, which follows a Lissajous orbit around the Lagrangian point L1 (about 1.5 million kilometers from Earth), EPIC offers a unique view of the Earth, with several daylight observations per day for most Earth locations and a roughly 10-kilometer ground resolution at the image center. Among its channels, two reference-absorption pairs in the oxygen A- and B-bands are available, which may be used for the retrieval of cloud properties.
We adapted the OCRA algorithm (Optical Cloud Recognition Algorithm) to estimate the radiometric cloud fraction observed by EPIC/DSCOVR in the ultraviolet and visible range. This radiometric cloud fraction is fed as input to the ROCINN algorithm (Retrieval Of Cloud Information using Neural Networks), in which the cloud-top height and the cloud optical thickness are retrieved by means of a regularised Gauss-Newton inversion. The forward model of this optimisation process is a set of two artificial neural networks trained to reproduce clear-sky and liquid-water cloudy-sky measurements. The training datasets for these neural networks are generated with the radiative transfer model DOME (Discrete Ordinates method with Matrix Exponential).
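A minimal sketch of one regularised Gauss-Newton step of the type used in such inversions; the state ordering, regularisation strength and the neural-network forward operator are placeholders, not the actual ROCINN implementation.

```python
import numpy as np

def gauss_newton_step(x, y_meas, forward, jacobian, x_apriori, gamma=1.0):
    """One regularised Gauss-Newton update for a small state vector
    (e.g. cloud-top height and cloud optical thickness).

    forward(x)  -> simulated measurements (here: a trained neural network)
    jacobian(x) -> matrix of partial derivatives of forward(x) w.r.t. x
    gamma       -> Tikhonov-type regularisation strength towards the a priori
    """
    K = jacobian(x)
    residual = y_meas - forward(x)
    lhs = K.T @ K + gamma * np.eye(x.size)
    rhs = K.T @ residual - gamma * (x - x_apriori)
    return x + np.linalg.solve(lhs, rhs)
```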
Together with the presentation of the latest version of the EPIC OCRA/ROCINN products, we provide an initial comparison with the MODIS daily cloud products and describe the current algorithm challenges.
Retrieval of water and ice cloud properties for TROPOMI/Sentinel-5P
Ana del Águila(1)*, Ronny Lutz(1), Víctor Molina García(1), Fabian Romahn(1), Diego Loyola(1)
(1) Remote Sensing Technology Institute, German Aerospace Center (DLR), 82234 Weßling, Germany
The accurate retrieval of water and ice cloud properties is crucial for satellite sensors to further understand the cloud physical processes in climate and weather prediction models. Satellite missions like Sentinel-5 Precursor (S5P) are designed to monitor air quality through trace gas and greenhouse gas retrievals. Precise information about cloud microphysical, macrophysical and optical properties is of great importance for retrieving accurate trace gas information. In this regard, the operational cloud products from TROPOMI developed by the German Aerospace Center (DLR) are used to enhance the accuracy of trace gas retrievals and to extend the satellite data record of cloud information. The cloud retrieval algorithm used is ROCINN (Retrieval Of Cloud Information using Neural Networks), which retrieves the cloud top height (CTH), cloud optical depth (COD) and cloud albedo (CA) from measurements in the NIR O2 A-band (758-771 nm). Two cloud models are considered for the ROCINN retrieval: (1) Clouds as Reflecting Boundaries (CRB) and (2) Clouds As scattering Layers (CAL). In a new implementation, the ice cloud retrieval is performed using the VLIDORT radiative transfer (RT) code version 2.8.3 as forward model, which includes an ice cloud parametrization. Specifically, the microphysical and optical properties of the ice crystal parametrization are based on Baum et al. [1]. It is therefore possible to perform ROCINN_CAL retrievals for both water and ice clouds.
Several test scenarios adapted from Level 2 operational data are used to investigate the performance of ROCINN for retrieving water and ice clouds. The dataset contains fully and partially cloudy scenarios for water and ice clouds placed at different CTH and with different COD. The retrieved CTH and COD are shown for both water and ice clouds. Furthermore, the impact of using ROCINN for water clouds when ice clouds are present is evaluated for the polar sun-synchronous orbit of the TROPOMI/S5P satellite.
[1] Baum B.A., P. Yang, A.J. Heymsfield, A. Bansemer, B.H. Cole, A. Merrelli, C. Schmitt, C. Wang, Ice cloud single-scattering property models with the full phase matrix at wavelengths from 0.2 to 100µm, J. Quant. Spectrosc. Radiat. Trans., 146, 123-139, 2014.
To support microwave remote sensing of hydrometeor contents an extensive database of single scattering properties has been created. The database is of general character but is specially tailored to match the input format expected by ARTS (Atmospheric Radiative Transfer Simulator). Another strong consideration for the database is measurements in the sub-mm domain, such as those of the upcoming Ice Cloud Imager (ICI) mission. The main part of the database covers ice particles having totally random orientation (TRO), at three temperatures and between frequencies of 1 and 886 GHz. This TRO part contains data for 35 habits, each having particles of at least 30 sizes. These habits represent pristine crystals, aggregates, hail and graupel. When TRO does not apply, it is generally believed that ice hydrometeors still exhibit azimuthally random orientation (ARO). The ARO case requires far more resources, both in terms of production and storage of the data, and so far just two such habits have been added to the database. Single scattering data for both spherical and flattened liquid drops have also been added.
The ultimate objective of the database is to provide the means for full and consistent simulations over the microwave region (including sub-mm), involving both passive and active observations. The presentation covers both ongoing extensions of the database and tests of the existing data.
The ambition is to add data for melting ice, to also cover mixed-phase particles. The database has been used in several studies where a fair match between simulated and observed radiances has been obtained. The habit "large plate aggregate" consistently emerges as a good candidate for a "one size fits all" microwave particle model. Acceptable retrievals involving radar and passive data up to 874 GHz have been performed, indicating that the objective of the database is within reach. However, particle orientation is still a challenge. In the absence of ARO data for multiple habits, an approach where TRO data are scaled to approximate the impact of particle orientation has been assessed. The approach has been shown to work well for passive conically scanning instruments. This approach and parts of the ARTS database have been integrated into the most recent version of RTTOV-SCATT, the microwave scattering solver applied at ECMWF and other weather centres. Scaling factors to also handle particle orientation in simulations of cross-track scanning and radar observations are being derived.
We present results of EUMETSAT’s study “Cloud Top Pressure Development from Sentinel-3 OLCI”, which focusses on the development of a scientific high-quality level-2 cloud top pressure (CTP) product. The retrieval uses OLCI’s top-of-the-atmosphere (TOA) solar radiances measured in the near-infrared spectral bands, which include all three O2 A-band channels around 760 nm. The major problem of the O2 A-band CTP retrieval is the photon penetration depth, which depends mainly on the vertical distribution of cloud properties. This is treated by introducing two additional state variables describing the vertical structure of the cloud extinction: the cloud geometrical thickness (CGT) and the ‘centre of gravity’ (CoG, the vertical position of the maximum scattering extinction). Furthermore, the consideration of the spectral characteristics of each pixel and their incorporation into the algorithm is of high importance.
The outputs of the retrieval process are CTP, CGT and CoG. The look-up tables used in the retrieval have been built on line-by-line and multiple-scattering simulations, which have been extended to consider the O2 A-band continuum absorption and variable cloud profiles. The quality of the new algorithm will be demonstrated based on comparisons with lidar cloud-top heights taken during the EUREC4A campaign in 2020. Further comparisons with one year of ground-based radar/lidar measurements at the US Department of Energy ARM Climate Research Facility in Oklahoma will be presented.
The SARAL/AltiKa mission and its Ka-band nadir altimeter offer a unique opportunity to assess the impact of large atmospheric attenuation on the radar altimeter. The use of Ka-band for the radar altimeter allows a reduction of the footprint from about 15 km with the historical Ku-band instruments to about 4 km. But it also comes with a larger sensitivity to the atmospheric attenuation, especially under rainy conditions.
As the radiometers on board altimetry missions have a coarser spatial resolution (12 km for the MWR on board SARAL/AltiKa), a new approach is proposed here, based on the 40 Hz sigma naught (one point every 175 m), to characterize the impact of rain cells on the measurements and to anticipate the availability of the observations performed by future two-dimensional swath Ka-band altimeters.
The present study is the first to directly use the time series of the Ka-band altimeter backscattering coefficient, whereas previous studies relied on microwave radiometer (MWR) observations or model analyses with coarser resolutions. The Attenuation CElls Characterization ALgorithm (ACECAL) approach combines low-pass filtering and a non-linear fit to retrieve the amplitude of the atmospheric attenuation at Ka-band, and the size and occurrence of rain cells. It not only provides more representative statistics on rain cells (occurrence, amplitude, size), but also describes the internal structure of the cells.
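A conceptual sketch of this type of processing is given below, assuming a synthetic 40 Hz sigma0 series: a low-pass background estimate combined with a non-linear fit of a Gaussian-shaped attenuation dip. The cell shape, filter choice and all numbers are illustrative assumptions, not the ACECAL implementation.

```python
# Estimate rain-cell attenuation amplitude and size from a Ka-band sigma0 series.
import numpy as np
from scipy.optimize import curve_fit
from scipy.ndimage import median_filter

dx = 0.175                                    # km between 40 Hz samples
x = np.arange(0, 60, dx)                      # along-track distance (km)
rng = np.random.default_rng(1)
sigma0 = 11.0 + 0.3 * rng.normal(size=x.size)             # clear-sky backscatter (dB)
sigma0 -= 4.0 * np.exp(-0.5 * ((x - 30.0) / 4.0) ** 2)    # synthetic rain-cell dip

background = median_filter(sigma0, size=201)  # low-pass estimate of the background

def dip(x, amp, x0, width):
    """Gaussian-shaped attenuation dip relative to the background (dB)."""
    return -amp * np.exp(-0.5 * ((x - x0) / width) ** 2)

popt, _ = curve_fit(dip, x, sigma0 - background, p0=[2.0, x.mean(), 5.0])
amp_db, centre_km, width_km = popt
# Report a rough effective cell diameter (here taken as 4 times the Gaussian width)
print(f"attenuation ~{amp_db:.1f} dB, cell diameter ~{4 * width_km:.0f} km")
```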
At global scale and for a nadir instrument, the number of observations strongly impacted by the atmospheric attenuation is limited, with a proportion of observations belonging to rain cells lying between 5% and 10%.
Concerning the atmospheric attenuation within the rain cells, the previous studies relied on radiometer observations or model analyses and thus under-estimated the actual amplitude of the atmospheric attenuation caused by rain by a factor of four: the global median attenuation under rainy conditions is about 3.5 dB, and 10% of the attenuations are larger than 13 dB.
One originality of the method presented here is also to provide robust statistics on the rain cell diameters, their occurrence and geographical distribution. The median diameter is 15 km, 10% of the rain cells have a size larger than 41 km, and sizes are also larger at higher latitudes than over the tropics.
This work also demonstrates the capability of the Ka-band radar altimeter to provide observations strongly consistent with the measurements provided by missions dedicated to precipitation, such as the precipitation radar on board the TRMM mission. The retrieved rain rates and rain cell sizes, as well as the characterization of the internal peaks, would certainly benefit the community if distributed as secondary products of altimetry missions, especially if the approach can be generalized to the future two-dimensional swath altimetry missions.
Forward models are a key tool for comparing observations and models by converting the output of atmospheric numerical models into synthetic observations. Because they can create such synthetic observations, forward models are very beneficial for studies related to future satellite missions like MWI and ICI to be flown on MetOp-SG. Such tools can help to understand the expected observations. They are also an integral part of inversion algorithms that aim to retrieve geophysical variables of interest from observations.
Here, the comprehensive microwave forward model PAMTRA (Passive and Active Microwave TRAnsfer) is introduced, which can simulate passive and active measurements across the microwave spectral region up to 800 GHz. The passive forward model in PAMTRA provides up- and down-welling polarized brightness temperatures and radiances for arbitrary observation angles, while the active forward simulator is capable of simulating the full radar Doppler spectrum and its moments. Both can be applied to arbitrary plane-parallel atmospheric scenes, including those with complex hydrometeor assumptions. PAMTRA implements various gas absorption models and methods for the approximation of the scattering properties (Mie, T-matrix, DDA, self-similar Rayleigh-Gans) and uses the same for the passive and active forward simulations. To give an estimate of the surface emissivity of ocean and land needed for passive microwave applications, several tools are included. The PAMTRA framework includes interfaces to various atmospheric models and considers their respective assumptions in the microphysical schemes with different complexity, like one- or two-moment schemes or full bin microphysics. The core module is written in FORTRAN90, whereas the framework and user interface are Python-based. Therefore, the model is easy to use and to extend.
In this presentation we will introduce the complete PAMTRA framework. Using various examples, we will furthermore demonstrate PAMTRA's capabilities to simulate active and passive observations for space-, air- and ground-based instruments by making use of cloud resolving model output and measurements from various campaigns.
The dataset collected during the Radar Snow Experiment (RadSnowExp) presents the first-ever triple-frequency radar sensing combined with almost perfectly co-located and coincident in situ microphysics probes on board a single airborne platform, the National Research Council Canada (NRC) Convair-580 aircraft. The whole RadSnowExp dataset includes more than 12 hours of flight data in mixed-phase and glaciated clouds, with more than 3.4 hours when the scattering was non-Rayleigh for at least one of the radar frequencies. The potential of this dataset is illustrated using data collected from a selected flight during an Arctic storm, which covers a wide range of snow habits, from pristine ice crystals and low-density aggregates to heavily rimed particles with maximum size exceeding 10 mm. Three different flight segments with well-matched in situ and radar measurements were analysed, giving a total of 49 minutes of triple-frequency observations. In addition, the in situ particle data for this study include high-resolution imagery from the Cloud Particle Imager (CPI), which allows accurate identification of particle habits, including rimed crystals and large aggregates, within the dual-frequency ratio (DFR) plane. The airborne triple-frequency radar data are grouped based on the dominant particle compositions and microphysical processes (level of aggregation and riming). The results from this study are consistent with the main findings of previous modelling studies, with specific regions of the DFR plane associated with unique scattering properties of different ice habits, especially in clouds where the radar signal is dominated by large aggregates. Moreover, the analysis shows close relationships between the triple-frequency signatures and cloud microphysical properties (particle characteristic size, bulk density, and level of riming).
The Arctic radiative budget is strongly influenced by cloud variability and properties, and many processes of cloud formation, evolution and dissipation still need further and deeper understanding. The spectral signature of clouds, i.e., the radiances, can be recorded and used to discriminate the thermodynamic phase, the effective radius and the optical depth when different spectral features are analysed, as suggested by, e.g., LeBlanc et al. (2015) and Marshak et al. (2004). Mixed-phase clouds represent the most complex case, while liquid and ice clouds display more homogeneous characteristics that are easier to model and interpret. In the Arctic environment, thin liquid clouds with a low water path (< 20 g/m^2) are particularly interesting due to their radiative effects, which can produce significant melt, as happened in the event analysed by Bennartz et al. (2013). Moreover, satellite retrievals need to be validated against ground-based measurements, and there is a general lack of such datasets.
The Thule High Arctic Atmospheric Observatory (THAAO, 76.5°N, 68.8°W, elev. 220 m a.s.l.) is an international observatory hosted by the United States Space Force at Thule Air Base, Greenland. Various Italian (INGV, ENEA, Univ. of Florence, Univ. Sapienza) and North American (NCAR, NASA, NSF, Univ. of Alaska) institutions have deployed instrumentation that is operating at the THAAO. During the last two decades, many measurement campaigns have been conducted at THAAO and have produced several different datasets of great relevance in the field of atmospheric physics and Arctic climate. Among the operational long-term observations are upward and downward solar irradiances, atmospheric profiles of water vapour and temperature, columnar water vapour, and cloud height and depth. A zenith sky spectrophotometer (310-950 nm) has recently been installed to help in assessing cloud properties from the measured radiances. Most of these datasets are publicly available on the THAAO web portal (https://www.thuleatmos-it.it/).
The collected observations are being integrated with the aim to 1) characterise cloudiness above the observatory; 2) verify and improve cloud classification through the use of shortwave and longwave irradiances; 3) provide insights on the identification of mixed-phase clouds from spectral radiances; 4) compare the results with satellite remote sensing observations. Preliminary results on a few case studies concerning the properties and effects of liquid homogeneous clouds, and on the integration of different measurements for the derivation of cloud properties, will be presented.
Stratiform rain and the overlying ice that generates it are leading components of the Earth's climate system. From a microphysics perspective, it is the ice mass concentration and particle size that exert first-order control on the mass flux through the cloud and, by extension, the underlying rain rate. The Dual-frequency Precipitation Radar (DPR) on NASA's core Global Precipitation Measurement satellite is the first space-borne instrument that offers the opportunity of a dual-wavelength radar retrieval of these variables.
Our focus is placed on DPR observations over the precipitation columns with a confidently identified bright band. The analysis indicates a sharp increase in retrieved mass flux from ice to rain phase in the current algorithm, which is inconsistent with the expectation that mass flux varies little across the bright band under conditions of stratiform rain.
The algorithm proposed here imposes continuity of the precipitation rate across the bright band, which additionally helps in deriving bulk ice density. It is based on Bayes' rule, with riming parameterized by the “fill-in” model. The radar reflectivities are simulated using scattering models corresponding to realistic snowflake shapes. The algorithm is validated using the co-located polarimetric radar data collected for the GPM ground validation program. In the future, this dataset will be used for multi-frequency radar studies that aim at constructing high-quality training datasets for the artificial intelligence algorithms that are necessary to analyse the huge volume of data that will be generated by upcoming space missions such as the one proposed by Tomorrow.io.
The C2OMODO (Convective Core Observations through MicrOwave Derivatives in the trOpics) project proposes to characterize convective clouds via a convoy of two satellites observing the atmosphere with microwave radiometry. The originality of C2OMODO is the slight time delay (30 to 120 s) between the two satellites, thus providing a temporal derivative of high-resolution brightness temperatures (TB). This concept is currently part of the French contribution to the NASA Aerosol, Clouds, Convection, Precipitation (ACCP) project, addressing its ‘Convection-Precipitation’ objectives.
The two radiometers, new-generation microwave sounders named SAPHIR-NG, are adapted from the successful SAPHIR instrument on board the Megha-Tropiques mission. They benefit from technology evolutions and improvements achieved over the past years, thus enhancing the measurement performance in terms of resolution and spectral characteristics.
The success of this concept depends on the accuracy of the collocation of the observations performed by the two instruments.
This study, funded by CNES, assesses the impact of the collocation discrepancies on the error in the difference between successive TB. It is proposed to independently interpolate the observations of both missions onto a common regular longitude/latitude grid, using the Backus-Gilbert (BG) approach to mitigate the interpolation error. The performance of the BG solution is also assessed for three different instrumental configurations in terms of rotor speed.
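For readers unfamiliar with the Backus-Gilbert approach, a minimal 1-D sketch follows: the weights combining overlapping fields of view are chosen to concentrate the combined antenna response at a target grid point while keeping a unit total response. This is only illustrative (the study works in 2-D on real antenna patterns); footprint size, sampling and all numbers are assumptions.

```python
# Minimal 1-D Backus-Gilbert remapping: weights a minimise the spread
# a^T G a subject to a unit-area combined response (v^T a = 1).
import numpy as np

x = np.linspace(-50.0, 50.0, 2001)              # along-track coordinate (km)
dx = x[1] - x[0]
fov_centres = np.arange(-30.0, 31.0, 10.0)      # centres of the native FOVs (km)
fwhm = 20.0                                     # assumed footprint size (km)
sig = fwhm / 2.355
gains = np.array([np.exp(-0.5 * ((x - c) / sig) ** 2) for c in fov_centres])
gains /= (gains.sum(axis=1, keepdims=True) * dx)   # normalise each gain to unit area

x_target = 3.0                                   # target grid point (km)
spread = (x - x_target) ** 2                     # Backus-Gilbert spread penalty
G = (gains * spread) @ gains.T * dx              # G_ij = integral (x - x0)^2 g_i g_j dx
v = gains.sum(axis=1) * dx                       # v_i = integral g_i dx (= 1 here)
Ginv_v = np.linalg.solve(G + 1e-8 * np.eye(len(v)), v)   # small damping for stability
a = Ginv_v / (v @ Ginv_v)                        # BG coefficients

tb_native = np.array([240., 242., 250., 265., 255., 246., 241.])  # example TBs (K)
tb_remapped = a @ tb_native                      # remapped TB at the grid point
```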
Performance is assessed using synthetic observations over a high-resolution simulation of the Hector thundercloud, obtained with the mesoscale atmospheric model MesoNH.
The conclusion is that a higher rotation speed, associated with a denser coverage of the scene by the Fields Of View (FOV), is the most suitable configuration. The error due to the interpolation process is acceptable, at the cost of a degradation of the initial spatial resolution of the FOV by a factor of 2.
Utilising satellite imager measurements to retrieve cloud properties requires knowledge of the complex refractive index, size distribution, shape and habit of the water and ice particles that potentially make up the cloud of interest. Further complications arise in the case of quantifying a volcanic ash cloud where ash can be dispersed over lower-level water clouds, upper-tropospheric ice clouds or produce ice/water/ash mixtures. Volcanic ash cloud retrievals are commonly performed using longwave (thermal infrared) measurements due to the strong absorption peak of SiO2 near the 9.5 micron wavelength and the day/night operational requirement of the Volcanic Ash Advisory Centres. In contrast, cloud retrieval methodologies often employ shortwave channels (visible and near-infrared) to retrieve cloud optical properties (i.e. optical depth and effective radius). Here we combine shortwave and longwave measurements from the Advanced Himawari Imager (AHI) aboard Himawari-8 to better characterise water, ice and ash cloud properties during the June 2019 Raikoke eruption. This eruption produced ash that dispersed over widespread low-level stratus cloud, mid-level (~5 km asl) water cloud and appears to have initiated ice nucleation during the early stages of the eruption.
The Optimal Retrieval of Aerosol and Cloud (ORAC) algorithm uses the optimal estimation approach to retrieve state variables by minimising a cost function and has been developed so that the single-scatter properties of cloud, aerosols and ash can be easily modified within the forward model to represent complex atmospheric states. In addition, ORAC can be used to retrieve cloud properties in multi-layered scenarios (e.g. ash over water or ice cloud) and include various ice particle habits (e.g. spherical, aggregate solid columns, general habit mixtures and solid columns) and ash types (e.g. Chaiten, Mt Spurr and Eyjafjallajokull ash). We show that comparisons of the cost at solution can be used to identify the best-fitting single-scattering properties on a per-pixel basis. We also show that the cost can be used to distinguish between tropospheric and stratospheric ash layers, in addition to identifying multi-layer versus single-layer cloud, and we verify these inferences with collocated CALIPSO data. Our work demonstrates that complex refractive index data are critical for this type of analysis not only for ice and water clouds, but that laboratory measurements of several different types of ash are also required. We show that the retrieval results of meteorological cloud properties that use shortwave channels can be used in more sophisticated forward models where multi-layered cloud scenarios involving volcanic ash overlying water or ice clouds must be considered.
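A generic sketch of the per-pixel selection idea is given below: the optimal estimation cost at solution is evaluated for each candidate set of single-scattering properties (e.g. different ash types) and the lowest-cost candidate is kept. The cost function form is the standard optimal estimation one; the states, covariances and candidate names are hypothetical examples, not ORAC code or results.

```python
# Compare the optimal estimation cost at solution across candidate particle types.
import numpy as np

def oe_cost(y, F_x, x, x_a, S_y, S_a):
    """Optimal estimation cost: measurement misfit plus departure from a priori."""
    dy = y - F_x
    dx = x - x_a
    return dy @ np.linalg.solve(S_y, dy) + dx @ np.linalg.solve(S_a, dx)

def select_candidate(y, retrievals, x_a, S_y, S_a):
    """retrievals: dict of candidate name -> (retrieved state x, forward model F(x))."""
    costs = {name: oe_cost(y, F_x, x, x_a, S_y, S_a)
             for name, (x, F_x) in retrievals.items()}
    return min(costs, key=costs.get), costs

# Hypothetical single-pixel example with two candidate ash types
y = np.array([280.0, 270.0, 265.0])                     # measured BTs (K)
S_y = np.diag([0.5, 0.5, 0.5]) ** 2
x_a = np.array([1.0, 10.0, 5.0])                        # a priori AOD, reff, height
S_a = np.diag([1.0, 5.0, 3.0]) ** 2
retrievals = {
    "eyjafjallajokull": (np.array([0.8, 8.0, 6.0]), np.array([279.5, 270.4, 265.2])),
    "chaiten":          (np.array([1.2, 12.0, 4.5]), np.array([278.0, 268.5, 263.0])),
}
best, costs = select_candidate(y, retrievals, x_a, S_y, S_a)
```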
EUMETSAT's cloud microphysics algorithm OCA (Optimal Cloud Analysis), developed and operated on MSG SEVIRI data since 2013, will form the basis for initial operations with MTG-FCI. In recent years, several improvements to the baseline algorithm have been identified and tested, and these will be incorporated into the FCI version at the earliest opportunity after launch and commissioning in 2023.
The retrieval method consists of an initialisation step to set the phase (ice, liquid) of the cloud in the FCI observation, followed by a cost function minimisation process using a fast radiative transfer model (RTM) to simulate the visible, near-infrared and thermal channel measurements for a particular cloud state (height, optical thickness and particle effective radius). This yields the retrieved products and their estimated errors under the assumption that the cloud is essentially a single layer. Diagnostics of the retrieval are, however, used to detect the presence of multi-layer or ‘overlapping’ clouds, and in these cases (some ~20%) the algorithm reprocesses the pixel assuming two cloud layers, using thermal infrared channels only.
The identified improvements to the scheme arise in 3 areas:
• The initialisation of cloud phase and the identification of multi-layer clouds can be improved using the technique of effective absorption optical depth ratios (Pavolonis, 2010). The optimum multi-layer detection appears to be a combination of this technique and the original internal diagnostics information.
• The treatment of the multi-layer cases using thermal infrared channels only is computationally efficient but sub-optimal and results from the re-use of the single layer fast RTM. By treating the radiative interactions between the two layers correctly and enabling full use of the solar channels, both the retrieved effective radii and lower cloud optical thickness are improved compared to the original method.
• The fast RTM employs Lookup Tables (LUTs) of pre-calculated cloud radiative properties. These are based, as is usual, on the assumption of clouds as optically thick but geometrically thin scattering layers. A result of this assumption is that the cloud tends to be placed by the retrieval at the effective radiating height of the real cloud, i.e. some considerable distance (typically 1-3 km for ice and rather less for liquid clouds) below the real cloud top as would be defined by the first significant cloud particle concentrations. By populating the LUTs with radiative properties calculated with a vertically inhomogeneous cloud model based on CloudSat/CALIPSO observations, this bias in cloud height is significantly reduced.
We describe briefly the retrieval scheme and the upgrades and demonstrate the improved performance with reference to the DARDAR L2 synergistic product from CloudSat/CALIPSO.
An algorithm based on triple-frequency (X, Ka, W) radar measurements that retrieves the size, water content and degree of riming of ice clouds is presented. This study exploits the potential of multi-frequency radar measurements to provide information on bulk snow density. The presented algorithm is based on Bayes' rule, with riming parameterized by the “fill-in” model. The radar reflectivities are simulated using scattering models corresponding to realistic snowflake shapes. The algorithm is tested on multi-frequency radar data collected during the ESA-funded Radar Snow Experiment. During this campaign in-situ microphysical probes were mounted on the same airplane as the radars. This nearly perfectly collocated dataset of remote and in-situ measurements gives an opportunity to derive a combined multi-instrument estimate of snow microphysical properties that is used for a rigorous validation of the radar retrieval. Results suggest that the triple-frequency retrieval performs well in estimating ice water content and mean-mass-weighted diameters, obtaining root-mean-square errors of 0.13 and 0.15 for log10(IWC) and log10(Dm), respectively. The retrieval of the degree of riming is more challenging, and only the algorithm that uses Doppler information obtains results that are highly correlated with the in-situ data.
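A hedged sketch of a Bayesian database retrieval of this kind follows: each database entry (in the study, forward-simulated from realistic snowflake shape models) is weighted by its Gaussian likelihood given the observed X/Ka/W reflectivities, and posterior-mean IWC and Dm are formed. The database below is a random placeholder, and the riming ("fill-in") parameterization is omitted.

```python
# Bayesian weighting of an a priori database by a triple-frequency radar observation.
import numpy as np

rng = np.random.default_rng(42)
n_db = 50_000
log_iwc = rng.uniform(-2.5, 0.5, n_db)            # log10(IWC [g m-3]) in the database
log_dm = rng.uniform(-0.5, 1.0, n_db)             # log10(Dm [mm]) in the database
# Placeholder forward-simulated reflectivities (dBZ) for each database entry
z_db = np.column_stack([
    10 * log_iwc + 15 * log_dm + 20,              # X band
    10 * log_iwc + 10 * log_dm + 18,              # Ka band
    10 * log_iwc + 5 * log_dm + 15,               # W band
])

z_obs = np.array([12.0, 8.0, 5.0])                # observed reflectivities (dBZ)
sigma_z = 1.0                                     # assumed radar uncertainty (dB)

# Posterior weights from a Gaussian measurement likelihood
chi2 = np.sum(((z_db - z_obs) / sigma_z) ** 2, axis=1)
w = np.exp(-0.5 * (chi2 - chi2.min()))
w /= w.sum()

log_iwc_hat = np.sum(w * log_iwc)                 # posterior-mean log10(IWC)
log_dm_hat = np.sum(w * log_dm)                   # posterior-mean log10(Dm)
log_iwc_std = np.sqrt(np.sum(w * (log_iwc - log_iwc_hat) ** 2))  # posterior spread
```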
The Central Andes are characterized by a steep climatic and environmental gradient with large spatial and temporal variations of associated hydrological parameters. There are two main atmospheric processes that influence climate conditions in our study area in northwestern Argentina: the South American Monsoon System (SAMS) that transports moisture from the Atlantic Ocean via the low-level jets and the orographic barrier of the Eastern Cordillera that forces focused rainfall at the windward facing slopes.
Our research aims at monitoring water vapour (WV) in the south-central Andes, in order to track moisture propagation. In accordance with the needs of the research, we processed data from two new Global Navigation Satellite System (GNSS) ground stations that were installed in spring 2019, along with already calculated solutions that were derived from an existing network. We used 10-year-long time series from 31 stations spanning an altitude range from 198 to 5141 m a.s.l. and stretching from the mountain front to the interior of the mountain range. This enhanced network helped us to examine spatial correlations, as well as differences in the behaviour of the WV across the climatic gradient. Moreover, we retrieved the gradients of the WV at single positions, in order to study seasonal correlations between wind and gradient direction.
In this investigation, both the zenith wet delays (ZWDs), from which the WV is derived, and their gradients, which point in the azimuthal direction of greater wet delay, are taken into consideration. The analysis steps include classification according to the monthly averaged WV values, spectral analysis, signal decomposition, frequency-magnitude analysis, examination of the WV propagation during heavy precipitation events and analysis of the gradients of the ZWDs. We have found that GNSS is better suited to track moisture propagation compared to the state-of-the-art atmospheric reanalysis (ERA5). Our analysis also suggests that the topography closely surrounding a GNSS site affects the shape of the WV seasonal patterns and the relation between wet gradients and wind.
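For illustration, the standard-style conversion from ZWD to precipitable water vapour is sketched below; the refractivity constants and the Bevis-type mean-temperature relation are approximate literature values and are not necessarily those used in this study's processing.

```python
# Convert GNSS zenith wet delay (ZWD, in m) to precipitable water vapour (PWV, in mm).
def zwd_to_pwv_mm(zwd_m, surface_temp_k):
    Rv = 461.5            # J kg-1 K-1, specific gas constant of water vapour
    k2p = 22.1            # K hPa-1 (approximate literature value)
    k3 = 3.739e5          # K^2 hPa-1 (approximate literature value)
    Tm = 70.2 + 0.72 * surface_temp_k     # Bevis-type weighted mean temperature (K)
    # PWV[mm] equals IWV[kg m-2]; the 1e8 factor combines the 1e-6 refractivity
    # scaling and the hPa -> Pa conversion. The equivalent dimensionless ratio
    # PWV/ZWD (same units) is about 0.15.
    return 1e8 * zwd_m / (Rv * (k2p + k3 / Tm))

pwv = zwd_to_pwv_mm(zwd_m=0.15, surface_temp_k=288.0)   # roughly 23 mm in this example
```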
Without gravity waves, the atmospheric circulations as we know them cannot exist. Despite its importance, gravity wave refraction remains largely neglected by the community, and detailed model validation of horizontal refraction and oblique propagation by observation is still sparse. Refraction makes an important contribution to the amount of gravity wave momentum flux and affects the location of its deposition: it results in a shorter (longer) gravity wave horizontal wavelength and increases (decreases) the gravity wave momentum flux of the wave packet. In an attempt to improve our understanding of the propagation and dissipation of gravity waves, the SouthTRAC campaign was performed with a focus on the Southern Andes hotspot of gravity wave activity. During the campaign a minor sudden stratospheric warming occurred, heavily influencing gravity wave propagation and refraction, and thereby the location of gravity wave momentum flux deposition. This study uses observational data from this campaign collected by the German research aircraft HALO on 12 September 2019, the last flight during which gravity waves could still propagate across the stratopause.
Temperature observations include, amongst others, measurements of the troposphere and lower stratosphere collected by GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere), and of the stratosphere and lower mesosphere collected by ALIMA (Airborne Lidar for the Middle Atmosphere). GLORIA is the airborne demonstrator of the satellite-based infrared limb imager proposed for the Earth Explorer 11 candidate CAIRT. ALIMA produces a curtain retrieval from 20 km to 60 km along the flight path. However, a single curtain of observations does not provide accurate horizontal wavelengths and wave orientations of observed gravity waves. Through creative flight planning we obtained data in a racetrack pattern (two parallel flight legs), which we can combine to get an accurate wavelength and wave orientation. For the first time, ALIMA data can be used on their own to obtain the 3-D wavevector.
Refraction is identified in multiple different gravity wave packets between ~4 km and 58 km. One gravity wave packet observed by GLORIA and one observed by ALIMA are used to study and explain refraction. Supplementing the observations are GROGRAT (the Gravity-wave Regional Or Global Ray Tracer), a simple mountain wave model, ERA5 (European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis 5th Generation) data, high-resolution (3 km) WRF (Weather Research and Forecasting model) data and satellite imagery. We observe a 25% increase in gravity wave momentum flux, which is attributed to refraction. Contrary to previous low-resolution model studies (like Hasha et al., 2008), we find that in this case refraction makes a noteworthy contribution to both the amount and the location of gravity wave momentum flux deposition. This work illustrates the capability of proposed space missions.
High altitude and long range airborne observations allow us to probe mesoscale signatures of gravity waves at remote locations of the world. The SouthTRAC-GW (Southern hemisphere Transport, Dynamics, and Chemistry - Gravity Waves) mission took place in September and November 2019 and aimed at probing gravity waves in the hotspot region around South America and the Antarctic peninsula. During SouthTRAC, the instruments GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere), ALIMA (Airborne LIdar for Middle Atmosphere research) and BAHAMAS (Basic Halo Measurement and Sensor System) were deployed onboard the German HALO (High Altitude and LOng Range Research Aircraft) for detailed and focused studies of gravity waves in the southern hemisphere from the troposphere to the mesosphere. GLORIA is an imaging infrared limb sounder and provides measurements of temperature, trace gases and cloud parameters in the upper troposphere and lower stratosphere (UTLS) with high spatial, temporal and spectral resolution. The instrument is the airborne demonstrator of the satellite based infrared limb imager CAIRT (Changing-Atmosphere Infrared Tomography) that is proposed for the Earth Explorer 11 mission. Here, we present observations of non-orographic gravity waves by GLORIA, ALIMA and BAHAMAS in the vicinity of the subtropical jet stream above the South Atlantic and comparisons with ECMWF IFS (European Centre for Medium-Range Weather Forecasts – Integrated Forecasting System) high-resolution forecasts. The observed 2D distributions of temperature perturbations constructed from the GLORIA and ALIMA data show mesoscale patterns of non-orographic gravity waves, while the 100 Hz BAHAMAS in situ observations show further gravity wave signatures at the sub-mesoscale. The observations allow us to test the representation of non-orographic gravity waves in IFS forecasts in a complex scenario. Using the IFS, we furthermore analyse the forcing conditions of the observed scenario. Our study provides a perspective of the capabilities of proposed future space missions, which would allow global and continuous observations of gravity waves that would include remote locations of the world.
Polar winter descent of odd nitrogen produced by energetic particle precipitation represents an important vertical coupling mechanism transferring space weather signals from the lower thermosphere down to the polar stratosphere. Associated modulations of polar ozone affect temperature and winds, thus contributing to natural climate variability. Despite recent advances in disentangling this coupling chain, large uncertainties remain due to the lack of continuous and spatially resolved observational data. In addition, polar winter descent is not well reproduced in climate models due to missing constraints on the wave driving of this circulation.
The Changing-Atmosphere Infra-Red Tomography Explorer (CAIRT), selected for Phase 0 as one of four candidates for Earth Explorer 11, will provide unique constraints on this coupling mechanism, representing a major step towards a quantitative understanding of space weather impacts on stratospheric ozone and natural climate variability. It will also provide unprecedented information on transport, mixing and the driving of the large-scale circulation by different types of waves.
CAIRT will observe the Earth’s limb over a vertical range from 5 km to 115 km with an imaging Fourier-transform spectrometer. Flying in formation with MetOp-SG will make it possible to exploit synergies with the New Generation Infrared Atmospheric Sounding Interferometer (IASI-NG) and Sentinel-5, resulting in consistent atmospheric profile information from the surface up to the lower thermosphere. CAIRT will provide global observations of ozone, temperature, odd nitrogen and secondary reactive nitrogen compounds, long-lived tracers, as well as water vapour, key halogen species and tropospheric pollutant plumes, independently of solar illumination and at a much higher horizontal resolution and coverage than achieved from space so far.
Here, we will focus on the expected science benefits of the proposed CAIRT mission with respect to space weather impacts on stratospheric ozone and natural climate variability and report on first results from the ongoing Phase 0 science study.
An attempt has been made to inventory the soil resources of the Evora region of Portugal using Landsat 8 OLI satellite images acquired in the dry and rainy seasons, to investigate the hypothesis that there is a relationship between Landsat spectral reflectance and certain soil types and that this relationship can be used to map soils with reasonable accuracy. The study revealed that digital analysis of Landsat 8 OLI images has the capacity to map and delineate soil patterns with reasonable accuracy, especially when images are acquired during the dry season, when there are long periods of cloud-free skies, low soil moisture and minimal vegetation cover, in a semi-arid area such as the Evora district of Portugal. Landsat 8 OLI satellite images of July 2018 (dry season) and December 2018 (rainy season) have been used for this study. In addition to the Landsat 8 OLI images, high-resolution satellite images (i.e., Quickbird, IKONOS; source: Google Earth) and the available soil map of the Evora region were also correlated with the Landsat images. The major soil classes found are siliceous loamy, siliceous heavy, intermediate loamy, basic heavy and alluvial. The overall agreement between the Landsat classification and reference data was 72%, indicating a definite relationship between Landsat imagery and soil types. In terms of soilscape boundary delineation, the Landsat-derived map had a higher level of agreement with field observations than the conventional soil map. In addition, the study showed that, overall, upland areas have a better agreement with Landsat spectral data compared to lowland areas, probably due to the diverse origin of sediments and the low spatial extent of most landforms in lowland areas.
In addition to providing food, fibres or fuel, soils provide clean water; they protect us from floods and preserve our cultural heritage. A lack of sustainable land use, combined with growing population densities, changes in human consumption patterns and increasingly extreme weather events, degrades soil functions. Several soil-monitoring systems are in place in individual Member States of the European Union. In some cases they are fragmented, incomplete and in general not harmonized. Remote sensing promises a systematic and comprehensive monitoring and assessment of policy-relevant soil issues.
Our project, a Luxembourgish case study, supports the recent proposal by the European Commission for a Soil Health and Food Mission containing the ambitious challenge of ensuring that 75% of EU soils are healthy for food, people, nature and climate by 2030. The Ministry of the Environment, Climate and Sustainable Development and Trier University join forces to embark on mapping the soils in the Grand Duchy of Luxembourg in a spatially explicit manner with Copernicus Sentinel-2 data. We will explore the potential of multispectral satellite time series for countrywide mapping of soil properties and the possibility to initiate an operational soil monitoring system to fulfill the national mandate of watching over Luxembourg’s soils. The EU Soil Observatory was established to support corresponding strategies. In this context, the project collects soil information at a high temporal and spatial resolution.
We will utilize FORCE (Framework for Operational Radiometric Correction for Environmental monitoring: Frantz 2019), an in-house software package designed for mass-processing satellite image archives such as the freely and openly available Sentinel-2 data stream from the European Copernicus Program. We will create a national data cube of analysis-ready data that will hold atmospherically and topographically corrected surface reflectance datasets (Frantz et al. 2016) as nadir BRDF-adjusted reflectance (Roy et al. 2017). The reflectance data will be accompanied by per-pixel quality information generated with an established state-of-the-art cloud detection algorithm (Frantz et al. 2018). The data will be co-registered with Landsat Collection 2 imagery (Rufin et al. 2021) to enable the application of time series methods. We will mine all available data to extract as many soil observations as possible through pixel-based compositing. The resulting composite image will contain the maximum area of satellite-exposed bare soil surfaces, which will be the basis for several upcoming analyses and feasibility tests, including the application of supervised and unsupervised image classification methods to derive soil information layers such as soil classes or the soil organic carbon content.
The project delivers harmonized and quality-assured data and techniques to track and assess the EU's progress in the sustainable management of soils and the restoration of degraded soils. In this context, the results will contain an assessment of areas with different soil characteristics and techniques for monitoring current and future changes in soil carbon conditions and soil sealing, and thereby contribute to a deeper comprehension of climate change.
References:
Frantz, David. 2019. “FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond.” Remote Sensing 11 (9): 1124. https://doi.org/10.3390/rs11091124.
Frantz, David, Erik Haß, Andreas Uhl, Johannes Stoffels, and Joachim Hill. 2018. “Improvement of the Fmask Algorithm for Sentinel-2 Images: Separating Clouds from Bright Surfaces Based on Parallax Effects.” Remote Sensing of Environment 215 (September): 471–81. https://doi.org/10.1016/j.rse.2018.04.046.
Frantz, David, Achim Röder, Marion Stellmes, and Joachim Hill. 2016. “An Operational Radiometric Landsat Preprocessing Framework for Large-Area Time Series Applications.” IEEE Transactions on Geoscience and Remote Sensing 54 (7): 3928–43. https://doi.org/10.1109/TGRS.2016.2530856.
Rufin, P., D. Frantz, L. Yan, and P. Hostert. 2021. “Operational Coregistration of the Sentinel-2A/B Image Archive Using Multitemporal Landsat Spectral Averages.” IEEE Geoscience and Remote Sensing Letters 18 (4): 712–16. https://doi.org/10.1109/LGRS.2020.2982245.
Roy, David P., Jian Li, Hankui K. Zhang, Lin Yan, Haiyan Huang, and Zhongbin Li. 2017. “Examination of Sentinel-2A Multi-Spectral Instrument (MSI) Reflectance Anisotropy and the Suitability of a General Method to Normalize MSI Reflectance to Nadir BRDF Adjusted Reflectance.” Remote Sensing of Environment 199 (September): 25–38. https://doi.org/10.1016/j.rse.2017.06.019.
Spatiotemporal data can be analyzed by applying spatial, time-series, and machine learning algorithms to extract regional biological soil crust (biocrust) trends. The current study deals with image analysis of the spatial and temporal patterns created by biocrusts in the sandfield of the northwestern Negev in Israel. Biocrusts are thin layers of cyanobacteria and other organisms living on the topsoil that play a vital role in many low-productivity ecosystems, especially deserts. In the study area, the biocrusts stabilize the mobile dunes, enrich soil nutrients, change the water regime, and are responsible for further environmental changes. Analyzing the spatial trends of biocrusts through time, using satellite imagery, may improve the quantification and understanding of their change drivers. The present work strives to develop a unique framework for analyzing spatiotemporal trends of the spectral Crust Index (CI), thus identifying the drivers of the biocrusts’ spatial and temporal patterns. To fulfill this goal, CI maps, derived from 31 annual Landsat images, were analyzed by applying advanced statistical and machine learning algorithms. A comprehensive overview of biocrust spatiotemporal patterns was achieved using an integrative approach, including a long-term analysis, using the Mann-Kendall (MK) statistical test, and a short-term analysis, using a rolling MK with a window size of five years. Additionally, temporal clustering, using the partition around medoids (PAM) algorithm, was applied to model the spatial multi-annual dynamics of the CI. A Granger causality test was then applied to quantify the relations between CI dynamics and precipitation. The findings show that 88.7% of pixels experienced a significant negative change, and only 0.5% experienced a significant positive change. A strong association was found in temporal trends among all clusters (0.67 ≤ r ≤ 0.8), signifying a regional effect due to precipitation levels (p < 0.05 for most clusters). The biocrust dynamics were also locally affected by anthropogenic factors (0.58 < CI < 0.64 and 0.64 < CI < 0.71 for strongly and weakly affected regions, respectively). A spatiotemporal analysis of a series of spaceborne images may improve conservation management by evaluating biocrust development in drylands. The suggested framework may also assist in various disciplines related to quantifying spatial and temporal trends.
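A sketch of the trend-analysis step follows, assuming a per-pixel annual Crust Index series: a Mann-Kendall test implemented with numpy/scipy (without tie correction), applied both to the full record and in a rolling five-year window. It mirrors the described approach only in outline, not the study's full framework, and the CI values are invented.

```python
# Mann-Kendall trend test and rolling five-year version for an annual CI series.
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Return the MK Z statistic and two-sided p-value for a 1-D series (no ties)."""
    x = np.asarray(series, dtype=float)
    n = x.size
    s = np.sum([np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n)])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return z, 2 * (1 - norm.cdf(abs(z)))

def rolling_mk(series, window=5):
    """Rolling Mann-Kendall Z over a sliding window (e.g. five annual images)."""
    x = np.asarray(series, dtype=float)
    return [mann_kendall(x[i:i + window])[0] for i in range(x.size - window + 1)]

ci = np.array([0.71, 0.70, 0.69, 0.70, 0.68, 0.66, 0.67, 0.64, 0.63, 0.62])  # example
z_full, p_full = mann_kendall(ci)
z_rolling = rolling_mk(ci, window=5)
```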
Soil organic matter is essential for preserving and maintaining a range of soil and ecosystem functions, as well as sequestering carbon and contributing to climate change mitigation. Policy-makers are increasingly aware that reliable and accurate soil monitoring information is needed to support sustainable development in the context of the Sustainable Development Goals (SDGs) and relevant EU policies (e.g., the CAP and the Green Deal). In this framework, the ESA WORLDSOILS project aims to develop an operational system to provide annual estimations of topsoil organic carbon (SOC) at continental to global scales by exploiting Earth observation satellite data and leveraging large soil spectral libraries in modelling techniques to improve the spatial resolution and accuracy of SOC predictions.
Previous studies have shown that SOC can be accurately estimated using spectral models based on reflectance spectra of bare soils acquired in the laboratory after sieving and drying. However, variations in surface moisture, roughness and vegetation cover have a strong impact on the quality and accuracy of soil property prediction from remote sensing data. Depending on the fragmentation of the landscape, there is likely to be a mixture of soil and vegetation in the sensor’s field of view due to trees, crops or post-harvest crop residue. In addition, soils may contain excess water or residual moisture after rainfall events. To account for these disturbing effects and improve estimates of surface organic carbon content, two options can be considered: (1) composite bare soil images can be generated from time-series of remotely sensed images and a set of filtering criteria rules, or (2) correction techniques can be applied to account for disturbance effects on the spectral signal.
Here, Spatially Upscaled Soil Spectral Libraries (SUSSL) are built to simulate the described disturbance effects, generate landscape-like reflectance signals and assess the impact on SOC prediction performance. Specifically, the results of spectral models such as HySimCar (ray-tracing for soil/vegetation canopy reflectance), MARMIT (soil moisture modelling), and MODTRAN (atmospheric scattering of shaded areas) are combined in a linear mixing to simulate the disturbed soil reflectance signal. The composed SUSSL is spectrally resampled to major hyperspectral and multispectral sensors of interest (Sentinel-2 MSI, Landsat 8 OLI, EnMAP, CHIME), and the impact of the disturbance effects on the accuracy of SOC prediction is estimated in three test cases: (1) the baseline bare soil scenario with 158 highly variable soil spectra from the LUCAS 2009 survey, (2) the unfiltered SUSSL including 150 mixed disturbance scenarios for each LUCAS soil spectrum, taking into account variable amount and type of vegetation cover, variable soil moisture content and variable amount of shaded soil, and (3) the filtered SUSSL after application of a disaggregation approach. Preliminary results indicate that the spectral disturbance effects compiled in the SUSSL have a strong influence on the prediction accuracy of SOC models, resulting in a noticeable loss in accuracy, e.g. an increase in RMSE of 11.8 g/kg (88%) for CHIME and 15.7 g/kg (64%) for Sentinel-2 MSI, compared with the SOC accuracy based on the bare soil spectra. Furthermore, the results show that the application of strict filtering criteria using spectral indices can considerably improve SOC modelling performance, especially for multispectral sensors, whereas hyperspectral sensors provide higher baseline accuracies even for disturbed soil cases.
Basic soil physical properties (i.e., soil texture and organic matter) and associated soil hydraulic properties (i.e., soil water retention curve and hydraulic conductivity) play an essential role in land surface models (LSMs) for soil moisture estimation. Because soil texture, moisture and temperature are physically linked to the soil dielectric constant, soil physical properties can be retrieved at the spatial scale by coupling an LSM with a microwave radiative transfer model (RTM) in a data assimilation system. To investigate whether the assimilation of Soil Moisture Active Passive (SMAP) brightness temperature T_B^p (p = horizontal or vertical polarization) improves estimates of soil properties and their vertical descriptions, as well as of land surface states and heat fluxes, this paper couples an enhanced physically-based discrete scattering-emission model with the Community Land Model v4.5 (CLM) and adopts the local ensemble transform Kalman filter (LETKF) algorithm for the retrieval, assisted by the in situ measurements at the Maqu site on the eastern Tibetan Plateau. The impact of different polarization configurations on the retrieval is also investigated. The results indicate improved estimates of the soil properties of the topmost layer compared to measurements, as well as of the profile using the retrieved top-layer soil properties and a prior depth ratio. The use of T_B^H and T_B^V exhibits varied sensitivities to the retrieval of different soil compositions (i.e., sand and clay) and soil moisture estimates. However, the analyses reveal that retrieved soil properties with a high accuracy alone are inadequate to improve soil moisture estimates compared to observations. The uncertainties in the CLM model structure, such as the fixed pedotransfer functions (PTFs), the hydraulic function describing the soil water retention curve and the water stress function determining root water uptake, should be considered instead.
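To illustrate the assimilation idea, a generic ensemble transform Kalman filter analysis step is sketched below; the study uses a LETKF, which adds the spatial localisation omitted here, and the state vector, observation operator and numbers are placeholders rather than the CLM/RTM configuration.

```python
# Generic ETKF analysis step for a small state vector and one observation.
import numpy as np

def etkf_analysis(X, y, H, R, inflation=1.05):
    """X: (n_state, n_ens) forecast ensemble; y: obs vector; H: linear obs operator."""
    n_ens = X.shape[1]
    x_mean = X.mean(axis=1, keepdims=True)
    Xp = (X - x_mean) * inflation                  # inflated forecast perturbations
    Yp = H @ Xp                                    # perturbations in observation space
    d = y.reshape(-1, 1) - H @ x_mean              # innovation
    Rinv = np.linalg.inv(R)
    # Analysis error covariance in ensemble space
    Pa = np.linalg.inv((n_ens - 1) * np.eye(n_ens) + Yp.T @ Rinv @ Yp)
    w_mean = Pa @ Yp.T @ Rinv @ d                  # mean update weights
    evals, evecs = np.linalg.eigh((n_ens - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ evecs.T  # symmetric sqrt
    return x_mean + Xp @ (w_mean + W)              # analysis ensemble

# Placeholder example: state = (sand %, clay %, soil moisture), 20 members,
# one brightness-temperature-like observation in arbitrary units.
rng = np.random.default_rng(3)
X_f = np.array([[40.0], [20.0], [0.25]]) + rng.normal(0, [[5.0], [3.0], [0.05]], (3, 20))
H = np.array([[0.5, 0.3, -80.0]])                  # toy linearised observation operator
y_obs = np.array([7.0])                            # toy observation
R = np.array([[0.5]])
X_a = etkf_analysis(X_f, y_obs, H, R)
```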
SOC prediction from remote sensing is often hindered by disturbing factors at the soil surface, such as photosynthetically active and non-photosynthetic vegetation, variation in soil moisture or surface roughness. The removal of photosynthetically active vegetation is routinely done through an NDVI threshold. Spectral indices able to deal with the other disturbing factors use shortwave infrared (SWIR) wavelengths. Unfortunately, the current generation of satellites only has two broad bands in the SWIR, which are not sensitive enough to detect moisture and residue signals. With the increasing amount of freely available satellite data, recent studies have focused on stabilizing the soil reflectance by building reflectance composites from time series of images. It is however unknown whether the resulting composite spectra mirror the reflectance fingerprint of the optimal conditions for predicting topsoil properties (i.e. a smooth, dry and bare soil).
We have collected 342 photos of soil surfaces in the Belgian loam belt. The photos were taken during the months with a maximum extent of bare croplands, when fields are prepared for seeding, i.e. October 2019 (for the winter cereals) and April 2021 (for the summer crops). Four main classes of surface conditions were distinguished: smooth seeded soils, soil crusts, moist soils and soils covered by crop residues. Reflectance spectra were then extracted from the Sentinel-2 images coinciding with the dates of the photos. The Normalized Burn Ratio (NBR2) was calculated to characterize the soil surface, and a threshold of NBR2 < 0.05 was found to be able to separate wet soils and soils covered by crop residues from dry bare soils. Additionally, we found that normalizing the spectra (i.e. dividing each spectral band reflectance by the mean of all spectral bands) allows cancelling the albedo shift found between soil crusts and smooth soils in seed-bed conditions. We then built the exposed soil composite from Sentinel-2 imagery (covering the spring periods of 2016-2021) and found that for NBR2 < 0.05 the composite spectra are similar to those of soils in seed-bed conditions.
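The two preprocessing steps described above can be sketched as follows, assuming Sentinel-2 L2A band reflectances: the NBR2 index from the two SWIR bands (B11, B12) used to screen wet or residue-covered soils, and the normalisation of each spectrum by its mean reflectance to remove the albedo shift between crusted and seed-bed soils. The example reflectance values are invented.

```python
# NBR2 screening and albedo normalisation for a Sentinel-2 soil spectrum.
import numpy as np

def nbr2(b11, b12):
    """Normalized Burn Ratio 2 from the Sentinel-2 SWIR bands."""
    return (b11 - b12) / (b11 + b12)

def normalise_spectrum(bands):
    """Divide each band reflectance by the mean of all bands (albedo normalisation)."""
    bands = np.asarray(bands, dtype=float)
    return bands / bands.mean()

# Example pixel spectrum (B2, B3, B4, B5, B6, B7, B8, B8A, B11, B12 reflectances)
pixel = np.array([0.08, 0.11, 0.15, 0.18, 0.21, 0.23, 0.24, 0.25, 0.35, 0.30])
is_dry_bare_soil = nbr2(pixel[-2], pixel[-1]) < 0.05   # threshold reported in this study
normalised = normalise_spectrum(pixel)
```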
The exposed soil composites with the NBR2 < 0.05 threshold and normalized spectra were used to predict SOC content by means of a Partial Least Squares Regression (PLSR) model with 10-fold cross-validation. We used Sentinel-2 tiles T31UFR and T31UFS, covering the Belgian loam belt, and T31UFU, covering a large part of the Netherlands including the Province of Flevoland and Wieringermeer. These tiles were selected because of their large extent of croplands. In total, 124 georeferenced samples were used to calibrate the model (73 in the T31UFU tile (average SOC 10.7 g C/kg) and 51 in the T32UFS tile (average SOC 16.7 g C/kg)). The uncertainty of the models (expressed as q0.05+q0.95/q0.50) was assessed via a bootstrapping technique, where each model was repeated 100 times with a slightly different calibration dataset. The cross-validation of the model gave satisfactory results (R² = 0.49, RMSE = 3.3 g C/kg and RPD = 1.41). The resulting SOC prediction maps show that (1) the uncertainty of prediction decreases when the number of observations per pixel increases, and reaches a minimum when more than six observations per pixel are used (the median uncertainty of all pixels is 26% of the predicted SOC value), and (2) the uncertainty of prediction diminishes if SOC predictions are aggregated per field (the median uncertainty of fields is 21% of the predicted value). Overall, this compositing method allowed mapping of 65% of the total cropland area and shows both realistic within-field SOC patterns and regional patterns corresponding to spatial patterns in SOC reported in the literature. The first results of a validation against an independent dataset show that the measured SOC content falls within the uncertainty range of the predictions for 61% of the sample points.
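A hedged sketch of this modelling step with scikit-learn follows: a PLS regression on composite spectra, evaluated with 10-fold cross-validation (R2, RMSE, RPD) and a small bootstrap to express prediction uncertainty. The spectra and SOC values below are synthetic placeholders, not the calibration samples used in the study, and the number of PLS components is an assumption.

```python
# PLSR with 10-fold cross-validation and bootstrap uncertainty for SOC prediction.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(7)
n_samples, n_bands = 124, 10
X = rng.normal(0.2, 0.05, (n_samples, n_bands))          # composite spectra (placeholder)
soc = 12.0 + 30.0 * X[:, -2] - 20.0 * X[:, -1] + rng.normal(0, 2.0, n_samples)  # g C/kg

pls = PLSRegression(n_components=5)
soc_cv = cross_val_predict(pls, X, soc, cv=10).ravel()
rmse = np.sqrt(mean_squared_error(soc, soc_cv))
r2 = r2_score(soc, soc_cv)
rpd = soc.std(ddof=1) / rmse
print(f"R2={r2:.2f} RMSE={rmse:.2f} g C/kg RPD={rpd:.2f}")

# Bootstrap uncertainty: refit on resampled calibration sets and collect predictions
boot_preds = []
for _ in range(100):
    idx = rng.integers(0, n_samples, n_samples)
    boot_preds.append(PLSRegression(n_components=5).fit(X[idx], soc[idx]).predict(X).ravel())
q05, q50, q95 = np.percentile(np.array(boot_preds), [5, 50, 95], axis=0)
```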
The recently launched imaging spectroscopy satellite mission PRISMA marks an evolution in Earth observation and opens new prospects for advancing hyperspectral remote sensing scientific data development in soil applications. The high spectral resolution of the PRISMA mission in the visible-near infrared and shortwave infrared regions is expected to improve the accuracy of topsoil property retrieval from remote sensing.
The objective of this study was to evaluate the capability of the PRISMA hyperspectral imager to estimate topsoil properties (organic carbon, clay, sand and silt) in comparison with multispectral imagery. To this aim, a test was carried out using topsoil data collected in Central and Southern Italy following two approaches. Firstly, simulated PRISMA, Sentinel-2 (S-2) and Landsat 8 (L8) spectral datasets were obtained from spectral resampling of a laboratory soil library, thus without disturbing factors affecting the satellite signal. Subsequently, bare soil reflectance data were obtained over the experimental areas using real PRISMA, S-2 and L8 images. Imagery from the three different space missions was acquired at dates close to each other. Selective indices and thresholds were tested to detect and remove green and non-photosynthetic vegetation. Estimation models of the soil properties were calibrated employing the following algorithms: partial least squares regression, cubist regression, and random forests. The prediction accuracy of the soil property estimation was assessed using k-fold cross-validation.
The results of the study revealed that the comparison between hyperspectral sensor data (PRISMA) and multispectral imagers (S2 and L8) was in favor of PRISMA, both for laboratory and real data. Indeed, the simulated resampled spectra of the hyperspectral imager provided the best Ratio of the Performance to Deviation (RPD) and R2 for clay (respectively 4.12 and 0.92), sand (3.58 and 0.93), and organic carbon (1.81 and 0.77) estimation, for the spectral soil library datasets. For the bare soil reflectance obtained from real satellites imagery, a higher level of prediction accuracy was again obtained from PRISMA data, with RPD and R2 values of respectively 1.94 and 0.72 for clay, 2.57 and 0.85 for silt, and 2.43 and 0.85 for organic carbon. The statistical accuracy in the retrieval of soil organic carbon from real and resampled PRISMA data (respectively RMSE = 0.17 and 0.23; RPD = 2.43 and 1.81) revealed the capability of the real hyperspectral imager data. The results supported the expected good capability of the PRISMA hyperspectral imager for topsoil properties estimation.
The soil map of the German federal state of Baden-Wuerttemberg (scale approx. 1:50 000) is based on various conventional data sources, such as soil surveys, forest site maps, geological data, and digital elevation models with derived morphological parameters. Since the launch of satellite sensors offering sufficient resolution in the temporal, spatial and radiometric domains, new technology is at hand that provides the opportunity to map soil surface properties as efficient, low-cost supplementary input data. This essentially helps to keep soil maps up to date and to support a comprehensive understanding of soil properties.
The Sentinel-2 Copernicus mission generates freely available remote sensing data. Thanks to this easier data access, satellite data are increasingly implemented in research studies and application workflows. The Sentinel-2 sensor covers a spectral range between the visible and the shortwave infrared with 13 bands. It offers a best-case pixel size of ten meters and a revisit frequency of five days. Although Sentinel-2 images have complemented the derivation of soil properties since 2015, the direct mapping of individual soil parameters is still rarely addressed. Hence, the aim of the project is to develop a mapping procedure for the key soil parameters organic carbon and clay content on fallow cropland in Baden-Wuerttemberg, with a special focus on analyzing spectral soil signatures.
The soil parameters are derived from their spectral characteristics in the Sentinel-2 spectrum. A broad collection of reference datasets is available, comprising, for example, the Land Use/Cover Area frame statistical Survey (LUCAS) and various in-situ datasets. Altogether, the algorithm is trained on a set of more than 800 reference points with measured soil organic carbon and soil clay content. For the pre-processing of the Sentinel-2 images, the new broadband spectral angle index (BAI) is implemented, which is also examined in ESA's WorldSoil Experts Workshop. Based on the spectral signatures of the parameters at the reference points, bare soil pixels of the whole state of Baden-Wuerttemberg are modelled with respect to their soil organic carbon and clay contents using regression analysis. In addition to in-situ datasets, we recorded drone-based hyperspectral information of the soil at selected sites in Baden-Wuerttemberg, which is used for validation purposes.
This work is embedded in the Copernicus implementation project “BopaBW – near-surface soil parameters Baden-Wuerttemberg”, aiming to establish remote sensing data as an additional source to improve official soil maps both in terms of resolution and validity. Moreover, it can be used as a variable for digital soil mapping projects.
Precision Agriculture (PA) applied on a widespread basis can be a building block for reduced ecosystem degradation without compromising food security. PA is a management strategy that gathers, processes and analyzes temporal, spatial and individual data and combines it with other information to support management decisions according to estimated variability for improved resource use efficiency, productivity, quality, profitability and sustainability of agricultural production (ispa.org 2021). One problem farmers face in implementing PA applications is the lack of high-spatial-resolution soil information. Consequently, agricultural management that is not adapted to site-specific variable conditions, such as soil properties, can lead to harmful emissions into the surrounding ecosystems while at the same time failing to achieve maximum yield.
The EU-funded research project ‘pH-BB: Precision liming in Brandenburg’ aims at developing innovative nutrient management strategies based on proximal soil sensing data. In the project, an 800-hectare farm in Brandenburg close to Frankfurt (Oder) was intensively monitored with geophysical on-the-go sensors such as the “Geophilus” system. The system measured the soil's apparent electrical resistivity and the soil's natural gamma activity. Together with 344 reference soil samples taken at a depth of 0-30 cm, high-resolution soil texture maps with a pixel resolution of 2 m × 2 m were produced.
For this study, the pH-BB project provided us with the data of the 344 soil samples, which were analyzed in the laboratory for the texture fractions clay, silt, and sand according to DIN ISO 11277:2002-08. We used Google Earth Engine (GEE) to process 1474 Sentinel-1 (S1) SAR scenes available over the study site during the period 2016-03-01 to 2021-11-09. To derive long-term persistent characteristics of the backscatter patterns, independent of vegetation properties, the S1 data collection was used to calculate two gridded data sets: the coefficient of variation “vv_CV” and the maximum backscatter “vv_max” along the temporal domain. The reference samples were randomly split into a training data set (N = 241, i.e. 70%) and a validation data set (N = 103, i.e. 30%). At the sampling locations of the training data set, the values of “vv_CV” and “vv_max” were extracted.
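A sketch of how the two gridded Sentinel-1 metrics described above could be derived with the Earth Engine Python API is given below; the area of interest, the handling of the backscatter scale and the omission of orbit/terrain filtering are simplifying assumptions.

```python
# Sketch of deriving "vv_CV" and "vv_max" grids from the Sentinel-1 archive
# with the Earth Engine Python API. The AOI and date range are illustrative;
# note that S1_GRD values are in dB, so converting to linear power before
# computing statistics would be a possible refinement.
import ee
ee.Initialize()

region = ee.Geometry.Rectangle([14.4, 52.2, 14.7, 52.4])  # hypothetical AOI

s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
      .filterBounds(region)
      .filterDate("2016-03-01", "2021-11-09")
      .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "VV"))
      .select("VV"))

mean = s1.mean()
std = s1.reduce(ee.Reducer.stdDev())
vv_cv = std.divide(mean).rename("vv_CV")   # coefficient of variation
vv_max = s1.max().rename("vv_max")         # maximum backscatter

# Values at the training points would then be extracted, e.g. with sampleRegions().
```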
We applied a random forest machine learning algorithm to the training dataset to train three separate models for the target variables clay, silt, and sand, using "vv_CV" and "vv_max" as covariables. The developed models were then applied to the gridded datasets to calculate final maps of the target variables clay, silt, and sand.
Comparison of the prediction results with the validation data set showed that the spatial distribution of the clay and silt fractions could be predicted with a root mean square error (RMSE) of 7 mass-%, and that of the sand fraction with an RMSE of 12 mass-%. A classification of the residual errors according to the German KA5 scheme showed that the prediction errors were lower especially in the sand-dominated soil classes, whereas they increased in the loamy soil classes (depending on the clay content).
With a performance of 7-12 mass-%, the approach shows good potential for surface soil texture assessment at high resolution, for global applications, or as a first guess for high-resolution soil monitoring with devices such as the Geophilus system.
Large-scale information on soil parameters such as soil organic carbon (SOC) content in cropland soils is crucial for monitoring long-term changes related to soil health and climate change. Additionally, there is high interest in area-wide knowledge of SOC contents in agricultural soils at field scale for food security reasons. Currently, large-scale SOC maps are mostly available at a spatial resolution of 250 m to 1 km, but there is a growing demand for high-resolution data.
Here we aim to 1) predict SOC contents in croplands of Bavaria and test the spatial transferability of the prediction model to entire Germany, and 2) evaluate the increase in model performance using additional digital soil modelling techniques.
Due to permanent or temporary vegetation cover, naturally exposed soils occur rarely. The Soil Composite Mapping Processor (SCMaP) has been shown to be efficient in retrieving exposed soils for an area-wide mapping approach. Thanks to the high spatial resolution of the Landsat EO images, it is possible to detect spatial patterns of SOC contents within fields. SCMaP is a fully automated operational technique of per-pixel bare soil compositing to overcome the limitation of soil exposure. The approach allows the generation of cloud-free soil reflectance composites (SRC) for individually determined time periods. The SRC contains exposed soil areas averaged over several years and is used as the EO database to estimate SOC contents in the topsoil of croplands. We applied SCMaP to the Landsat collection data between 1984 and 2014 for all available images covering Germany. All images were processed with the same pre-processing techniques and atmospheric correction algorithms. Due to the issue of combining EO pixels (with 30 m resolution) with point information for modeling purposes, we developed a spatial/spectral pre-processing technique to filter the database. The main purpose of the filtering was to avoid a misalignment between the soil database and the spectral pixel information, as some samples were taken close to the field borders. For the SOC modeling of cropland topsoil, the SRC spectral information is correlated with point soil data from different sources: local and national authorities and the LUCAS (Land Use/Cover Area frame statistical Survey) database.
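The per-pixel compositing idea behind the SRC can be sketched as follows; the bare-soil flag used here (a simple NDVI window) and its thresholds are placeholders, not the operational SCMaP index.

```python
# Simplified per-pixel bare-soil compositing in the spirit of SCMaP:
# average each pixel only over the acquisitions in which it is flagged as
# exposed soil. The NDVI window below is a placeholder for the operational
# SCMaP index and thresholds.
import numpy as np

def soil_reflectance_composite(red, nir, reflectance, ndvi_range=(0.05, 0.25)):
    """red, nir, reflectance: arrays of shape (time, rows, cols).
    Returns the temporal mean reflectance over bare-soil observations and
    the number of bare-soil observations per pixel."""
    ndvi = (nir - red) / (nir + red + 1e-6)
    bare = (ndvi > ndvi_range[0]) & (ndvi < ndvi_range[1])
    masked = np.where(bare, reflectance, np.nan)
    with np.errstate(invalid="ignore"):
        composite = np.nanmean(masked, axis=0)   # SRC-like product
    count = bare.sum(axis=0)                     # observations per pixel
    return composite, count
```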
We correlated the SRC spectral information with the available point soil data using the Random Forest machine learning algorithm, based on various data set-ups for the federal state of Bavaria and adjacent areas (about 130,000 km²). Additionally, we investigated the influence of spectral indices on the modeling framework, which increased the model capabilities (R² = 0.67, RMSE = 1.24%, RPD = 1.77). We then tested the spatial transferability of the locally trained and validated model to the whole of Germany and additionally developed a model using further soil point data from other parts of Germany in order to compare the performance of local and nationwide models. As expected, the best performance was obtained for the regional models.
Multi-temporal satellite imagery such as the SRC allows an expansion of the data space of explanatory variables for digital soil mapping, which is often based on terrain attributes. We therefore investigated, as an example, the influence of multi-scale terrain attributes for a representative sub-area within Bavaria. The modeling results revealed scale-specific differences. Compared to terrain attributes, the use of SRC parameters led to a significant model improvement at field-related scale levels. The joint use of both terrain attributes and SRC parameters resulted in further slight model improvements.
Based on these findings, the SCMaP SRC is a promising approach for mapping spatial SOC contents in croplands over an entire region and is able to show meaningful patterns at field scale.
Agriculture can make a considerable contribution to reducing the CO2 content of the atmosphere by binding CO2 through crop production methods and storing it in the soil via the targeted build-up of humus. For carbon farming, it is important to know how much carbon is stored in the soil. This carbon is stored in the form of organic material, or humus. Soil Organic Carbon (SOC) is defined as the carbon component of the organic compounds in soil and is therefore measurable.
Deriving Soil Organic Carbon content from optical EO data offers the unique opportunity to track carbon storage in the soil over wide areas in a spatially continuous way and with regular updates. This allows annual monitoring to see whether carbon conservation efforts led to an increase in stored organic material or whether, on the other hand, depletion of organic material took place. This information can be used directly by farmers to keep their soil fertility at an optimum or to increase carbon storage in the soil with specific measures if there is still room for improvement. Unlike other methods, which are based on soil sampling and deliver only an average organic carbon content value per field, the information from EO is available for every point in the field and allows targeted measures. Thus, this is one step in a bigger system of deriving best-practice solutions, concerning e.g. soil management, seeds, crop rotation and irrigation, that bind carbon from the atmosphere into the soil, help combat climate change, and lead towards the most resilient and climate-neutral future farming.
Within this project, Soil Organic Carbon content is calculated from Copernicus EO data, more specifically Copernicus Sentinel-2 imagery, using VISTA’s pre-processing chains including an atmospheric correction, the filtering of bare soils as well as an algorithm called “HAI” developed by VISTA for the SOC derivation.
For this application, it is necessary that the soils are bare at some point in the crop management cycle. This means that they also need to be free of stubble, which at the relevant spatial resolution (10 m) cannot easily be unmixed from the signal of the soil itself. All standard best-practice farming systems in which tillage and other soil cultivation methods play a role can be covered, because these aim to create a unified upper soil layer (‘tillage horizon’) in which the soil is mixed so well that its properties are similar throughout this roughly 40 cm deep layer. This means that even though only the top of the soil layer can be seen with optical EO data, this top layer has the same characteristics as the most important soil horizon. However, the challenge is to distinguish between areas covered by vegetation and those that are not, especially separating areas with harvest residues from those without. The simple use of commonly applied vegetation indices such as the NDVI alone cannot serve the purpose of selecting completely bare soils, because both cases can result in similar NDVI value ranges. Thus, VISTA applies the integrated soil-leaf-canopy reflectance model SLC for the analysis of the obtained reflectance spectra. The input parameters to SLC comprise structural and physiological information on the vegetation, soil optical properties and the observation geometry. A non-Lambertian soil BRDF sub-model for the soil reflectance and its variation with moisture is incorporated in SLC (Verhoef & Bach 2007).
The algorithm “HAI” is then applied to the spectral reflectance of all pixels that show bare soil, are not covered e.g. by clouds, and fulfil a defined level of spatial homogeneity. The dataset used for calibration contains more than 500 fields with about 11,000 georeferenced POIs with laboratory measurements of humus content, distributed over a 60 x 30 km area. The absorption quantities of organic material are then related to in-situ measurements of SOC. Calculated HAIs are thus calibrated against state-of-the-art soil probing techniques with one half of the dataset and validated with the other half. The algorithm achieves correlations (R²) higher than 0.9 for both calibration and validation. More importantly, however, the preliminary average absolute error amounts to only 0.07% SOC. This is nearly as accurate as scientific laboratory tests (0.041-0.065% SOC) (Nätscher et al), and more accurate than the increase in SOC content (0.086%) achievable with optimum regenerative farming measures over a period of several years, as stated by carbon certifying companies.
Since each agricultural field shows bare soil at a different time of the year, continuous SOC maps with high accuracy are created by using temporal fusion methods for combining time-series of Sentinel-2 observations.
The technical requirement for this service to be successful is that the error margin of the EO-based assessment needs to be on par with laboratory measurements and smaller than the margin of possible change in soil carbon content when specific cultivation measures to increase humus are taken. This is a requirement that even in-situ measurements sometimes have a hard time fulfilling, but here the EO analysis has the advantage of big data over sparse in-situ measurements. Combining EO and in-situ data through data fusion is the key success factor.
This study is funded by ESA under Contract No. 4000134139/21/NL/MM/mr.
Conventional high-detail soil maps are static and often based on data that is obsolete by the time of use. In the framework of the European Joint Programme (EJP) SOIL, the EJP-STEROPES project, initiated in February 2021, gathers 14 countries (https://ejpsoil.eu/soil-research/steropes/). As multispectral satellite series such as the Sentinel-2 time series are now freely available with a weekly frequency, STEROPES aims to assess the potential of such satellite data to predict cropland soil organic carbon content over various pedoclimatic conditions and cropping systems across Europe. While encouraging performances were recently obtained from Sentinel-2 (S2) for temperate soils of Europe and annual crop systems, little is known about the S2 capabilities for many other soil types and agroecosystems across Europe. Therefore, the focus lies on detailed mapping that could serve as key information in decision making for farmers, governmental institutions and agricultural advisers, or other stakeholders involved in land planning.
Several datasets have been collected focusing on small regions of some hundreds of km² or on detailed scales of farms or catchments of some km², for which soil organic carbon samples were already available with an areal density higher than 1-3 samples/km². Spectral models were constructed from the reflectance image spectra of optical satellite series, using several commonly used algorithms including partial least squares regression (PLSR), support vector machine regression (SVM) and random forest (RF).
Overall, encouraging performances have been obtained (RPIQ > 1.7), but they vary in time and geographical space according to several factors, especially soil moisture, texture, dry vegetation due to management practices, and salinity. The following stages of the STEROPES project include the analysis of each of these disturbing or influencing factors and of their joint effect.
The project is closely linked with the achievement of task 6.4 within WP6 in the EJP Soil, which aims at developing methods for accounting, monitoring and mapping agricultural soil carbon, fertility and degradation changes, with particular focus on using innovative inventory techniques such as proximal sensing integrated with current and upcoming satellite products.
Wildfires constitute one of the major environmental problems, causing soil degradation and desertification by destroying plant material and the litter layer. The severity and duration of a wildfire play a leading role in forest management (Karamesouti et al., 2015). Because wildfires irreversibly damage ecosystems through their dramatic impact on soil vulnerability to flooding and erosion events, it is vital to study them. Soil erosion is a complex environmental process that depends on terrain and surface characteristics (e.g., slope, vegetation) as well as soil properties, and it significantly changes natural soil function, since it detaches soil particles from one area and accumulates them in another. According to Montgomery (2007), the detrimental impact of soil erosion on agroforestry areas was already recognized in the era of Plato and Aristotle, while many current studies ascribe bare rocky slopes to past soil erosion. Natural processes, such as rainfall, wind, and particularly wildfires, as well as anthropogenic ones, such as urban expansion and uncontrolled land cultivation, are considered to lead to soil loss. Thus, geospatial methods addressing the affected parameters need to be developed in order to analyze fire effects on soil properties spatially.
This study provides a framework for obtaining useful information on the effect of wildfire incidents on soil characteristics in a local-scale area using Geoinformation technology, which provides a very useful tool for spatial pattern analysis and data visualization. Specifically, this study aims at assessing the post-catastrophic effects of a wildfire, with emphasis both on the severity of the fire event with regard to the affected ecosystem and on its impact on soil properties. In particular, the wildfire that broke out in 2014 in a semi-mountainous area of Central Greece, in Malesina, burning about 30 km2 of agroforestry area, was used as a case study. The main goal of the analysis is to visualize the soil loss rates due to erosion processes following the wildfire. In particular, advanced erosion modeling techniques have been applied to raster and vector data in order to develop maps that allow observation of the spatial variability in the study area. The analysis of the data covers three time periods, 2013, 2015, and 2020, each divided into two seasonal periods (winter and summer).
In order to gain a better insight into the severity of the fire incident, Earth Observation satellite images were acquired so as to accurately locate the burned area through the production of the Burn Severity index. More specifically, two Landsat-8 products, before and after the fire, acquired from the United States Geological Survey (USGS), were processed and analyzed using the Normalized Burn Ratio (NBR), which identifies burned areas thanks to its combination of the near-infrared (NIR) and shortwave-infrared (SWIR) spectral bands. Furthermore, the pre-fire and post-fire NBR products were combined into the differenced NBR (dNBR) in order to classify the severity of the fire according to the Burn Severity index classes.
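The NBR and dNBR computation described above follows a standard formulation, sketched below; the band arrays and the severity class breaks are illustrative assumptions rather than the exact thresholds used in the study.

```python
# NBR and differenced NBR (dNBR) from pre- and post-fire NIR/SWIR reflectance,
# followed by a coarse severity classification. Input files and the class
# breaks (USGS-style thresholds) are illustrative assumptions.
import numpy as np

nir_pre, swir_pre = np.load("nir_pre.npy"), np.load("swir_pre.npy")     # hypothetical
nir_post, swir_post = np.load("nir_post.npy"), np.load("swir_post.npy") # hypothetical

def nbr(nir, swir):
    return (nir - swir) / (nir + swir + 1e-6)

dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Example severity breaks; 0: unburned/low, 1: low, 2: moderate-low,
# 3: moderate-high, 4: high severity
severity = np.digitize(dnbr, bins=[0.1, 0.27, 0.44, 0.66])
```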
For the estimation of soil loss rates after the wildfire, advanced soil erosion modeling was carried out through the application of the Revised Universal Soil Loss Equation (RUSLE), which is based on climatic, land cover, soil type and topographic data, as well as on Earth Observation (EO) data. In this context, rainfall data were obtained from the Meteo website of the National Observatory of Athens (NOA) for five meteorological stations surrounding the study area, while soil erodibility data were acquired from the European Soil Data Centre (ESDAC). In particular, spatial interpolation methods were applied to these point datasets for the rainfall erosivity and soil erodibility factors, so as to allow their spatial analysis. Additionally, land cover data were derived from the Corine database. The topography of the study area was based on the Hellenic Cadastre’s Digital Elevation Model (DEM). Furthermore, additional EO Sentinel-2 products were obtained through the European Union Copernicus Programme.
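The RUSLE combination of factor grids reduces to a per-pixel product, as sketched below under the assumption that all factor rasters have been prepared and co-registered; the file names and units are illustrative.

```python
# RUSLE soil-loss estimate as applied above: A = R * K * LS * C * P, with
# factor grids assumed to be co-registered rasters prepared from the sources
# listed in the text. File names are hypothetical placeholders.
import numpy as np

R = np.load("rainfall_erosivity.npy")   # MJ mm ha-1 h-1 yr-1 (interpolated stations)
K = np.load("soil_erodibility.npy")     # t ha h ha-1 MJ-1 mm-1 (ESDAC)
LS = np.load("ls_factor.npy")           # slope length/steepness, from the DEM
C = np.load("cover_management.npy")     # from Corine / Sentinel-2
P = np.load("support_practice.npy")     # often set to 1 where unknown

A = R * K * LS * C * P                  # mean annual soil loss, t ha-1 yr-1
```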
The key findings of this study concern the Burn Severity index analysis of the study area, as well as the final RUSLE outputs. Regarding the severity of the fire in the study area, moderate-low severity values affected 18.15 km2, which accounts for about 67% of the total burned area and corresponds mostly to agroforestry areas. Regarding the values obtained from the RUSLE model, maximum soil erosion rates occurred one year after the wildfire broke out, especially in the winter period, when there was a 30% rise. As stated by Lourenço et al. (2012), the soil is most vulnerable to erosion within a year after a fire, which corroborates our findings regarding 2015 as the most erosive year. On the contrary, in 2020, five years after the incident, a reduction of about 60% was observed. In addition, areas especially vulnerable to high levels of soil erosion were identified at locations with direct tributaries to the major streams and in steeply sloping zones. The methodological framework presented underlines the significance of geospatial technology in combination with EO data in assessing the post-fire effects on soil characteristics. Such a method, which is mostly applied over large areas, is also quite useful for smaller ones in the case of wildfires, and could probably be applied in the same way to other natural disasters, such as flood events. Thus, a direct survey with this geospatial technique could indicate the highest-risk areas with respect to post-incident effects. Under this aspect, it may contribute to national treaties and policies related to the Sustainable Development Goals and the European (EU) Strategy for Soil Protection.
Soil moisture describes the amount of water held between soil particles. Monitoring soil moisture evolution is of fundamental importance to a wide range of civil and commercial sectors concerned with weather forecasting, climate dynamics research, runoff potential and flood control, surface susceptibility to erosion, reservoir management, early warning of droughts, irrigation scheduling, and crop yield forecasting. Satellite-based remote sensing acquires measurements from a vantage point in space and therefore provides an economical way to achieve continuous monitoring of soil moisture change at a global scale. To date, quantitative measurements of soil moisture in the surface layer of soil have been most successful using active/passive remote sensing in the microwave region (e.g., the Soil Moisture Active Passive (SMAP) mission). However, retrieval of accurate soil moisture is challenging due to complications caused by surface roughness and vegetation cover.
Active radar instruments such as Synthetic Aperture Radar (SAR) transmit their own energy and collect the energy backscattered after interaction with targets on the ground, including the earth's surface. Since the backscattered signal is sensitive to the water content of the media with which the radar signal interacts, SAR measurements can be used to infer soil moisture. Often, only SAR amplitude is used to estimate soil moisture; e.g., the Copernicus Global Land Service operation uses an amplitude change-detection approach. Recently, Jordan et al. [2020] and Brgi and Rowena [2021] explored the use of InSAR coherence time series to estimate the temporal evolution of soil moisture change. However, the potential of the interferometric phase for estimating soil moisture has not been fully understood.
Repeat-pass interferometric phases are generally dominated by the propagation delay of the microwave signal through the atmosphere. However, such propagation delay contributions, together with other interferometric components such as ground movement, usually cancel out in a closed-loop triplet of three multi-looked interferograms formed from three SAR acquisitions, commonly called the closure phase. The cause of non-zero closure phases remains unclear. While Molan et al. [2020] relate non-zero closure phases to purely statistical properties of SAR measurements, De Zan et al. [2015] consider the non-linear response of the interferometric phase to soil moisture as the source of closure phases.
We demonstrate that heterogeneity within the averaging window of interferometric SAR measurements over distributed scatterers results in non-closing triplets of interferometric phases, which leads to discrepancies between distributed scatterer interferometry (DSI) and persistent scatterer interferometry (PSI). We present a two-layer model that explains the observed closure phases as a result of heterogeneities caused by soil moisture change within the averaging windows. We validate our model with InSAR time-series observations over the Barstow-Bristol trough, California, where the cumulative InSAR time-series bias (i.e., the difference between short temporal baseline DSI and PSI) correlates with cumulative precipitation, indicating that moisture change is the main driver of the closure phase and of the observed InSAR time-series bias. This observation and model provide an opportunity to further explore the potential of interferometric phase closure for soil moisture estimation.
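A minimal numpy sketch of the closure-phase formation discussed above is given below; boxcar multilooking is used as a stand-in for the adaptive averaging, and the window size is an arbitrary choice.

```python
# Closure phase from three co-registered SLC acquisitions: form the three
# multi-looked interferograms circularly and sum their phases. Boxcar
# multilooking is a stand-in for adaptive averaging over homogeneous pixels.
import numpy as np
from scipy.ndimage import uniform_filter

def multilook_ifg(slc_a, slc_b, looks=5):
    """Multi-looked complex interferogram between two SLC images."""
    ifg = slc_a * np.conj(slc_b)
    return uniform_filter(ifg.real, looks) + 1j * uniform_filter(ifg.imag, looks)

def closure_phase(slc1, slc2, slc3, looks=5):
    i12 = multilook_ifg(slc1, slc2, looks)
    i23 = multilook_ifg(slc2, slc3, looks)
    i31 = multilook_ifg(slc3, slc1, looks)
    # Non-zero values indicate phase inconsistency of the triplet
    return np.angle(i12 * i23 * i31)
```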
Global monitoring of soil moisture has been recognized to be essential to improve our understanding of the earth’s water and energy cycle. Soil moisture information is valuable for a wide range of applications. It is a key variable for better weather and climate forecasting and optimized agriculture. In recent years, several active and passive remote sensing techniques have been developed that use the observations from different satellite missions to measure soil moisture dynamics from space. ESA’s Soil Moisture and Ocean Salinity (SMOS) and NASA’s Soil Moisture Active Passive (SMAP) L-band missions were the first dedicated missions for soil moisture monitoring. Both missions have already exceeded their nominal service life, with no follow-up continuity missions planned. There is also a need to improve the data quality and the spatial and temporal resolution of the soil moisture measurements in order to use them operationally in hydrological and agricultural applications.
One of the recently developed techniques for soil moisture measurements is using the reflected signal from Global Navigation Satellite Systems (GNSS). The GNSS-Reflectometry technique, as opposed to conventional backscatter measuring radars and radiometers, makes use of the forward scattering of the transmitted energy, and therefore it is less sensitive to surface roughness and vegetation. The potential of space-borne GNSS-R observations for ocean and land applications has been demonstrated by recent GNSS-R missions, including the Spire’s GNSS-R satellites constellation and the NASA Cyclone Global Navigation Satellite System (CYGNSS).
Spire Global operates a constellation of CubeSats that conduct GNSS-based science and earth observation and currently has four GNSS-R satellites in orbit, which were launched in December 2019 and January 2021, with plans for a full operational constellation of GNSS-R Satellites in the near future. Spire has also developed a change detection algorithm that uses observations from GNSS reflectometry instruments to measure soil moisture dynamics.
In this study, we present the results of a simulation data analysis that was carried out to determine the requirements for an operational soil moisture monitoring system with a constellation of GNSS-R satellites. Furthermore, we will highlight the capabilities of the Spire GNSS-R satellites and the advantage of the synergetic use of Spire's GNSS-R CubeSats and other space-based L-band sensors for an operational monitoring of soil moisture.
Climate change, air pollution, and food and energy resources are major global emergencies that require a deep understanding of their degree of interdependence and the finding of smart, optimal solutions on a global scale. The physico-chemical phenomena produced in the soil, at ground level or in the atmosphere, related to the spatio-temporal evolution of temperature, humidity or certain pollutants that affect the radiative budget and the evolution of ecosystems, require increasingly complex technological approaches, tools and analyses, on the ground but also at very high altitude. Among the known technologies for analyzing the Earth's atmosphere, both passive (solar) and active (laser) optical instruments play a pivotal role in deciphering the mechanisms of the interaction of solar radiation with the Earth: complex optical instruments coupled with ICCD cameras, with high spectral accuracy and short integration times. To answer some of the most critical issues related to Earth observation, namely highly accurate and trusted climate records that help constrain the uncertainties in the predictions of climate forecast models, a new satellite mission (TRUTHS) is being developed within the ESA Earth Watch programme. TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio-Studies) aims to ensure the best-quality solar reflective spectroscopy measurements in the 320 nm – 2400 nm range. Resolving the spectral reflectivity of given cultivated surfaces, of special soil categories, and of the water surface as a function of relief, climatic conditions and especially pollution conditions also requires laboratory research and field campaigns, in order to find correlations and to calibrate and validate the TRUTHS satellite measurements.
Because air pollution plays a key role in this study, especially in terms of its negative influence on the growth of crops that are important biofuel resources (corn, rapeseed, sunflower, hemp, etc.), ground measurements and complementary TRUTHS hyperspectral satellite data will also be needed. Therefore, analysis of air, soil and water quality and of different crops cultivated in agriculture, by spectroscopic means (Fourier Transform Infrared Spectroscopy, FTIR; Laser-Induced Fluorescence, LIF; and Scanning Electron Microscopy coupled with Energy Dispersive X-Ray Spectroscopy, SEM-EDS), will provide information on the environmental physico-chemical factors and their influence on plant growth, on plant viability under certain conditions, and also on changes that could occur in their “bio-design” and on the accumulation of contaminants in their tissues.
Several aspects arise from the response of plants to environmental factors. One aspect relates to degeneration, up to the destruction of agricultural crops or spontaneous plants, in areas exposed to pollutants in significant concentrations. Another aspect refers to the adaptation and survival of plants even under conditions of "poisoning" with chemical compounds from anthropogenic activities, which can further induce toxicity in humans and animals, with the toxic substances from the latter most of the time ending up in humans as well. Plant uptake of contaminants from soil, and even from groundwater and surface water, can be used as an environmental decontamination technology. This is known in agriculture as crop rotation, and plants such as flax, hemp and others have been used for this purpose.
Regarding the aspects listed above, it is important to know exactly which chemical components are present at a given moment in the air, water and soil, as well as the interactions between the various compounds and their effects on the various types of vegetation. This is where the mentioned spectral analyses (FTIR, LIF, SEM-EDS) come in. Fourier Transform Infrared Spectroscopy (FTIR) is a technique designed to investigate functional groups, including entire molecules, based on their absorption spectra in the infrared. Important information can be obtained even for samples containing mixtures of compounds, as is the case for mixtures of pollutants in water, and also for biocomposites. The analysis of functional groups by FTIR absorption spectroscopy is complemented by Laser-Induced Fluorescence spectroscopy (LIF). The LIF technique highlights both specific groups and the reactions that take place between certain compounds, as well as the induction of excimers or mirrorless lasing, which is of special importance in the analysis of biological or biocomposite materials, as evidenced by the laser-induced fluorescence analysis of keratin or turmeric with a 355 nm laser beam. The analysis of the elemental chemical composition, coupled with imaging of the surface morphology and topography of materials from samples taken from air, water, soil or plant material, performed by Scanning Electron Microscopy coupled with Energy Dispersive X-Ray Spectroscopy (SEM-EDS), brings information especially about the distribution of compounds over a given section, and above all about affinities and aggregations of a physico-chemical nature between the various structures.
The method of preparing the samples for analysis by the three techniques mentioned differs depending on their phase state and has been presented by our laboratory team in published papers.
References
1. I. Cocean, A. Cocean, F. Iacomi, S. Gurlui, City water pollution by soot-surface-active agents revealed by FTIR spectroscopy, Applied Surface Science 499 (2020) 142487, https://doi.org/10.1016/j.apsusc.2019.04.179
2. I. Cocean, A. Cocean, C. Postolachi, V. Pohoata, N. Cimpoesu, G. Bulai, F. Iacomi, S. Gurlui, Alpha keratin amino acids behavior under high fluence laser interaction. Medical applications, Applied Surface Science 488 (2019) 418–426, https://doi.org/10.1016/j.apsusc.2019.05.207
3. A. Cocean, I. Cocean, N. Cimpoesu, G. Cocean, R. Cimpoesu, C. Postolachi, V. Popescu and S. Gurlui, Laser Induced Method to Produce Curcuminoid-Silanol Thin Films for Transdermal Patches Using Irradiation of Turmeric Target, Appl. Sci. 2021, 11(9), 4030. https://doi.org/10.3390/app11094030
4. N. Fox et al., Traceable radiometry underpinning terrestrial- and helio-studies (TRUTHS), Advances in Space Research, Volume 32, Issue 11, December 2003, Pages 2253-2261, https://doi.org/10.1016/S0273-1177(03)90551-5
Variations in dielectric properties of the surface affect both the phase and amplitude of Synthetic Aperture Radar (SAR) observations. Soil moisture, which is determined by both bound and free water content in the soil, has a substantial impact on the dielectric properties of bare soil or sparsely vegetated land surfaces. Soil-moisture models for each radar observable were developed using a dielectric mixing model that relates dielectric constants to volumetric soil moisture. Because both amplitude and phase react to the same land-surface property, a joint amplitude-phase soil-moisture model can be developed. This model would potentially be able to provide soil-moisture estimates with high spatial resolution.
To separate geophysical processes, such as soil-moisture variations, from geometrical processes that induce phase variations, the phase closure concept was introduced. A set of three SAR images is interfered circularly to form three multilooked interferograms, and the sum of the estimated expected values of the three interferometric phases is referred to as the closure phase. Because phase closure associated with low coherence is an inherently noisy observable, an adaptive multilooking strategy that reduces noise while preserving the physical meaning of different families of scatterers is required.
The adaptive multilooking involves a non-parametric Anderson-Darling (A-D) test on the amplitude data, supported by land cover maps, and consists of three steps: 1) the A-D test is applied to the data stacks to cluster scatterers into groups within which pixels respect statistical homogeneity; 2) the land cover map is used to check the performance of the A-D test and to label each class with a vegetation type; and 3) the amplitude and closure-phase observables are multilooked within the classified polygons.
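Step 1 of this procedure can be illustrated with scipy's k-sample Anderson-Darling test, as in the sketch below; the significance level and the pairwise formulation are illustrative choices, not the exact clustering rule used in the study.

```python
# Step 1 of the adaptive multilooking: test whether two pixels' amplitude
# time series come from the same distribution with the non-parametric
# k-sample Anderson-Darling test. The significance level is illustrative.
import numpy as np
from scipy.stats import anderson_ksamp

def statistically_homogeneous(amp_a, amp_b, alpha=0.05):
    """amp_a, amp_b: amplitude time series of two pixels (1-D arrays).
    Returns True if the null hypothesis of a common distribution is not
    rejected, i.e. the pixels may belong to the same family of scatterers."""
    result = anderson_ksamp([np.asarray(amp_a), np.asarray(amp_b)])
    return result.significance_level > alpha
```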
The goal of this study is to complete the first step in the development of a joint amplitude-phase soil moisture model using multi-looked SAR datasets and soil moisture measurements. For this purpose, we quantitatively examine the correlation among soil moisture, amplitude and closure-phase observations. The signatures of soil moisture variations in each type of observation are summarized. Furthermore, because currently used models share the same limitations, such as the lack of a vegetation contribution term, the effect of different types of vegetation is inspected. Based on the results, recommendations are made for future research into the development of a joint soil-moisture SAR model.
Assessing the potential of semi-empirical estimates of evapotranspiration from peatlands in Bavaria by means of remote sensing
Peatlands occupy 3% of the earth’s surface but store 16-30% of the soil carbon worldwide; they are thus highly relevant to the mitigation of climate change. Emissions from peatlands strongly depend on the water table. Next to precipitation and groundwater levels, evapotranspiration (ET) is the main control of the water table level. Knowledge of ET from peatlands helps in defining the water balance and hence in managing carbon releases. In the recent past, Bavarian peatlands have received more and more attention, also at political levels, as a change from carbon source to sink is considered a relevant instrument for reaching the ambitious climate change mitigation targets.
In the framework of the KliMoBay project (Climate protection and adaption potential in peatlands in Bavaria), funded by the Bavarian State Ministry of the Environment and Consumer Protection through the European Regional Development Fund, spatially-distributed ET was modeled over the peatland “Schechenfilz” (latitude: 47.80629/ longitude: 11.32758) in the South of Bavaria from 2015 to 2020. Eddy Covariance Tower measurements are available only in some of the Bavarian peatlands. The project however needs spatially explicit ET information for all peatland areas in Bavaria. Therefore, the study uses remote sensing techniques for full spatial coverage.
The following two models were tested and compared: i) the Triangle Method by Jiang and Islam (1999) and ii) the DATTUTDUT (Deriving Atmosphere Turbulent Transport Useful To Dummies Using Temperature) model by Timmermans et al. (2015). Both are based on the land surface energy balance and use land surface temperature (LST) to calculate ET. The DATTUTDUT model only needs LST information for the calculation but can be supported with auxiliary observational data. The Triangle Method typically depends on measured ground data. Here, ground-measured net radiation was used to determine the soil heat flux and the sensible heat flux, and air temperature was used for calculating the latent heat of vaporization. The Triangle Method uses an extension of the Priestley-Taylor equation and a relationship between LST and NDVI. LST and NDVI were calculated from Landsat 8 data.
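The core scaling idea of the DATTUTDUT model can be sketched as follows; the percentile-based selection of the hot and cold extremes and the soil heat flux fraction are illustrative assumptions rather than the exact settings used in this comparison.

```python
# Core of the DATTUTDUT-style approach: scale an evaporative fraction (EF)
# between the coldest (fully evaporating) and hottest (dry) pixels of the LST
# scene, then partition available energy into latent heat. Percentile-based
# extremes and the soil heat flux fraction are illustrative assumptions.
import numpy as np

def dattutdut_le(lst, rn, g_fraction=0.1):
    """lst: land surface temperature (K), rn: net radiation (W m-2).
    Returns the latent heat flux LE (W m-2)."""
    t_cold = np.nanpercentile(lst, 0.5)
    t_hot = np.nanpercentile(lst, 99.5)
    ef = np.clip((t_hot - lst) / (t_hot - t_cold), 0, 1)  # evaporative fraction
    g = g_fraction * rn                                   # soil heat flux
    return ef * (rn - g)
```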
Both models' results were compared with ground measurements. The DATTUTDUT model gave the best results when using ground-measured net radiation (RMSD of ET = 0.11 mm/h). The retrieval of ET without ground measurements also gave good results (RMSD of ET = 0.18 mm/h). The DATTUTDUT model thus shows good usability for wet landscapes. The Triangle Method was initially designed for applications in arid and semi-arid regions and was thus outperformed by the DATTUTDUT model. The Triangle Method gives its best results (RMSD of ET = 0.26 mm/h) when the relationship between LST and NDVI fits a triangle between the extreme wet and dry conditions of the existing data points. Tests showed that an improvement of the triangle calculation, adjusted to wet conditions, also improves the results. In comparison to the DATTUTDUT model, the Triangle Method shows higher spatial heterogeneity in the peatland area. However, both models show plausible results and bear promising potential for modelling ET over Bavarian peatlands.
Calculating spatially distributed ET for the Schechenfilz area independently of ground measurements represents the initial step towards calculations for all Bavarian peatlands. In view of climate change, the understanding of peatland surface-atmosphere interactions must be improved, especially the feedback of peatlands on global climate change.
References
Jiang, Le; Islam, Shafiqul (1999): A methodology for estimation of surface evapotranspiration over large areas using remote sensing observations. Geophysical Research Letters 26 (17), pp. 2773–2776.
Timmermans, Wim J.; Kustas, William P.; Andreu, Ana (2015): Utility of an Automated Thermal-Based Approach for Monitoring Evapotranspiration. Acta Geophysica 63 (6), pp. 1571–1608. DOI: 10.1515/acgeo-2015-0016.
Effective measurement and management of soil organic carbon (SOC) are essential for ecosystem function and food production. SOC has an important influence on soil properties and soil quality. Conventional SOC analysis is expensive, time-consuming and difficult. The development of spectral imaging sensors enables the acquisition of larger amounts of data using a cheaper and faster method. In addition, satellite remote sensing offers the potential to perform surveys more frequently and over larger areas. This research aims to measure SOC content using colour as an indirect proxy. The measurements of soil colour were made at an agricultural site in the Czech Republic with an inexpensive digital camera and the Sentinel-2 remote sensor. Various soil colour spaces and colour indices derived from (i) reflectance spectroscopy in selected wavelengths of the visible (VIS) range (400–700 nm), (ii) an RGB digital camera and (iii) the Sentinel-2 visible bands were used to train models for the prediction of SOC. For the modelling, we used the random forest (RF) machine learning method, and the models were validated with repeated 5-fold cross-validation. For the prediction of SOC, the digital camera produced R² = 0.85 and RMSE = 0.11, a higher R² and a similar RMSE compared to those obtained from the spectroscopy technique (R² = 0.78 and RMSE = 0.09). Sentinel-2 predicted SOC with lower accuracy than the other techniques; however, the results were still fair (R² = 0.67 and RMSE = 0.12) and comparable with the other methods, particularly for soils with more SOC. Colour measured with a digital camera enabled accurate and reliable predictions of SOC, overcoming limitations of more traditional laboratory methods of SOC analysis.
UK peatlands are of great environmental importance; they are a major carbon store, locking in approximately 3.2 billion tonnes of carbon, and cover 12% of the UK land area (CEH, 2021). Wildfire disturbance in UK peatlands is of growing concern. The European Forest Fire Information System (EFFIS) reported 111 burned areas for the UK in 2019, burning a total of 28,754 hectares. This is the highest total area burnt since EFFIS burned area monitoring began in 2008, with the fire season of 2018 providing the second highest record of area burnt, totalling 18,472 hectares.
This research is part of the Natural Environment Research Council (NERC) funded project “Towards a UK fire danger rating system: Understanding fuels, fire behaviour and impacts” (https://ukfdrs.com/). Work package 1 focuses on the use of Earth Observation techniques to assess (a) the spatial distribution of vegetation fuel-loads across the UK and (b) to develop a dynamic fuel map based on seasonal change and land cover management in the South Pennines, England. This research provides some initial investigation of wildfire occurrence and landscape disturbance for the Marsden Moor Estate in the South Pennines from 2019 - 2021.
The Marsden Moor Estate, owned by the National Trust in West Yorkshire, UK, is a Site of Special Scientific Interest (SSSI), a Special Area of Conservation (SAC) and a Special Protection Area (JNCC, 2021) (https://sac.jncc.gov.uk/site/UK0030280). This blanket bog habitat is home to rare upland species such as the mountain hare and red-listed Birds of Conservation Concern 4 (BoCC4) such as the lapwing, skylark and curlew (British Trust for Ornithology, 2021). Since 2019, the National Trust has reported a total of £700,000 worth of damage caused by wildfires on the Marsden Moor Estate (National Trust, 2021). Over the past three years there have been large wildfire events (26 February 2019, 22 April 2019, 23 March 2020 and 25 April 2021), with the biggest fire, in April 2019, damaging a reported 700 hectares of peatland and degrading this fragile landscape (National Trust, 2021).
This paper presents a multisensor and multitemporal approach to monitor wildfire occurrence and to assess the impact of these events at the landscape scale at Marsden Moor. We examined (a) the hydrological and topographic characteristics of the estate and (b) the dynamics of vegetation conditions during 2019-2021. The estate had a 0.5 m spatial resolution LiDAR survey commissioned in 2013, but there was limited on-site knowledge of how to process these data and create derived hydrological and topographic products. The hydrological and topographic assessment used LiDAR-derived products to obtain slope, aspect and a Topographic Wetness Index (Beven and Kirkby, 1979). The Topographic Wetness Index was important in spatially mapping drier versus wetter areas across the peatland, with drier areas potentially being areas of degradation. Prior to this work, there was no multitemporal analysis of vegetation greenness and vegetation stress, nor regular mapping of areas of bare ground for the estate. Landsat 8 Operational Land Imager (OLI) and Sentinel-2A/-2B data were used to quantify the total area burnt from 2019 to 2021. Furthermore, these optical sensors were analysed to assess land cover dynamics and post-fire vegetation recovery using spectral indices, e.g. the Normalised Burn Ratio (NBR), Bare Soil Index (BSI), Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI).
Vision-1 four-band (visible and near-infrared channel) multispectral 3.5m and panchromatic 0.87m data was acquired by Airbus UK Satellite Services Operations Team on 31 July 2020 and 8 November 2021. The Vision-1 data was atmospherically corrected using ENVI Quick Atmospheric Correction (QUAC) tool with NDVI and EVI spectral indices generated. The Vision-1 acquisitions provided a unique opportunity to generate 3.5m resolution NDVI and EVI maps to compare with the medium resolution Sentinel-2 and Landsat 8 OLI time series. The Vision-1 data enables a more detailed assessment of the Marsden Moor landscape to check if any subtle differences in vegetation greenness are visible especially for areas previously burnt which may not be possible with the Sentinel-2 and Landsat 8 OLI data.
These research outputs will support peatland restoration decision making at the Marsden Moor Estate by identifying areas of vegetation stress and understanding where dry areas are located across the estate which could benefit from peatland rewetting interventions e.g. installation of timber dams and sphagnum moss plug planting. In addition, the estate will gain a more comprehensive understanding of the wildfire and fuel dynamics taking place in this upland peatland ecosystem.
The use of ultra-high-resolution UAV data for image analysis and classification in remote sensing has advanced significantly over the past years and offers great possibilities for investigating ecosystems. The computation of habitat and vegetation community maps plays a significant part in ecosystem research for further extrapolation of ecological processes, such as gas fluxes and carbon storage. Peatlands, however, represent a challenge in terms of their microtopography and the spectral characteristics of Sphagnum mosses in particular.
The spectral complexity of this ecosystem is a challenge for machine learning and requires careful consideration of input data and algorithms. The stage of training a selected classifier is the most crucial and time-consuming step in image classification. Therefore, this study aimed to a) evaluate the ecological and spectral information (i.e. spectral indices) needed to perform pixel-based image classification, b) assess the accuracies of the Support Vector Machine (SVM) and Random Forest (RF) classifiers, and c) investigate the application potential of spectral information from one mire to another. For this purpose, field work was conducted in three selected North Karelian mires in 2020. Drone data (RGB and multispectral, with spatial resolutions of 0.01 m and 0.05 m, respectively) were gathered from each study site in July and August 2020.
The vegetation abundance and species dominance in 60-100 vegetation plots, with corresponding water table depth (WTD) and RTK-GPS locations, were recorded. The detailed vegetation inventory allowed classification at habitat type and vegetation community level. For this, the vegetation data were grouped by I) physiognomy and II) hierarchical clustering of species. The outcome of the latter was additionally compared to the results of spectral clustering. These grouping approaches serve as the basis for labelling the training samples in the classification process; therefore, each was run with both classifiers and tested for its reliability. An accuracy assessment was generated for each classification output. Lastly, we used the training data that showed the most satisfying results and applied it to another mire site, for which vegetation plot data existed for validation purposes.
Preliminary results are satisfying at both habitat and vegetation community level. For each community, the identification of key species supported the classification process. Although it is useful to include many spectral features, it is likely not necessary to generate all the spectral indices we included. The application of spectral information from one mire site to another indicates high potential at habitat level and, to some extent, even at vegetation community level. However, this sort of blind test requires additional validation data. The importance of extensive field work is therefore still evident and emphasizes the necessity of combining traditional field work and remote sensing methods. For further classification methods, we suggest the use of ancillary data to increase the accuracy and to produce information at species and diversity level. This could involve data from different drone and handheld camera sensors (e.g. thermal and hyperspectral), as well as models derived from Structure from Motion (SfM) photogrammetry, such as a vegetation height model (VHM). In addition, object-based classification of low-altitude UAV data should be considered to extract texture information.
The main goal of this study was to determine the Net Ecosystem Exchange (NEE) at the Biebrza Wetlands for sedge, reed and grass habitats, using meteorological, soil and vegetation parameters. The NEE model, used to obtain the distribution of NEE throughout the day, was elaborated with the use of measured PAR and modelled respiration (RESP). In parallel, the CO2 flux was measured at the Eddy Covariance (EC) station. Based on these measurement data, a model combining soil moisture (SM) and latent heat (LE) was created. The EC data were used to compare the results of the modelled NEE and the measured CO2 flux. On the basis of the two obtained models, it was calculated that the average absorption of CO2 from April to October for all studied plant communities is at the level of -1.21 to -2.12 µmol m-2 s-1. Later, we focused on the determination of a satellite-based NEE model for the area of the Biebrza Wetlands. Firstly, maps of the parameters present in the ground-based models were created with the use of satellite images (from Sentinel-1, -2, -3 and Terra MODIS). Secondly, based on satellite data acquired in 2015-2020, statistical models were obtained which combine parameters such as soil moisture, NDVI, NDII, APAR, evapotranspiration, and air and soil temperature. All data were extracted using an R script and modelled with the Statistica program. The models presented as a result of this research can be used with satellite data only.
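A common functional form for this kind of ground-based NEE model, combining a light-response curve for GPP with a temperature-driven respiration term, is sketched below; the parameter values are placeholders and not the coefficients fitted in this study.

```python
# Illustrative form of a light-response NEE model of the kind described above:
# GPP as a rectangular hyperbola of PAR and ecosystem respiration (RESP) from
# an exponential (Q10) temperature response. Parameter values are placeholders.
import numpy as np

def nee(par, t_air, alpha=0.02, gpp_max=12.0, r10=2.0, q10=2.0):
    """NEE in umol CO2 m-2 s-1 (negative values indicate uptake).
    par: photosynthetically active radiation, t_air: air temperature (deg C)."""
    gpp = (alpha * par * gpp_max) / (alpha * par + gpp_max)  # rectangular hyperbola
    resp = r10 * q10 ** ((t_air - 10.0) / 10.0)              # Q10 respiration model
    return resp - gpp
```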
The key findings of the research are:
● Modeling of NEE was performed using modeled GPP, based on PAR, and modeled RESP.
● It was important to show how compatible the measurements made by the EC method are with the results from chamber measurements: for all habitats, the mean NEE from chamber measurements is -1.21 µmol m-2 s-1, while from EC it is -3.24 µmol m-2 s-1; for all habitats, the modelled NEE mean value is -2.26 µmol m-2 s-1, while the EC-based modelled value is -4.19 µmol m-2 s-1.
● The grass vegetation is characterized by the highest absorption of CO2.
● The models of NEE and GPP were created with the input of satellite data: NDVI, Ead, LE, SM, NDII, (Ts-Ta), APAR – from Terra MODIS, Sentinel-1, -2 and -3.
● It was assumed that NDVI and APAR characterise vegetation biomass, NDII soil-vegetation moisture, LE latent heat (evapotranspiration conditions), Ead daily evapotranspiration conditions, (Ts-Ta) soil moisture conditions, and SM soil moisture.
● The models presented as a result of this research can be used applying the satellite data only.
The research work was conducted within the project financed by the National Centre for Research and Development under Contract No. 2016/23/B/ST10/03155, titled "Modeling of carbon balance at wetlands applying the newest ESA satellite missions Sentinel-1/2/3".
The use of radar has been proven for forest fires, but moorland fires result in a less significant change in habitat structure. Notably, in England, it is recognised that it could be hard to distinguish between heather burning and heather cutting, but the critical thing is to identify the change; it may be that optical data would be used to confirm the type of change.
In response to a Department for Environment, Food & Rural Affairs (DEFRA) Invitation To Tender, a six-month research project focused on developing techniques for identifying burn scar areas from Sentinel-1A and -1B satellite data. Workflows were implemented in Jupyter Notebooks (available from https://github.com/pixalytics-ltd/upland-burn-detection) shared within the team via a Jupyter Lab instance. The notebooks could be run individually or in sequence to download, pre-process and apply the detection algorithms; the coherence processing was undertaken using ESA's SNAP Toolbox called via its GPT interface. The outputs from the algorithms were then compared to imagery for known burn areas, and discrepancies were investigated so that an iterative approach could be used to develop the final algorithm. Three case study areas were used for testing and analysis: the Isle of Skye and the Eastern Cairngorms in Scotland, and England's Peak District National Park (PDNP).
Results showed that burn scars were visible in coherence data. Burn areas showed low coherence in image pairings that covered the burn date, followed by high coherence in the images that followed, presumably due to lack of vegetation/growth. In fact, in some cases, the burn scar was still visible the following year.
The next step was to develop an automated detection algorithm that could be used to process time-series datasets. However, coherence data are normalised between 0 and 1 for each image, which introduces a potential hurdle as all images have the same scale range regardless of their relative coherence. Therefore, the first step was to reverse this normalisation. Although this is impossible to do exactly without the original maximum and minimum values used to normalise each image, the effect could be approximated by dividing each image by its median value. Instead of looking at the difference between consecutive images, an average coherence value was created for every pixel across the date range used. Then, the absolute difference between each image and the temporal mean image was calculated before all images were summed. A threshold was applied to extract the most changed pixels, which are likely to be burn areas.
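A minimal NumPy sketch of this summation of coherence change is given below; the array names, the stacking convention and the 0.75-quantile threshold are illustrative assumptions rather than the project's actual implementation.

```python
import numpy as np

def burn_change_map(coh: np.ndarray, threshold_quantile: float = 0.75) -> np.ndarray:
    """Return a boolean mask of the most-changed (likely burned) pixels.

    coh is assumed to be a (time, rows, cols) stack of coherence images.
    """
    # Approximately undo the per-image normalisation by dividing each
    # image by its own median value.
    medians = np.nanmedian(coh, axis=(1, 2), keepdims=True)
    rescaled = coh / medians

    # Temporal mean image and summed absolute deviation from it.
    temporal_mean = np.nanmean(rescaled, axis=0)
    change_sum = np.nansum(np.abs(rescaled - temporal_mean), axis=0)

    # Keep only the most-changed pixels.
    return change_sum > np.nanquantile(change_sum, threshold_quantile)
```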
In summary, the findings suggest that the detectability of burn areas with coherence improves over the year following a fire. However, further ground-truth work on post-fire regrowth is needed to understand the mechanisms responsible for this response. The Jupyter Notebooks have been made publicly available and will continue to be developed for this application, alongside being reused in future collaborative projects.
The Central Congo basin houses the largest peat swamp in the tropics (the Cuvette Centrale). Hardwood tree species and trunkless palms are the two types of swamp vegetation present, with each dominating in different regions. The circumstances under which each vegetation type is preferred were not well known.
We used a beta regression model to assess the contribution of seasonal rainfall totals and temperature to the presence of palm swamp in the Cuvette Centrale swamp regions. This involved use of 5 km rainfall (CHIRPS) and maximum temperature (CHIRTS) climatological data together with MERIT Hydro 90 m terrain data, a 50 m land-type classification map and HydroBasins sub-basin delineations.
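As an illustration of the modelling step, the sketch below fits a beta regression of per-sub-basin palm-swamp fraction on climate predictors; the input table, the column names and the use of statsmodels' BetaModel (available from statsmodels 0.13 onwards) are assumptions, not the authors' exact setup.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.othermod.betareg import BetaModel

df = pd.read_csv("subbasin_climate.csv")  # hypothetical per-sub-basin table

# The beta likelihood requires responses strictly inside (0, 1).
eps = 1e-4
y = df["palm_fraction"].clip(eps, 1 - eps)
X = sm.add_constant(df[["dry_season_rain", "wet_season_rain",
                        "dry_season_tmax", "elevation"]])

result = BetaModel(y, X).fit()
print(result.summary())                            # coefficients and p-values
df["predicted_palm_fraction"] = result.predict(X)  # fitted % palm swamp
```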
Our model was successful in predicting the % palm swamp composition (R2 = 0.75) for sub-basins on the right bank of the Congo river. Swamp regions on the left bank of the Congo river, in the Democratic Republic of Congo (DRC), did not show similar significant relationships with the climatological data, indicating that the two sides of the Congo river experience different hydrological regimes.
We found that seasonal rainfall accumulations and their distribution contribute significantly to regional differences in palm and hardwood swamp presence in the Republic of Congo (RoC) and northern DRC regions of the Cuvette Centrale. Dry season rainfall and temperature were found to play a significant role, with smaller contributions from wet season rainfall totals and temperature (p-value < 0.05). We conclude that the prevalence of palm swamp depends on the spatial and seasonal distribution of net water input, and that there exists an optimal range of rainfall totals over which palm swamp has a competitive advantage, above and below which hardwood swamp dominates. Elevation has the greatest influence on swamp type prevalence across the Cuvette Centrale, indicating that geomorphology and/or additional water input from run-off may play a role.
Woody cover is a key factor in understanding biosphere-atmosphere exchange processes, including evapotranspiration. Global woody and forest cover maps based on remote sensing are available at a variety of quality levels. However, the existing global products frequently underestimate woody cover in arid and semi-arid vegetation because the cover is sparse and the soil background strongly affects the signal. In some case studies, a combination of field data with medium-to-high resolution, freely available satellite data has been shown to provide woody cover estimates with practically sufficient accuracy. On the other hand, most studies published so far have concentrated on relatively limited areas and relied on expensive field data. In this study, we demonstrate a method that uses a combination of very high resolution (VHR) images from Google Earth® and Bing® as well as free multispectral Sentinel-2 data to provide a reliable woody cover product over the semi-arid Zagros Mountains, which stretch over 0.5 million km2.
Using data from several years, we prepared homogeneous Sentinel-2 mosaics. These data were integrated with reference woody cover values generated from Google® and Bing® VHR images using a semi-automated procedure. Random forest (RF) models were then trained and verified at different spatial resolutions/grains using repeated splits of the reference data into training and validation sets. Given the trade-off between model performance and spatial detail, the model with a 40 m spatial grain returned the best results, demonstrating stable relationships between the reference data and Sentinel-2-based estimates of woody cover density. The model reached median coefficient of determination (R2) and root mean square error (RMSE) values of 0.67 and 0.11, respectively.
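The repeated train/validation evaluation could be sketched as below; the reference file, the feature columns and the number of repetitions are placeholders under assumed names, not the study's actual configuration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("reference_woody_cover_40m.csv")  # hypothetical reference table
X = data.drop(columns=["woody_cover"]).values        # Sentinel-2 band/index features
y = data["woody_cover"].values                       # fractional woody cover in [0, 1]

r2s, rmses = [], []
for seed in range(100):                               # repeated random splits
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=seed)
    rf = RandomForestRegressor(n_estimators=500, random_state=seed).fit(X_tr, y_tr)
    pred = rf.predict(X_va)
    r2s.append(r2_score(y_va, pred))
    rmses.append(np.sqrt(mean_squared_error(y_va, pred)))

print(f"median R2 = {np.median(r2s):.2f}, median RMSE = {np.median(rmses):.2f}")
```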
We also trained an RF model to classify the entire Zagros Mountains into seven land cover classes (agriculture, built-up, forest, plantation, bare soil, water, and rangeland) in order to mask out irrelevant land cover classes. The overall accuracy and median kappa of the land cover classification model exceeded 0.94 and 0.93, respectively.
Our workflow, implemented within Google Earth Engine, can be applied to other arid and semi-arid regions and might help to improve the present global woody cover products, which typically perform poorly in semi-arid and arid environments. Comparisons of our woody cover product with common global woody or forest-cover products showed a clear advantage of our methodology for semi-arid vegetation. These findings might be improved in future research by accounting for regional variations in the drivers of woody cover patterns along the environmental gradient of the Zagros area.
Key Words: Arid and semi-arid woodlands, Zagros, Fractional Woody Cover, Sentinel-2, VHR imagery, global maps, land cover.
This study quantifies the drought-induced forest decline in the beech-dominated Hainich National Park area between summer 2018, summer 2019 and summer 2020. The Hainich region in Thuringia is the largest deciduous forest in Germany, dominated by Fagus sylvatica (beech) and Fraxinus excelsior (ash). The area has been a protected forest ecosystem since 1997 and received UNESCO World Natural Heritage status in 2011. The spring and summer months of 2018 and 2019 were exceptionally dry. The very low precipitation rates were likely indirectly linked to the effect of Arctic Amplification on the mid-latitude summer circulation. In summer 2019 some regions of the Hainich National Park, as well as other regions in Germany with deciduous forest, showed exceptional degradation with area mortality of 30-40% on some test sites. For a quantification of the overall change and for understanding spatial correlations with site-specific environmental conditions (soil types, terrain slope), spatially high-resolution Color Infrared (CIR) flight campaigns and UAV flights with Real Time Kinematic (RTK) georeferenced multispectral datasets were performed. The beech tree forest decline was investigated by (1) applying an Individual Tree Crown Delineation (ITCD) concept, (2) comparing relatively calibrated multitemporal NDVI (Normalized Difference Vegetation Index) ratio values per tree crown, and (3) detecting partial defoliation of tree crowns using a supervised Convolutional Neural Network (CNN) approach based on the TensorFlow implementation. Although the approach was complicated by changing illumination conditions between acquisition dates and sparse reference data, the results show that, especially in the western hilly regions of the Hainich National Park area, severe canopy defoliation and increased beech tree mortality in summer 2019 are spatially correlated with specific dry soil conditions in combination with steeper slopes. The findings are in general agreement with developments in other regions in Germany where dry soil conditions and/or specific terrain slope/aspect combinations play a critical role. UAV data from Phantom 4 RTK copter flights in low-altitude configurations were used to refine defoliation reference data on a grid plot system for CNN model training. Sentinel-2 satellite data analysis based on multitemporal Disease Water Stress Index (DWSI) difference classifications overall confirmed the identified trend in the national park region. In the central park region (322 ha) we identified 150 ha as unchanged, 91 ha with moderate defoliation and 42 ha with strong defoliation. Area statistics from GIS analysis with slope/aspect and soil type overlays for the full park region of 7513 ha indicate that strong defoliation changes appear mainly on sites with either steep slopes and/or dry soil characteristics in the central park area.
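A minimal sketch of step (2), the per-crown multitemporal NDVI comparison, is given below: given a crown label image from the ITCD step and two relatively calibrated NDVI rasters, the mean NDVI ratio is computed per delineated crown. Array and function names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def crown_ndvi_ratio(ndvi_t0, ndvi_t1, crown_labels):
    """Return the per-crown ratio of mean NDVI (t1 / t0); crown labels start at 1."""
    ids = np.arange(1, crown_labels.max() + 1)
    mean_t0 = ndimage.mean(ndvi_t0, labels=crown_labels, index=ids)
    mean_t1 = ndimage.mean(ndvi_t1, labels=crown_labels, index=ids)
    return mean_t1 / mean_t0   # values well below 1 indicate defoliation
```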
Timely forest monitoring with easily comprehensible data analysis is becoming increasingly urgent. Since 2015, the world's tropical forests can be observed regularly at an unprecedented 6/12-day interval with the satellites of the Sentinel-1 mission. Millions of gigabytes of C-band synthetic aperture radar (SAR) scenes are acquired day and night, regardless of cloud cover, haze, smoke or aerosols, potentially allowing deforestation and forest degradation to be monitored at least biweekly. The challenge, however, lies in finding adequate methods to extract meaningful indicators of forest loss from the vast amount of incoming SAR data, such that anomalies in the time series can be regularly and consistently detected across tropical forests. Such forest-monitoring methods should be transparent and easily understandable to the wider public, hence enabling confidence in their use across various public and private sectors.
This study presents a simple space-time data cube design, where statistical information relevant to identifying deforestation is extracted at each point in the SAR backscatter time series. The cubes do not rely on decades of historical data, and the effects of seasonality and rainfall can be inherently masked out through their design. The methodology is as follows. First, multi-temporal mosaics are generated for the area of interest. These mosaics can be generated at any temporal interval, split into individual orbits or orbit directions (i.e. ascending or descending satellite paths), and split into tiles (e.g. 100 km x 100 km tiles) covering the spatial extent desired by the user. Second, the mosaics are stacked one on top of the other and an analysis is run on a moving temporal window of a user-chosen size for each pixel (e.g. 10 past and 10 future images are compared for each pixel). The analysis extracts parameters such as (1) the change in pixel values, (2) the standard deviation of the values, (3) the significance of the change (e.g. the p-value of a t-test), (4) the slope and r2 of a linear fit, and so on. Using the absolute pixel values, potential flooded areas are also identified for each time point. The output stack of information for each pixel at each time point is referred to as the 'cube of change'. Third, and finally, the cubes of change are fed into either simple thresholding, decision-tree, random-forest or deep learning approaches to detect forest loss at every point in the time series.
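The per-pixel window statistics could be sketched as follows; the function, the Welch t-test and the window size of 10 past and 10 future images follow the description above, but the variable names and implementation details are assumptions, not the project's actual code.

```python
import numpy as np
from scipy import stats

def change_statistics(stack: np.ndarray, t: int, k: int = 10):
    """stack: (time, rows, cols) backscatter mosaics; t: centre time index."""
    past, future = stack[t - k:t], stack[t:t + k]

    delta = future.mean(axis=0) - past.mean(axis=0)        # (1) change in pixel values
    sigma = stack[t - k:t + k].std(axis=0)                  # (2) standard deviation
    _, p_value = stats.ttest_ind(past, future, axis=0,      # (3) significance (Welch t-test)
                                 equal_var=False)

    # (4) slope and r2 of a per-pixel linear fit over the full window
    times = np.arange(2 * k, dtype=float)
    window = stack[t - k:t + k].reshape(2 * k, -1)
    slope = np.polyfit(times, window, 1)[0].reshape(delta.shape)
    t_c = times - times.mean()
    w_c = window - window.mean(axis=0)
    r = (t_c @ w_c) / (np.sqrt((t_c ** 2).sum()) * np.sqrt((w_c ** 2).sum(axis=0)))
    r2 = (r ** 2).reshape(delta.shape)

    return delta, sigma, p_value, slope, r2
```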
As an example, the developed methods were first implemented using the CreoDIAS cloud-computing platform (https://creodias.eu/) in three pilot study sites (Manaus, Madre de Dios and Mato Grosso), with plans to transfer them to the OpenEO platform (https://openeo.cloud/) for up-scaling to the extent of the Amazon basin. Billions of pixels from the Sentinel-1 satellites from 2015 to date, each representing 20 x 20 m of forest, are harmonized under the 'cubes of change' design, and a simple thresholding approach to detect forest loss is demonstrated. Finally, the forest products are validated using high-resolution optical mosaics of the tropics available through Norway’s International Climate & Forests Initiative (NICFI) with Planet (https://www.planet.com/nicfi/). Approximately 3000 forest loss “events” spread across the pilot sites were visually identified in the optical mosaics and compared to our forest loss outputs. Preliminary results for the pilot sites show that approximately 90% of the events were detected, covering approximately 58% of their area. The false positive rate was low, at an average of 0.20%, and forest loss events were usually detected within a month of occurrence. Mato Grosso, which experienced relatively large loss events (>15 ha), showed the worst results amongst the pilot sites. Finally, the analysis will be concluded (by the time of the Living Planet Symposium 2022) with an assessment of forest carbon loss in the identified forest loss areas using model-based inference. This analysis will be aided by 501 randomly selected airborne lidar transects of 12.5 km in length each, acquired within the Brazilian Amazon in 2016-2018 (Tejada et al., 2019), and supported by 224 ground plots within the transects (due to the lack of a probabilistic sample of field or lidar data). The observations will be used to train a linking AGB model, and an analysis of existing forest maps (i.e. the global ESA CCI Biomass 2017 AGB map and the global NASA/JPL 2015 AGB map) will be conducted.
References
Tejada, G., Görgens, E.B., Espírito-Santo, F.D.B. et al. Evaluating spatial coverage of data on the aboveground biomass in undisturbed forests in the Brazilian Amazon. Carbon Balance Management 14, 11 (2019). https://doi.org/10.1186/s13021-019-0126-8
Spatially explicit information about forest cover types and their spatial distribution is fundamental for operational forest management and forest monitoring. Spatially high-resolution open EO data (i.e., Sentinel-2, ≤ 10 m) can cover much of this information need through their spatial and spectral capabilities, their revisit frequency and easy access. While forests are a permanent subject of many land cover products – e.g., Global Land Cover (Copernicus Global Land Service, 100 m spatial resolution), CORINE Land Cover (European Environment Agency (EEA), 25 ha MMU), Urban Atlas (EEA, 1 ha MMU), High Resolution Forest Layers (EEA, 10 m spatial resolution) – and are also represented by the volunteered geographic information of OpenStreetMap, the combination of spatially explicit and thematic information is not readily available at an MMU fit for local forest management and monitoring (< 0.1 ha) at present. Where thematic information is given beyond the pure forest class, the land cover products usually include broadleaved, needleleaved and mixed forest classes. Due to the given MMU, mixed forests can consist of different spatial configurations of forest leaf types: a mixture of leaf types can be labelled as mixed forest even though it is composed of leaf-type patches with recognizable borders, whereas a truly mixed forest consists of different leaf types in a complex spatial mixture that cannot easily be separated visually except for a few clustered trees. The former also does not provide the same ecological functions and ecosystem services as a truly mixed forest where leaf types can hardly be separated. These mixed forests therefore comprise an important information source for forest leaf type maps. The presented GEOBIA framework has been developed to address this information gap. With a priority on open data and free and open-source software, the framework uses very-high-resolution CIR imagery (2 m spatial resolution) to derive spatially explicit forest units with an MMU of 0.05 ha. These forest units are used as the basis for different methodological pathways. The forest units can be used in object-based classification to derive needleleaved, broadleaved and non-forest classes. Classification based on aerial imagery and derived texture measures has shown that the accuracy highly depends on the imagery’s acquisition time and is subject to negative effects of larger gaps in the canopy and of cast shadows. State-organized flight campaigns acquire very-high-resolution aerial imagery at high cost and, at most, in a yearly cycle, which hinders more frequent and reliable updates of the classification. Additionally, as the imagery from these extensive campaigns is often designed as a multi-purpose product, the data might not be captured at a time when spectral differences of forest resources are highlighted. In this study, we present the development, implementation and evaluation of the GEOBIA framework using multi-temporal Sentinel-2 data to stratify forests (needleleaved, broadleaved, non-forest) on forest sites in the Greater Region. Classification on forest sites in Luxembourg has shown that overall accuracies of 89% can be achieved with multi-temporal Sentinel-2 data on the forest units. With a focus on transferability, the framework is assessed in the context of the cross-border cooperation between five different regions in four countries (Germany, France, Luxembourg, Belgium).
Apart from the pathway towards object-based classification, additional information layers (e.g., forest disturbance) can be derived from the measures and classifications attached to the forest units, thus providing up-to-date information on the state of the observed forest areas. These information layers can be updated, extended and recalculated, and are thus flexible enough to derive the needed information on demand without creating redundant data. The GEOBIA framework is thus able to combine the need for spatially explicit forest units derived from VHR imagery with the temporal and spectral information delivered by the Sentinel-2 mission for object-based classification, and holds potential for additional object-based methodology pathways.
More than half of the remaining tropical forest area is managed for timber production (Brandt, Nolte, & Agrawal, 2016). To keep a check on the environmental costs of unsustainable logging, such as biodiversity loss and carbon emissions, monitoring of logging activities is required. Novel remote-sensing-based data streams enable monitoring of small-scale forest disturbances across large areas in near-real time. For example, RAdar for Detecting Deforestation (RADD) alerts provide weekly forest disturbance information at 10 m spatial resolution based on cloud-penetrating Sentinel-1 imagery (Reiche et al., 2021). The RADD alerts cover the humid tropical forests of South America, Africa, insular Southeast Asia, and the Pacific (Reiche et al., 2021). The optical GLAD-S2 alerts cover the Amazon at a spatial resolution of 10 m and are based on Sentinel-2 imagery with a revisit time of 5 days (the speed of disturbance detection may be impacted by cloud cover) (Weisse, Berger, Webb, & Pickens, 2021). This offers potential for monitoring logging activities in forestry concessions.
A number of local studies have assessed the use of satellite-derived products to monitor selective logging (Antropov et al., 2021; Hethcoat et al., 2020; Hethcoat et al., 2021). However, the relationship between disturbances detected through satellite alerts and logging intensity has not been quantified consistently at a large scale and across different geographies. It is expected that low-intensity logging is less well detected than higher-intensity logging (Figure 1). Detectability of (selective) logging may differ across geographies due to differences in vegetation and logging practices.
We quantify the relationship between forest disturbance alerts and visually identified canopy disturbance based on Planet Labs data across three different geographies (Amazon, Congo Basin, and West-Papua). Planet Labs provides pre-processed monthly mosaics that are now openly available for the first time (NICFI, 2020). The very high spatial and temporal resolution of this imagery provides an opportunity to visually identify canopy disturbances that could not be seen before due to a lack of spatial detail or rapid regrowth after disturbance in the tropics (Asner, Keller, Pereira, Zweede, & Silva, 2004; Souza et al., 2013; Verhegghen, Eva, & Achard, 2015). This allows for an improved approximation of logging intensity compared to satellite-based alerts. Quantification of the relationship between canopy disturbance intensity and disturbance intensity detected by e.g. RADD or GLAD-S2 will then provide an improved estimate as to what extent logging intensities can be derived from these forest disturbance alerts.
Forestry concessions across the Amazon, Congo Basin, and West-Papua are sampled for this purpose. A heat map of disturbance alerts allows for stratification based on disturbance intensities. About 100 250x250 m stratified probability samples are selected for each geography. Canopy disturbance intensity in each sample area will be analyzed using a 10x10 m grid. Each grid cell will be labeled as disturbed or undisturbed based on visual interpretation of maximum canopy disturbance in Planet Labs data over a 12-month period, encompassing the period in which disturbance was detected by the alert systems for the larger 1x1 grid. The disturbance intensities detected by RADD (Amazon, Congo Basin, West-Papua) and GLAD-S2 (Amazon) are then related to the canopy disturbance intensities, aggregated per tile. The spatial unit at which the correlation is optimal provides insight into the scale at which logging intensities can best be estimated. This will improve understanding of the usability of forest disturbance alerts for monitoring logging activities in forestry concessions. First results of this analysis will be presented.
References
Antropov, O., Rauste, Y., Praks, J., Seifert, F. M., & Häme, T. (2021). Mapping Forest Disturbance Due to Selective Logging in the Congo Basin with RADARSAT-2 Time Series. Remote Sensing, 13(4), 740. https://doi.org/10.3390/rs13040740
Asner, G. P., Keller, M., Pereira, R., Zweede, J. C., & Silva, J. N. M. (2004). Canopy damage and recovery after selective logging in Amazonia: Field and satellite studies. Ecological Applications, 14(4 SUPPL.), S280–S298.
Brandt, J. S., Nolte, C., & Agrawal, A. (2016). Deforestation and timber production in Congo after implementation of sustainable forest management policy. Land Use Policy, 52, 15–22. https://doi.org/10.1016/J.LANDUSEPOL.2015.11.028
Hethcoat, M. G., Carreiras, J. M. B., Edwards, D. P., Bryant, R. G., Peres, C. A., & Quegan, S. (2020). Mapping pervasive selective logging in the south-west Brazilian Amazon 2000–2019. Environmental Research Letters, 15(9), 094057. https://doi.org/10.1088/1748-9326/ABA3A4
Hethcoat, M. G., Carreiras, J. M. B., Edwards, D. P., Bryant, R. G., & Quegan, S. (2021). Detecting tropical selective logging with C-band SAR data may require a time series approach. Remote Sensing of Environment, 259, 112411. https://doi.org/10.1016/j.rse.2021.112411
NICFI. (2020). New satellite images to allow anyone, anywhere, to monitor tropical deforestation. Retrieved November 17, 2021, from https://www.nicfi.no/current/new-satellite-images-to-allow-anyone-anywhere-to-monitor-tropical-deforestation/
Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N. E., Odongo-Braun, C., … Herold, M. (2021). Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters, 16(2), 024005. https://doi.org/10.1088/1748-9326/ABD0A8
Souza, C. M., Siqueira, J. V., Sales, M. H., Fonseca, A. V., Ribeiro, J. G., Numata, I., … Barlow, J. (2013). Ten-Year Landsat Classification of Deforestation and Forest Degradation in the Brazilian Amazon. Remote Sensing, 5(11), 5493–5513. https://doi.org/10.3390/RS5115493
Swarbrick, N. (2007, September 24). Logging native forests. Conflicting views. Retrieved November 19, 2021, from https://teara.govt.nz/en/photograph/12762/selective-logging
Verhegghen, A., Eva, H., & Achard, F. (2015). Assessing forest degradation from selective logging using time series of fine spatial resolution imagery in Republic of Congo. International Geoscience and Remote Sensing Symposium (IGARSS), 2015-November, 2044–2047. https://doi.org/10.1109/IGARSS.2015.7326202
Weisse, M., Berger, A., Webb, J., & Pickens, A. (2021, May 12). Higher Resolution Alerts Offer More Detailed Picture of Forest Loss | Global Forest Watch Blog. Retrieved November 16, 2021, from https://www.globalforestwatch.org/blog/data-and-research/glad-s2-offers-high-resolution-deforestation-alerts/
Precise and timely information about the extent and direct drivers of forest loss informs land management and helps shape environmental policies at various scales from local to global. At the international level, the proportion of land that is forested, and the proportion of land that is degraded over the total land area are among the key indicators of achieving the United Nations Sustainable Development Goals (United Nations 2015). Countries are required to report their emissions from forest change as a part of the United Nations Framework Convention on Climate Change reporting (UNFCCC 2014). The availability of reliable geospatial information on forest change is also one of the key factors for successful corporate zero deforestation commitments (Garrett et al. 2019) and national deforestation moratoriums.
Global 30 m resolution forest cover and change maps that became available in the past 10 years have revolutionized land cover monitoring, providing locally relevant data to communities, companies, and governments worldwide (Hansen et al. 2013). However, medium spatial resolution (~30 m) global-scale maps of forest loss drivers are still absent. Importantly, even if every single direct driver of forest loss and post-disturbance land cover were mapped annually at the resolution matching the global forest loss map, the quality of such mapping exercises would still need to be assessed. Probability sampling is a recommended good practice for estimating map accuracy and the area of land cover and change classes with quantifiable uncertainty (GFOI 2016; Olofsson et al. 2014). While global medium spatial resolution forest loss maps are updated annually, sampling efforts aimed at map validation and area estimation have not yet been operationalized.
The presented research aims to establish a baseline for global high-resolution sample-based quantification of forest loss for the year 2018. Our goal is to estimate the area of forest loss and the proportion of each loss driver globally, by continent and by climate domain. We utilized the Landsat-based global map of 2018 forest loss (Hansen et al. 2013) to create sampling strata targeting the class of interest, and allocated 600 5×5 km equal-area blocks into the resulting strata. We then used a combination of high-resolution data from the PlanetScope (global daily coverage, 4 m spatial resolution) and Sentinel-2 (5-day revisit frequency, 10 m resolution) satellites to map forest loss extent for each block. We also attributed each mapped pixel of loss to a direct driver of forest loss (forestry and tree plantations, pasture, cropland, palm plantations, shifting cultivation, fire, settlements, roads, mining, selective logging, windfalls and hurricanes, insects), using 2019-2021 PlanetScope data as a reference for areas where the loss driver is not apparent immediately after forest clearing. This is work in progress, but preliminary mapping of 225 blocks out of 600 yields a global standard error of the forest loss area estimate of 13.8%, and standard errors of major driver proportions (forestry, pasture, cropland, shifting cultivation, fire) under 40%. A regression estimator of global forest loss area, with the per-block percent of 2018 forest loss from the global map (Hansen et al. 2013) as an auxiliary variable, yields a standard error of 10.3%. Continental and climate domain loss area estimates will be produced once more blocks are mapped. From the reference block maps we also expect to gain insights into the accuracy of the global forest loss map (Hansen et al. 2013). This will include traditional accuracy metrics (user’s and producer’s accuracy of the forest loss class), as well as the extent and distribution patterns of scale-dependent omission and commission errors in the global map (e.g. where it over- or underestimates forest loss area compared to the 4 m resolution reference block maps). The block mapping protocol developed in the current study is planned to be used for regular (e.g. every 5 years) quality assessments of annually updated global forest change maps.
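To illustrate the regression estimator used above in its simplest form, the sketch below treats the blocks as a simple random sample and estimates the mean loss proportion and its standard error from the map-based loss percentage as an auxiliary variable; the stratified design of the actual study would add stratum weights on top of this, and all names are placeholders.

```python
import numpy as np

def regression_estimate(y, x, x_pop_mean, N):
    """y: reference loss fraction per mapped block; x: map-based loss fraction
    (auxiliary variable); x_pop_mean: auxiliary mean over all N population blocks."""
    n = len(y)
    b = np.polyfit(x, y, 1)[0]                       # OLS slope of y on x
    y_reg = y.mean() + b * (x_pop_mean - x.mean())   # regression estimator of the mean
    resid = y - (y.mean() + b * (x - x.mean()))
    se = np.sqrt((1 - n / N) * resid.var(ddof=2) / n)  # approximate standard error
    return y_reg, se
```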
References
Garrett, R. D., S. Levy, K. M. Carlson, T. A. Gardner, J. Godar, J. Clapp, P. Dauvergne, R. Heilmayr, Y. le Polain de Waroux, B. Ayre, R. Barr, B. Døvre, H. K. Gibbs, S. Hall, S. Lake, J. C. Milder, L. L. Rausch, R. Rivero, X. Rueda, R. Sarsfield, B. Soares-Filho, and N. Villoria. 2019. “Criteria for Effective Zero-Deforestation Commitments.” Global Environmental Change 54:135–47.
GFOI. 2016. Integration of Remote-Sensing and Ground-Based Observations for Estimation of Emissions and Removals of Greenhouse Gases in Forests: Methods and Guidance from the Global Forest Observations Initiative. Edition 2.0. Food and Agriculture Organization, Rome.
Hansen, M. C., P. V. Potapov, R. Moore, M. Hancher, S. A. Turubanova, A. Tyukavina, D. Thau, S. V. Stehman, S. J. Goetz, T. R. Loveland, A. Kommareddy, A. Egorov, L. Chini, C. O. Justice, and J. R. G. Townshend. 2013. “High-Resolution Global Maps of 21st-Century Forest Cover Change.” Science 342(6160):850–53.
Olofsson, Pontus, Giles M. Foody, Martin Herold, Stephen V. Stehman, Curtis E. Woodcock, and Michael A. Wulder. 2014. “Good Practices for Estimating Area and Assessing Accuracy of Land Change.” Remote Sensing of Environment 148:42–57.
UNFCCC. 2014. UNFCCC Reporting Guidelines on Annual Inventories.
United Nations. 2015. Transforming Our World: The 2030 Agenda for Sustainable Development.
Global forest ecosystems are one of the main contributors to climate change mitigation, as they play a critical role in carbon sequestration [1]. Forests cover 31 percent of the global land area (4.06 billion hectares), contributing to the economic and social development of 1.3 billion individuals worldwide (about one-fifth of the global population) by providing services such as food and fuel supply, water and air purification, and recreational or traditional use [2]. World population growth and associated socio-economic pressures have resulted in an alarming increase in natural forest transformation rates [3]. Deforestation and forest degradation processes were responsible for 12–20% of global anthropogenic greenhouse gas (GHG) emissions in the 1990s and early 2000s [4,5]. Tropical and subtropical forests contribute the most to the global carbon cycle, accounting for 78% of gross emissions (6.3 ± 2.4 GtCO2e yr-1) and 55% of gross removals (-8.6 ± 7.6 GtCO2e yr-1) [6]. These facts highlight the importance of having up-to-date forest monitoring systems that provide rapid and accurate information on forest disturbance dynamics. There are several international climate initiatives that aim to reduce the current global deforestation trend. The United Nations’ Reducing Emissions from Deforestation and Degradation-plus (UN-REDD+) [7] is recognised as the main international policy aiming to reduce CO2 emissions from deforestation in tropical countries. Detecting deforestation activities using traditional methods, such as human patrol surveillance, can be very expensive and ineffective, especially in highly dense tropical forest areas. Hence, deforestation initiatives mainly rely on the use of Remote Sensing (RS) technologies for the development of multi-scale forest inventories.
Optical remote sensing is commonly used for forest monitoring applications; however, aspects such as recurrent cloud coverage (e.g., in tropical areas) and/or the inability to obtain information from sub-canopy forest strata considerably limit rapid and continuous data acquisition [8]. Satellite-based high-resolution Synthetic Aperture Radar (SAR) sensors provide a solution to these limitations. Since its launch in 2014, the ESA Sentinel-1 SAR mission has rapidly stood out as a valuable resource for effective monitoring of highly dynamic forest ecosystems due to its high level of performance in terms of data acquisition frequency (6- or 12-day revisit period), dual polarisation, global spatial coverage, open access policies and long operational lifespan [9,10].
Guyana has been one of the critical test countries for the REDD+ programme, as it has one of the largest intact old-growth tropical rainforests in the world, with an estimated national forest cover of 88% (18.9 million ha, year 2000) [11]. Recent studies have shown a significant increase in Guyana’s deforestation rates in the past decade, from 0.056% during the Norway–Guyana REDD+ program (2010 to 2015) to more than double, 0.122%, over the two years after the end of the REDD+ program (2016 to 2017) [12]. This evidenced increase in deforestation rates at the national level demonstrates the need for new forest monitoring systems that allow a more detailed assessment of current forest degradation activities at lower spatial scales. In this context, this research presents a novel SAR-based approach for monitoring the current deforestation and forest degradation dynamics of tropical forests in a rapid and continuous way.
In this work, we present an optimisation of our recently developed CUSUM-SAR method [8] based on cumulative sums, adapting the original method to a tropical context while enhancing its detection capabilities by modifying the change detection principle into a fully unsupervised and automatic version. The new Constant False Alarm Rate – Kernel Cumulative Sum (CFAR-KerCuSum) approach combines the use of dense Sentinel-1 GRDH image time series with the exploration of different cumulative sum strategies for the detection of radar signal variations derived from forest changes. The new thresholding principle relies on the use of various Constant False Alarm Rates (CFAR) based on the generalised Gaussian distribution of the CUSUM-SAR forest reference values. Our proposed methodology was used to assess current deforestation trends at a regional level, analysing the annual forest changes and Above Ground Biomass loss of the tropical rainforests of the North Rupununi (Guyana), exploiting the computing and mapping capabilities and ancillary data archives of the Google Earth Engine platform. A validation analysis was performed to assess the CFAR-KerCuSum performance in detecting forest disturbances that occurred during 2018 and 2019. Forest change validation areas were manually digitised using photointerpretation of high-resolution multispectral PlanetScope optical imagery. Finally, to determine the effectiveness of this approach, the CFAR-KerCuSum was compared against other SAR-based methodologies previously used for forest monitoring, 1) Image Pairwise and 2) CUSUM-SAR, as well as the commonly used Landsat-based 3) GLAD annual forest change map.
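The sketch below shows a generic cumulative-sum change score for a single pixel's Sentinel-1 backscatter time series; it illustrates the CUSUM idea only and is not the exact CFAR-KerCuSum formulation, whose thresholding relies on the generalised Gaussian distribution of forest reference values.

```python
import numpy as np

def cusum_change(series_db: np.ndarray, threshold: float = 3.0) -> bool:
    """Flag a change when the cumulative sum of residuals exceeds a threshold."""
    residuals = series_db - series_db.mean()   # deviation from the pixel mean (dB)
    cusum = np.cumsum(residuals)
    # Range of the CUSUM curve, scaled by the residual spread, as a change score.
    score = (cusum.max() - cusum.min()) / (series_db.std() + 1e-9)
    return score > threshold
```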
The results obtained from the multi-method comparison analysis show that the proposed CFAR-KerCuSum provides the best performance, achieving User Accuracies of up to 91%, an Overall Accuracy of 77.6% (F-score = 74.4%) and low false alarm rates (FA < 5%). The co-polarised (VV) channel shows the greatest sensitivity to forest disturbance, providing OA and F-score values that were on average 1.20 and 1.89 points higher, respectively, than those obtained for the VH channel. The results of the annual forest change analysis evidence a significant increase in deforestation rates in the North Rupununi’s tropical forests as a result of the expansion of small-scale agriculture.
This work demonstrates the high capability of the proposed CFAR-KerCuSum for detecting natural and human forest disturbances in highly dynamic tropical forest areas. The adaptation and enhancement of the CUSUM-SAR algorithm into a simpler and more automatic version has resulted in a new EO forest monitoring tool that can be readily transferred to other tropical areas of the planet. This research has served to study both current deforestation trends and the main causes of forest change in the North Rupununi region. The enhanced user-friendly and automatic characteristics of this new approach, which does not require any supervision from the user, make it a relevant forest monitoring tool for the study of tropical forest conservation.
Acknowledgement: We acknowledge the support of the North Rupununi District Development Board (NRDDB) ground teams, the Cobra Collective Initiative and Planet in the forest change validation analyses.
References
[1] L. J. R. Nunes, C. I. R. Meireles, C. J. P. Gomes, and N. M. C. A. Ribeiro, “Forest contribution to climate change mitigation: Management oriented to carbon capture and storage,” Climate, vol. 8, no. 2. MDPI AG, p. 21, Feb. 01, 2020, doi: 10.3390/cli8020021.
[2] FAO, “The State of the World’s Forests 2020,” FAO and UNEP, May 2020. doi: 10.4060/CA8642EN.
[3] R. S. Defries, “Why Forest Monitoring Matters for People and the Planet,” in Global Forest Monitoring from Earth Observation, I., F. Achard and M. Hansen, Eds. Boca Raton, FL. United States: CRC Press/Taylor & Francis, 2013, pp. 1–14.
[4] G. R. van der Werf et al., “CO2 emissions from forest loss,” Nat. Geosci. 2009 211, vol. 2, no. 11, pp. 737–738, Nov. 2009, doi: 10.1038/ngeo671.
[5] S. S. Saatchi et al., “Benchmark map of forest carbon stocks in tropical regions across three continents,” Proc. Natl. Acad. Sci., vol. 108, no. 24, pp. 9899–9904, Jun. 2011, doi: 10.1073/PNAS.1019576108.
[6] N. L. Harris et al., “Global maps of twenty-first century forest carbon fluxes,” Nat. Clim. Chang., vol. 11, no. 3, pp. 234–240, Mar. 2021, doi: 10.1038/S41558-020-00976-6.
[7] UN-REDD, “Fact Sheet: About REDD+,” 2016. Accessed: Nov. 19, 2021. [Online]. Available: https://www.unredd.net/documents/redd-papers-and-publications-90/un-redd-publications-1191/fact-sheets/15279-fact-sheet-about-redd.html?path=redd-papers-and-publications-90/un-redd-publications-1191/fact-sheets.
[8] J. Ruiz-Ramos, A. Marino, C. Boardman, and J. Suarez, “Continuous forest monitoring using cumulative sums of sentinel-1 timeseries,” Remote Sens., vol. 12, no. 18, 2020, doi: 10.3390/RS12183061.
[9] A. Hardy et al., “Automatic detection of open and vegetated water bodies using Sentinel 1 to map African malaria vector mosquito breeding habitats,” Remote Sens., vol. 11, no. 5, 2019, doi: 10.3390/rs11050593.
[10] M. A. Tanase et al., “Synthetic aperture radar sensitivity to forest changes: A simulation-based study for the Romanian forests,” Sci. Total Environ., vol. 689, pp. 1104–1114, Nov. 2019, doi: 10.1016/j.scitotenv.2019.06.494.
[11] GFC, “Guyana Forestry Commission Guyana REDD+ Monitoring Reporting & Verification System (MRVS),” 2018. Accessed: Oct. 13, 2021. [Online]. Available: http://www.lcds.gov.gy/images/stories/Documents/Joint Concept Note %28JCN%29 2012.pdf.
[12] A. Roopsind, B. Sohngen, and J. Brandt, “Evidence that a national REDD+ program reduces tree cover loss and carbon emissions in a high forest cover, low deforestation country,” Proc. Natl. Acad. Sci., vol. 116, no. 49, pp. 24492–24499, Dec. 2019, doi: 10.1073/PNAS.1904027116.
Assessing the development of wildfire scars during a period of consecutive active fires and smoke overcast is a challenge. The study was conducted during nine months when Israel experienced massive pyro-terrorism attacks of more than 1100 fires from the Gaza Strip. The current project strives to develop and use an advanced Earth observation approach for accurate post-fire spatial and temporal assessment shortly after an event ends, while eliminating the influence of biomass burning smoke on the ground signal. To fulfil this goal, the Aerosol-Free Vegetation Index (AFRI), which has a meaningful advantage in penetrating an opaque atmosphere influenced by biomass burning smoke, was used. Moreover, under clear-sky conditions, the AFRI closely resembles the widely used Normalized Difference Vegetation Index (NDVI), and it retains the same level of index values under smoke. The relative differenced AFRI (RdAFRI) set of algorithms was implemented following the same procedure commonly used with the Relative differenced Normalized Burn Ratio (RdNBR). The algorithm was applied to 24 Sentinel-2 Level-2A images throughout the study period. When validated against ground observations, the RdAFRI-based algorithms produced an overall accuracy of 90%. Furthermore, the RdAFRI maps were smoother than the equivalent RdNBR maps, with noise levels two orders of magnitude lower than the latter. Consequently, applying the RdAFRI, it is possible to distinguish among four severity categories. However, due to different cloud cover on the two consecutive dates, an automatic determination of a threshold level was not possible. Therefore, two threshold levels were determined through visual inspection and manually assigned to each imaging date. The novel procedure enables calculating the spatio-temporal dynamics of the fire scars along with statistics of the burned vegetation species within the study area.
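A minimal sketch of the two index computations referred to above is given below: the AFRI from NIR and SWIR2 reflectance, and a relative-differenced index built in the same way as the RdNBR (the pre/post difference scaled by the square root of the pre-fire value). The Sentinel-2 band choice (B8A for NIR, B12 for SWIR2) and the scaling convention are assumptions, not necessarily the authors' exact formulation.

```python
import numpy as np

def afri(nir, swir2):
    """Aerosol-Free Vegetation Index (AFRI 2.1) from reflectance arrays."""
    return (nir - 0.5 * swir2) / (nir + 0.5 * swir2)

def relative_differenced(pre, post, eps=1e-3):
    """RdAFRI-style index: (pre - post) / sqrt(|pre|), guarded against division by ~0."""
    return (pre - post) / np.sqrt(np.maximum(np.abs(pre), eps))
```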
Humid tropical forests are one of Earth’s most crucial biomes for biodiversity and play an important role in the global carbon cycle [1]. Despite ongoing protection measures, large-scale natural and man-made forest disturbances (e.g. fires and logging activities) continue to cause immense carbon emissions in these areas [2]. The amount of released carbon varies with the type and intensity of forest disturbance [3]. It is therefore crucial to characterize key forest disturbances, e.g. fires and logging activities, in depth, to improve carbon emission estimates and to assist policy makers at large scales. Remote sensing data have proven their suitability for large-scale forest monitoring in various studies [4]. Historically, however, an in-depth characterization of forest disturbances was hampered by a lack of temporally dense, multi-sensor satellite data [5]. Newly available dense Sentinel-1 and Sentinel-2 time series allow, for the first time, the combination of radar- and optical-sensor-specific capabilities at a detailed temporal scale [6]. We therefore utilize dense optical and radar time series to characterize fire-related forest disturbances [7] and study the benefit of textural radar features for detecting and characterizing various forest disturbance features (e.g. partial logging, soil moisture content) [8].
For a more in-depth characterization of fire-related forest disturbances in the province of Riau (Indonesia), we separately mapped optical (Landsat-7, Landsat-8 and Sentinel-2) and radar (Sentinel-1) forest disturbances. We then combined the optical and radar forest disturbance maps with daily active fire alerts to classify the temporal relationship (predating, coinciding, postdating) between forest disturbances and active fire alerts. This resulted in seven archetypes of fire-related forest disturbances which reflect varying magnitudes of forest disturbance. These magnitudes are detected due to the sensor-specific sensitivities of optical (e.g., changes in tree foliage) and radar (e.g., changes in tree structure) data predating, coinciding with or postdating fires. Archetypes may indicate burn severities and can be associated with specific land management practices, such as slash-and-burn agriculture and salvage logging. Furthermore, the results suggest that a delayed or omitted forest disturbance detection in either the optical or radar signal may be related to their different sensitivities to changes in tree foliage or structure rather than to environmental influences (e.g. cloud coverage, soil moisture).
However, high post-disturbance soil moisture and remaining tree structures may cause stable radar backscatter values, resulting in omission errors when radar signals are used alone [5]. This leads to erroneous multi-sensor characterization of forest disturbances when combined with optical forest disturbance maps. To further exclude these uncertainties, we study the potential of Gray Level Co-occurrence Matrix (GLCM) textural measures of Sentinel-1 time series for different forest disturbance features (e.g. soil moisture, remaining structure). Preliminary results show that GLCM textural measures, i.e. Entropy and Angular Second Moment, are sensitive to these otherwise missed forest disturbances. To investigate the suitability of radar-based GLCM measures in a near-real-time environment, we calculate separability measures (e.g. the Jeffries-Matusita distance) between stable and disturbed forest for each time step of both the backscatter and the GLCM time series. The separability measures allow us to compare (i) the potential to detect the forest disturbance features and (ii) the timing of a forest disturbance detection based on radar backscatter values and radar textural measures, respectively.
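For reference, the Jeffries-Matusita separability between stable and disturbed forest samples at one time step could be computed as below, assuming class-wise Gaussian statistics of a single feature (backscatter or a GLCM texture measure); this is a generic sketch, not the authors' implementation.

```python
import numpy as np

def jeffries_matusita(x_stable, x_disturbed):
    """JM distance in [0, 2] between two univariate Gaussian class distributions."""
    m1, m2 = np.mean(x_stable), np.mean(x_disturbed)
    v1, v2 = np.var(x_stable), np.var(x_disturbed)
    # Bhattacharyya distance for two univariate Gaussians
    b = 0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2) \
        + 0.5 * np.log(((v1 + v2) / 2) / np.sqrt(v1 * v2))
    return 2.0 * (1.0 - np.exp(-b))
```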
References
1. Lynch, J.; Maslin, M.; Balzter, H.; Sweeting, M. Sustainability: Choose satellites to monitor deforestation. Nature 2013, 496, 293–294, doi:10.1038/496293a.
2. Van Der Werf, G.R.; Randerson, J.T.; Giglio, L.; Van Leeuwen, T.T.; Chen, Y.; Rogers, B.M.; Mu, M.; Van Marle, M.J.E.; Morton, D.C.; Collatz, G.J.; et al. Global fire emissions estimates during 1997-2016. Earth Syst. Sci. Data 2017, 9, 697–720, doi:10.5194/essd-9-697-2017.
3. Bär, A.; Michaletz, S.T.; Mayr, S. Fire effects on tree physiology. New Phytol. 2019, 223, 1728–1741, doi:10.1111/nph.15871.
4. De Sy, V.; Herold, M.; Achard, F.; Asner, G.P.; Held, A.; Kellndorfer, J.; Verbesselt, J. Synergies of multiple remote sensing data sources for REDD+ monitoring. Curr. Opin. Environ. Sustain. 2012, 4, 696–706, doi:10.1016/j.cosust.2012.09.013.
5. Watanabe, M.; Koyama, C.N.; Hayashi, M.; Nagatani, I.; Tadono, T.; Shimada, M. Refined algorithm for forest early warning system with ALOS-2/PALSAR-2 ScanSAR data in tropical forest regions. Remote Sens. Environ. 2021, 265, 112643, doi:10.1016/j.rse.2021.112643.
6. Reiche, J.; Mullissa, A.; Slagter, B.; Gou, Y.; Tsendbazar, N.-E.; Odongo-Braun, C.; Vollrath, A.; Weisse, M.J.; Stolle, F.; Pickens, A.; et al. Forest disturbance alerts for the Congo Basin using Sentinel-1. Environ. Res. Lett. 2021, 16, 024005, doi:10.1088/1748-9326/abd0a8.
7. Balling, J.; Verbesselt, J.; De Sy, V.; Herold, M.; Reiche, J. Exploring Archetypes of Tropical Fire-Related Forest Disturbances Based on Dense Optical and Radar Satellite Data and Active Fire Alerts. Forests 2021, 12, 456, doi:10.3390/f12040456.
8. Balling, J.; Verbesselt, J.; De Sy, V.; Herold, M.; Reiche, J. Investigating time series of textural features for an improved radar forest monitoring on a pixel-level utilizing Sentinel-1 data. In prep.
Accurate detection of forest disturbance in space and time is essential for understanding forest dynamics and thus assists in developing strategies for sustainable forest management and climate change mitigation. Landsat imagery has been widely used for forest disturbance monitoring. However, the low temporal resolution of Landsat imagery hinders timely detection of forest disturbance. Harmonized Landsat and Sentinel-2 (HLS) imagery has the advantage of providing more frequent satellite observations, but its added value for tropical forest disturbance monitoring has not yet been examined. To fill this gap, all available HLS imagery acquired from 2016 to 2019 was used to monitor forest disturbance at two tropical forest sites in Tanzania and Brazil. BFAST Monitor and a random forest algorithm were implemented to detect forest disturbance based on normalized difference moisture index (NDMI) and normalized difference vegetation index (NDVI) time series. 1200 pixels per site were selected to assess the accuracy of the detected forest disturbance. At the Tanzania site, the results showed that using the combined Landsat-8/OLI and Sentinel-2 data achieved the highest overall accuracy (84.5%), more than 3.5% higher than using only Landsat-8/OLI or Sentinel-2. Similarly, the overall accuracy of the combined Landsat-8/OLI and Sentinel-2 data was 95.5% at the Brazil site, at least 2% higher than the alternatives. In terms of temporal accuracy, a mean time lag of 2.0 months was achieved with both the combined data and Sentinel-2 only at the Tanzania site; this is at least one month shorter than when using Landsat-8/OLI only (3.3 months). At the Brazil site, the mean time lag of forest disturbance detection based on the combined data was 0.22 months, shorter by 0.50 and 0.15 months compared to using Landsat-8/OLI only (0.72 months) or Sentinel-2 only (0.37 months), respectively. Our results indicate that HLS data have the potential to improve forest disturbance detection in space and time, which is promising for accurate and timely forest disturbance detection, particularly in moist forests. Combining imagery acquired from Landsat and Sentinel-2 for forest disturbance monitoring will make temporally dense observations available and benefit disturbance monitoring systems, especially in identifying the time when a forest disturbance occurs. Early detection of forest disturbance will enable a rapid response to prevent further loss of forests. With denser data acquired from Landsat-9, Sentinel-1 and PlanetScope, more frequent observations become available and the accuracy of forest disturbance detection is expected to improve further.
Clear-cut logging identification and mapping via remote sensing data provides valuable information for forest management, especially when done regularly (weekly or monthly). Spaceborne synthetic aperture radar (SAR) offers data all year, day and night, under all weather conditions, which is an added value for that purpose. Man-made deforestation through clear-cutting practices is characterized in SAR images by a regular geometric pattern with reduced backscatter intensity and increased interferometric coherence magnitude. In this paper, a new approach based on the temporal analysis of the coherence and its correlation with the behaviour of clear-cuts is proposed. The temporal behaviour of the interferometric coherence is modelled with a logistic function whose lower and upper bounds are set based on representative coherence values for forest stands and clear-cut areas, respectively. Each pixel is classified as clear-cut whenever the logistic function has a positive rate and the normalized difference between the upper and lower bounds is greater than a given threshold (e.g. 0.3). The approach also enables the identification of the date of each clear-cut event, corresponding to the inflection point of the logistic curve that best fits the coherence time series. To avoid detecting clear-cuts outside forest areas, the results were masked using a forest mask retrieved from the Land Use and Land Cover (LULC) maps produced by the Directorate General for Territory (DGT). A time series of 23 Sentinel-1 SAR interferometric wide images acquired between 20 May and 6 October 2020 was used to assess the feasibility of the proposed approach for forest clear-cut mapping in a test area on the northwest coast of Portugal. Over a span of only four months, several small (100 to 500 m2) areas corresponding to forest thinning or other forest practices, 222 clear-cut areas larger than 0.1 ha, and 48 areas larger than 0.5 ha were identified. All these areas were validated by visual inspection of Sentinel-2 multispectral images using the Sentinel Hub Playground web platform.
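A minimal per-pixel sketch of the logistic fit and decision rule is given below; the 0.3 normalised-difference threshold follows the text, while the normalisation by the sum of the bounds, the starting values and the function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, lower, upper, rate, t0):
    """Logistic transition of coherence from a forest level to a clear-cut level."""
    return lower + (upper - lower) / (1.0 + np.exp(-rate * (t - t0)))

def classify_clear_cut(days, coherence, diff_threshold=0.3):
    """days: acquisition days since the first image; coherence: per-pixel time series."""
    p0 = [coherence.min(), coherence.max(), 0.1, days.mean()]
    (lower, upper, rate, t0), _ = curve_fit(logistic, days, coherence,
                                            p0=p0, maxfev=5000)
    # One plausible reading of the "normalized difference between the bounds".
    norm_diff = (upper - lower) / (upper + lower)
    is_clear_cut = (rate > 0) and (norm_diff > diff_threshold)
    return is_clear_cut, t0   # t0 approximates the clear-cut date (inflection point)
```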
ACKNOWLEDGMENTS
Fundação para a Ciência e Tecnologia (FCT) – project UID/GEO/50019/2020
Systematic monitoring of natural ecosystems is of crucial importance for assessing past and current conditions of vegetation dynamics, for early detection of disturbances, as well as for efficient management of forests in the context of climate change adaptation. Nowadays, satellite remote sensing datasets are being widely exploited for providing accurate spectral measurements of the Earth’s surface, including forest ecosystems, at various spatial and temporal scales. In addition, the constantly emerging new technologies and satellite systems combined with free data access policies implemented by leading space agencies have provided new perspectives to the scientific community and various end users for monitoring of forest dynamics. Indeed, advanced techniques for processing dense high-resolution satellite time series are being developed, with emerging cloud-based technologies supporting the establishment of new platforms and services.
The ARTEMIS project, funded by the Greek Secretariat for Research and Technology, delivers an Earth Observation based platform that provides high-quality products and services for assessing forest condition and health in a Mediterranean region. Among the aims of this project is to support forest productivity and economic growth, especially in areas where chestnut production has dramatically declined in the past decades. The online platform has been designed to seamlessly process Copernicus Sentinel-2 imagery, though it is not limited to it, in order to estimate a set of vegetation indices suitable for forest monitoring. The architecture of the platform incorporates a database and various modules that support automation of processing workflows, data storage, visualization and analysis. In addition, a Web Processing Service (WPS) supporting the execution of additional functionalities is incorporated via the OSGeo PyWPS module.
More specifically, the system can perform collection and preprocessing of satellite images either locally or remotely using the Google Earth Engine (GEE) platform. Utilizing GEE provides very fast access to a huge amount of geospatial data and high computing capacity. Preprocessing also includes the extraction of specific vegetation indices, broadband and narrowband, which are related to both plant physiology and structural variations of forests. Time series of these indices are generated and used to assess current vegetation conditions as well as “anomalies”, estimated as deviations from the long-term average. Consequently, the user of the platform will be able to monitor and detect potential gradual or abrupt changes in the area of interest. Parallel tasks comprise the extraction of features for semi-automated classification of forest vegetation in the study area by incorporating data from multiple sources (aerial imagery, terrestrial field measurements, existing spatial databases). In this context, state-of-the-art deep learning classification techniques suitable for multi-temporal/multi-band data, such as Transformers or UNET, are used and customized.
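As an illustration of the anomaly computation, the sketch below compares a current-period mean NDVI against a long-term average for the same months using the Earth Engine Python API; the collection ID is a standard Sentinel-2 surface-reflectance collection, while the area geometry, date ranges and cloud filter are placeholders, not the ARTEMIS configuration.

```python
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([22.9, 40.3, 23.3, 40.6])   # example area of interest

def ndvi(img):
    return img.normalizedDifference(["B8", "B4"]).rename("NDVI")

s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
        .filterBounds(aoi)
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
        .map(ndvi))

# Long-term summer baseline versus the current summer.
baseline = (s2.filter(ee.Filter.calendarRange(6, 8, "month"))
              .filterDate("2017-01-01", "2021-01-01")
              .mean())
current = s2.filterDate("2021-06-01", "2021-09-01").mean()

anomaly = current.subtract(baseline)   # negative values flag declining vegetation condition
```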
The processed raster and vector data products and statistics are stored in cloud storage and can then be visualized via a user-friendly and modern web-GIS platform that supports multiple users, roles and devices. The visualization of the output products is implemented with the Leaflet JavaScript library, which retrieves content (e.g. maps, images or statistical data to be displayed in graphs) via a custom REST API. The envisaged operation of the ARTEMIS platform involves two basic scenarios: the first refers to individual users monitoring the condition of a forest by interactively selecting a customized area of interest; the second involves decision making and planning of interventions by the relevant competent authorities, which will utilize the developed forest monitoring tools when required.
The Amazon rainforest has been under constant pressure from anthropogenic and climate stressors such as logging, extreme droughts, and fires, which directly compromises the future functioning of this globally important ecosystem. In the last 35 years, 13% of the Amazon rainforest has been deforested, and only 8% of that deforested land has been regrowing into secondary forest (MapBiomas 2021). There is therefore an urgent need to understand and monitor the condition and recovery of the Amazon rainforest. In this work, we focused on one of the hotspots of Amazon rainforest change: the state of Rondonia, Brazil (35% deforestation and 2% regrowth). We studied the variation of satellite radar and lidar observations over secondary forest of different ages covering the last 33 years.
The radar analysis was based on pre-processed, radiometrically terrain-corrected Sentinel-1 backscatter images. Different annual statistics were calculated from the backscatter time series and then stratified by 33 forest age classes. The study area was split into a 36x36 km grid that was used for spatial averaging of the backscatter statistics per forest age class. Both the Sentinel-1 images and the forest age map had 30 m pixel spacing. The spatial median of the annual quantile difference statistic was found to correlate with forest age, increasing linearly at young forest ages and then saturating at the oldest forest ages. However, we also observed grid cells with a linear relation across all forest ages, i.e., tiles with no saturation. To describe these two functional relations, we used both exponential and linear regression models. In total, 44% and 11% of all tiles were modelled with the exponential and the linear model, respectively, and had a coefficient of determination (R2) of 0.6 or larger. The remaining grid cells had a smaller R2 and were predominantly covered with intact forest, i.e., with a low number of secondary forest pixels. The change of the quantile difference over the 33-year range of forest age was between 0.3 dB and 1.8 dB for the exponentially modelled grid cells, and between 0.2 dB and 0.7 dB for the linearly modelled grid cells. It should be noted that this is the variability of the spatially averaged (median) quantile difference, in which the signal noise is also reduced.
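A minimal sketch of the quantile-difference statistic and the saturating fit against forest age is given below; the 5th/95th percentiles, the model form and the synthetic placeholder medians are assumptions consistent with the text, not the study's exact setup.

```python
import numpy as np
from scipy.optimize import curve_fit

def annual_quantile_difference(backscatter_db: np.ndarray) -> np.ndarray:
    """backscatter_db: (time, rows, cols) annual Sentinel-1 stack in dB."""
    return (np.nanpercentile(backscatter_db, 95, axis=0)
            - np.nanpercentile(backscatter_db, 5, axis=0))

def saturating_model(age, a, b, c):
    """Exponential saturation of the per-age median statistic with forest age."""
    return a - b * np.exp(-c * age)

ages = np.arange(1, 34, dtype=float)                 # 33 forest age classes
medians = 1.5 * (1 - np.exp(-0.15 * ages)) + 0.3     # placeholder per-age medians (dB)

params, _ = curve_fit(saturating_model, ages, medians, p0=[1.0, 1.0, 0.1])
r2 = 1 - np.sum((medians - saturating_model(ages, *params)) ** 2) \
        / np.sum((medians - medians.mean()) ** 2)
```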
The lidar analysis was based on novel satellite lidar data acquired by the GEDI and ICESat-2 missions. Different relative height metrics such as RH98 and RH100 were stratified by data acquisition and quality parameters as well as by 33 forest age classes. Furthermore, the relative height metrics were compared to airborne lidar forest heights, resulting in a root mean squared difference below 6 m for both missions and most of the data subsets. The medians of the relative height values, calculated over the whole study area, correlated with forest age. As they saturated at the oldest forest ages, the median relative forest heights were modelled with an exponential function, achieving an R2 of about 0.6 for both missions. The increase of GEDI and ICESat-2 relative heights over the 33-year range of forest ages was 8.6 m and 7.8 m, respectively. The difference between these two values is within their standard errors.
The sensitivity to forest age observed for the Sentinel-1 backscatter quantile difference as well as for the GEDI and ICESat-2 relative heights is encouraging for potentially monitoring forest growth and assessing the resilience of tropical forests. Nevertheless, further research is necessary to understand their relation to other environmental factors affecting the observed signals and growth, and how their uncertainty propagates to specific applications.
Acknowledgements
This work was supported by the Netherlands eScience Center under file number ASDI.2018.068 (the RETURN project) and by the Dutch Research Council under the file number STW.15839 (the Big-EO-Analytics project).
References:
Project MapBiomas - Collection 6 of Brazilian Land Cover & Use Map Series, accessed on 23.11.2021 through the link: https://mapbiomas.org/
In Germany, forests are increasingly affected by degradation phenomena. The consequences of the prolonged drought of recent years, and the additional weakening caused by pests and storm events, have induced large areas of calamity in spruce stands but also in deciduous forests. Remote sensing data are operationally used for monitoring the status and dynamics of forest stands. Although changes in tree vitality and the extent of damaged areas are well detectable from space, there is currently a lack of understanding of the role of forest site factors in interpreting and explaining the detected changes. However, unravelling the relationship between remotely sensed change signals and site factors is of crucial importance for risk assessment and for site-specific adaptation of future forest management.
In this study, we hypothesize that the changes detectable by optical remote sensing are related to topographic and forest site factors, such as plant-available soil water, in Thuringian forest areas. Therefore, we derive topographic indices, such as slope and aspect, from a digital elevation model. Further, detailed soil information was extracted from the comprehensive soil and site mapping system of East Germany. Recently, this database has been enriched with high-resolution geoinformation on physical and hydrological soil parameters (e.g. available water capacity) by means of digital soil mapping. Sentinel-2 data were collected (2016-2021) over the vegetation period for different study areas in the Thuringian Forest, the Thuringian Slate Mountains and the Southern Harz Mountains - all affected by drought in recent years. The data were downloaded from the ESA Sentinel Hub at Level 2, corresponding to bottom-of-atmosphere reflectance. We calculated common vegetation indices (e.g., NDVI, NDWI, NDREI) to retrieve the multitemporal vitality status of the forested area. Within the ESA software SNAP, we used the “Biophysical Processor” to retrieve Level-2B biophysical information (e.g. LAI, FCOVER). The Level-2A scene classification was used to limit the influence of clouds and shadows. To relate observed patterns to different tree species, we apply a tree species map that originated from a random forest classification based on Sentinel-2 imagery (2019-2020).
We developed a multitemporal change detection procedure to prevent false positive change signals associated with rapid succession on recently disturbed or deforested areas. Therefore, time-series analyses were carried out at pixel level. We applied an outlier correction based on thresholding and smoothing for low index values, which are most likely associated with undetected clouds in the masking process. We then identified the highest absolute change in index values as well as the corresponding duration for each pixel. As a result, we produced composite raster images, including the change in the vitality index or biophysical variable as well as the associated duration of the detected change. The results were then used as dependent variables in a regression analysis with forest site information.
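A minimal sketch of this pixel-level procedure is given below, assuming synthetic index values; the low-value threshold, the smoothing window and the choice to compare any earlier with any later observation are assumptions, not the operational settings.

```python
import numpy as np

def detect_change(index_ts, dates, low_threshold=0.2, window=3):
    """Largest absolute change in a vegetation index time series and its duration.

    index_ts : 1-D array of index values for one pixel
    dates    : matching array of acquisition dates (numpy datetime64)
    """
    ts = index_ts.astype(float).copy()

    # Outlier correction: implausibly low values are most likely undetected
    # clouds; replace them by the median of their valid neighbours.
    suspect = ts < low_threshold
    for i in np.where(suspect)[0]:
        lo, hi = max(0, i - window), min(len(ts), i + window + 1)
        valid = ts[lo:hi][~suspect[lo:hi]]
        if valid.size:
            ts[i] = np.median(valid)

    # Highest absolute change between any earlier and any later observation,
    # together with the duration over which it occurred.
    diff = ts[None, :] - ts[:, None]        # diff[i, j] = ts[j] - ts[i]
    i, j = np.unravel_index(np.argmax(np.abs(np.triu(diff))), diff.shape)
    duration_days = (dates[j] - dates[i]) / np.timedelta64(1, 'D')
    return diff[i, j], duration_days

# Hypothetical NDVI-like series for one pixel.
values = np.array([0.82, 0.80, 0.10, 0.78, 0.35, 0.33])
dates = np.array(['2019-05-01', '2019-07-01', '2019-08-01',
                  '2019-09-01', '2020-06-01', '2020-08-01'], dtype='datetime64[D]')
change, duration = detect_change(values, dates)
```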
Preliminary results show that both changes in vegetation indices and changes in biophysical variables are highly sensitive to forest site factors. We argue that, due to the strong change signal, the choice of vegetation index or biophysical variable becomes less important, even though species- and site-specific variations in sensitivity occurred. However, it should be noted that, in comparison to vegetation indices, Sentinel-2 derived biophysical parameters such as LAI and FCOVER provide quantitative information that can be better validated in the field. Particularly high explanatory potential is observed for the substrate and the available water capacity. Our findings confirmed that trees stocking on soils with low field capacity were mainly degraded due to the prolonged drought of recent years. We detect considerable differences between deciduous and coniferous tree species, which we mainly attribute to differences in drought sensitivity, rooting depth and adaptation strategies.
The initial conclusion can be drawn that the combination of Sentinel-2 change detection and forest site factors contributes to a better understanding of recent forest degradation. Our approach now offers the potential to analyze the spatio-temporal dynamics to clarify which spatial setting and combination of site factors was affected first and most. The integration of remote sensing with forest site data can be used to assess vulnerability to prolonged droughts, identify areas of risk and ultimately support decision making for successful forest restoration and adaptation.
In 2019, the European Commission published a communication highlighting the need to further protect forests. Priority activities included reducing the use of commodities and derived products related to deforestation, whilst strengthening cooperation across different actors.
In the context of the Green Deal, the European Parliament urged stepping up action to further protect forests. In October 2020, it called for a binding legal framework to be initiated. In November 2021, a first draft of the regulation was published on making available on the Union market, as well as exporting from the Union, certain commodities and products associated with deforestation and forest degradation.
During COP26, more than 100 countries committed to reduce and reverse deforestation by 2030.
So what can we do to help governments and regional and international organisations reach their objectives? One of the first steps is to monitor forest cover change. Some food companies, such as Nestlé and Ferrero, are already doing this with Starling.
Starling is a digital service available on OneAtlas that has been created with the Earthworm Foundation. It kicked off in June 2017 in Indonesia and Malaysia, where the development of palm oil plantations was driving deforestation. The platform now covers 22 countries around the world, representing seven million square kilometres. It has been designed to help users identify changes in forest cover and make informed decisions about where and how to act.
Airbus uses its technology and industrial know-how to process satellite images, including Copernicus Sentinel-2, and distributes Starling worldwide, while Earthworm provides on-the-ground expertise and demonstrates to different actors how they can make use of the platform to drive action.
While Starling was originally designed to help food companies reduce deforestation, it is now also a valuable and operational tool for governments, NGOs, and international and regional organisations.
STARLING DESCRIPTION:
Starling is a complete information system dedicated to sustainable forest management. The solution is based on a full cloud infrastructure and a set of modular “bricks” to ensure a fluid data workflow. It combines:
• A range of data derived from satellite imagery (basemap, monitoring and imagery layer) with public data and end-user proprietary data such as supply chain descriptions.
• A set of models and predictive rules, defined in close relationship with end users, allowing this information to be transformed into different levels of actionable analytics, in turn enabling users to monitor, assess and act on environmental changes.
More particularly, Starling offers:
A basemap that identifies the main land features, in particular discriminating natural forest (productive or conservation) and planted forest. More specifically, the basemap provides:
• The nature of the forest ecosystems, whereby the natural forest is either classified as a single class (today the case over tropical forests) or several classes (over temperate and boreal regions, natural forest is further split into Pine / Forest / Mixed classes).
• The capacity to step back into the past through a time series of up to 20 years of basemaps, which supports analysis of land cover evolution over time and is equally essential information for carbon accounting and for incentivising the transition towards more sustainable agriculture.
• Last but not least, the basemaps are complemented by a SPOT 6/7 1.5 m resolution backdrop imagery layer, updated annually, which provides additional insights on land cover and land cover change as well as evidencing issues relevant to the user.
A scalable, permanent forest cover change monitoring tool to alert users in a timely manner.
• Adapted to monitor different types of forest disturbance, i.e. deforestation, degradation, forest conversion
• Ability to adjust reporting scales to meet user needs and fit with the land cover change dynamics.
• Designed to evidence industrial and smallholder-driven deforestation.
Analytics and indicators, which provide:
• Data upload functionality allowing users to incorporate business-specific data into the platform, e.g. boundaries such as cadastral information, no-go areas or buffer zones, or supply chain data such as certified forest management units.
• Ability to obtain reports and exports of analytics derived from the cross-interpretation of Starling data with data overlays.
Web Portal, which provides:
• A user-friendly and intuitive platform to display maps & analytics, generate reports and to follow-up actions.
A robust scientific and technical methodology is at the heart of the platform.
Each image is processed by our internal tool (Overland) to derive, for each pixel, the biophysical parameters and variables that help to characterise cover or vegetation.
The advantage of the biophysical parameters is that all sensors can be analysed together in a time series image stack based on our in-house GeoCube technology.
The GeoCube, at the core of Starling, is designed to provide a storage infrastructure that is scalable, distributed, able to ingest pre-processed information from a variety of EO systems or external data generating components, and to deliver consolidated, fully aligned information.
The GeoCube solution is designed for the processing of additional heterogeneous data sources, e.g. the aggregation of data from the BIOMASS mission and of LiDAR data together with the already available optical and SAR data. Such a system will be capable of serving the scientific community as well as decision makers by giving access to a major playground in which to develop and test models on multiple themes such as carbon stocks, climate change, biodiversity assessments, etc.
Data upload features:
The digital solution developed has a data upload functionality allowing users to incorporate business-specific data into the platform, e.g. boundaries such as cadastral information, no go areas or buffer zones, or supply chain data such as certified forest management units as well as supply chain specifics:
- Supply chain objects (supplier lists, mill lists, sourcing boundary lists: concessions, plantation estates, smallholders, etc.)
- Connections between suppliers and mills, between boundaries and mills.
- Supply chain information can come from different sources: data of public origin, e.g. open data made available by Starling and accessible to all users via the portal, and private data, which is user-provided and strictly confidential.
To give a concrete example, the government of Ivory Coast used Starling to reduce the rate of deforestation, identify priority areas and follow action strategies in the country's Cavally Forest. Farmers were clearing land underneath the tree canopy to plant young cocoa plants that need shade. This is difficult to spot with traditional methods, but Starling has demonstrated the capacity to observe below-canopy vegetation using satellite radar data, making the tool ideal for monitoring cocoa-induced degradation and deforestation.
Overall, Starling demonstrates that a recently created solution can quickly become operational and be re-used, while reaching a wide variety of users and bringing everyone around the table. It also shows how satellite images from various sources can be combined with on-the-ground expertise to provide relevant, rapid and unbiased insight.
Over the last decades, forest fires have become an environmental problem with serious ecological, economic and social effects. The aim of this study is therefore to develop a methodology for burned area delimitation and subsequent fire severity assessment of wildfires that occurred in Spain between 2018 and 2020. As input data, this study is based on Sentinel-2 spectral indices, which benefit from spectral bands in the near-infrared (NIR) and short-wave infrared (SWIR) regions, allowing a clear distinction between burned and unburned areas, as well as between different degrees of vegetation burn severity. All possible combinations of Sentinel-2 bands applied to a spectral normalized difference index (SPI) were analyzed, along with the burn spectral indices most commonly used in remote sensing, such as the Burned Area Index for Sentinel-2 (BAIS2), the Mid-Infrared Burn Index (MIRBI), the Normalized Burn Ratio (NBR), the Relativized Burn Ratio (RBR) and the relative differenced Normalized Burn Ratio (RdNBR). In addition, in order to remove confusion between burned areas and other land cover types, the temporal difference between pre-fire and post-fire dates was obtained for each spectral index. The results were compared, for burned areas, with the Emergency Mapping Service (EMS) and the Galicia forest service and, for burn severity, with field points classified as in the Ruiz-Gallardo et al. (2004) study (null, low, moderate and high severity). The final statistical results show that the RdNBR spectral index provides the best burned area delimitation (12% commission error and 2% omission error), whereas the combination of the Normalized Difference Vegetation Index (NDVI) and the modified Normalized Burn Ratio (NBR2), used in areas with mixed and full vegetation respectively, provides the best results for vegetation burn severity assessment (kappa statistic of 0.83).
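For illustration, the core index calculations could be sketched as follows; the band choice follows the usual Sentinel-2 convention (B8A/B12), and the RdNBR scaling follows one common formulation, which is an assumption rather than necessarily the exact variant used in this study.

```python
import numpy as np

def nbr(nir, swir2):
    # Normalized Burn Ratio from NIR (e.g. B8A) and SWIR-2 (e.g. B12) reflectance.
    return (nir - swir2) / (nir + swir2)

def burn_indices(pre_nir, pre_swir2, post_nir, post_swir2):
    nbr_pre = nbr(pre_nir, pre_swir2)
    nbr_post = nbr(post_nir, post_swir2)

    # Temporal difference between pre-fire and post-fire dates.
    dnbr = nbr_pre - nbr_post

    # Relativized dNBR (one common formulation, with NBR expressed in [-1, 1]).
    rdnbr = dnbr / np.sqrt(np.maximum(np.abs(nbr_pre), 1e-3))
    return dnbr, rdnbr

# Hypothetical reflectance values for a single burned pixel.
dnbr, rdnbr = burn_indices(pre_nir=0.45, pre_swir2=0.15,
                           post_nir=0.20, post_swir2=0.30)
```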
Forest species classification is crucial for the sustainable management of forest ecosystems in terms of resource planning. Remote sensing technology has been widely used, both as an alternative to and in conjunction with field measurements, in vegetation species classification. Discriminating spectrally similar plant species constitutes a challenge in many remote sensing environmental applications. In this scope, today’s abundance of satellite imagery can be employed for this purpose and provide detailed spectral information over large time frames. In this study, spectral index time series were used to detect specific time frames where the species differ consistently. More specifically, Sentinel-2 multispectral imagery time series were used within the Google Earth Engine cloud-based environment for fir (Abies borisii-regis) and spruce (Pinus nigra) discrimination in Pertouli University Forest, Greece. Field measurements provided 31 samples for each of the spruce and fir homogeneous stands, and Sentinel-2 multi-temporal images (2017-2020) were employed for the calculation of spectral indices. The indices calculated were: Anthocyanin Reflectance Index (ARI1), Chlorophyll Red-Edge (CHLre), Modified Chlorophyll Absorption in Reflectance Index (MCARI), Modified Simple Ratio (MSI), Normalized Difference Infrared Index (NDII), Normalized Difference Vegetation Index (NDVI), and Pigment Specific Simple Ratio (PSSR). Furthermore, several statistics (mean, standard deviation, median, maximum value) were calculated for each class, along with the differences between them, to be used as thresholds for spruce and fir species discrimination. A visual comparison of the mean time series for each class revealed a period (mid-June) in which fir and spruce differ consistently, especially when the MCARI index is considered. To evaluate those thresholds, 37 new points for each class were extracted by photointerpretation. Similar processing steps were executed (time series building, cloud mask application, outlier removal, spectral index calculation) and each of the statistics-derived thresholds was applied. Finally, the accuracy for each statistic was calculated, and the maximum index value outperformed the mean, standard deviation and median, achieving 97.3% accuracy in spruce and fir discrimination.
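A hedged sketch of the thresholding idea follows (synthetic MCARI values for the mid-June window; the half-way threshold rule and all numbers are assumptions for illustration only):

```python
import numpy as np

# Hypothetical mid-June MCARI samples: one row per stand, one column per year.
mcari_fir = np.array([[0.21, 0.24, 0.22, 0.25],
                      [0.20, 0.23, 0.21, 0.24]])
mcari_spruce = np.array([[0.10, 0.12, 0.11, 0.13],
                         [0.09, 0.11, 0.12, 0.10]])

# Per-class statistic of the index (here: the maximum over the period).
max_fir = mcari_fir.max(axis=1)
max_spruce = mcari_spruce.max(axis=1)

# A simple threshold halfway between the class means of the maxima.
threshold = 0.5 * (max_fir.mean() + max_spruce.mean())

def classify(max_mcari):
    # Label a stand as 'fir' if its maximum MCARI exceeds the threshold.
    return np.where(max_mcari > threshold, 'fir', 'spruce')

print(classify(np.array([0.23, 0.12])))   # hypothetical validation samples
```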
Functional-Structural Plant (FSP) models are useful tools for understanding plant functioning and how plants react to changes in environmental factors. These models can give important insights into how the contribution of forests in the carbon cycle may change with a rapidly changing climate. However, the main constraints for building and using FSP models for forests are related to the large amounts of data needed on tree structure. An important component of FSP model development is thus to develop techniques to efficiently acquire 3D structures of trees. Light Detection and Ranging (LiDAR) could be an alternative non-destructive method to obtain structural information of trees for FSP modelling. However, this has largely been unexplored until now. We aim to investigate how LiDAR-derived tree traits can be used for tree FSP models by providing structural parameter inputs and metrics for validation.
A summary of tree parameters needed for FSP model development was made through a systematic literature search and compared to the LiDAR literature to get an overview of the possibilities and limitations. A total of 90 papers on FSP tree models were screened and 8 papers fulfilled all the selection criteria. The parameters measured in these 8 papers were summarized and 50 structural parameters were identified. The next step was to search the literature regarding the possibility of deriving these parameters with LiDAR and the accuracy of these methods. Next, to illustrate the case, two FSP models (a tropical forest model and a Scots pine model) were chosen and parameterized with LiDAR-derived tree parameters. Tree characteristics were derived from Terrestrial Laser Scanning (TLS) data by using an algorithm to remove the leaves and then fitting Quantitative Structure Models (QSMs) to the woody parts. A total of 37 tropical trees and 20 Scots pines were used for the study. The LiDAR-derived tree parameters were then used as inputs for the FSP models and compared to the FSP models with the original, manually derived tree characteristics.
From the literature, it was found that there is an overlap between the structural data needs of FSP models and the parameters that are derivable with LiDAR. From a visual inspection of the FSP models with LiDAR-derived tree traits, it was also found that LiDAR can be a good alternative tool for acquiring structural tree characteristics. However, the limitations of LiDAR are also important to keep in mind. During additional manual removal of leaves, it was found that the details of smaller branches are lost, as they had to be removed because of noise from leaves and ghosting. The benefit of using LiDAR data is that the process can be automatized and scaled up, making the analysis more robust.
With the increasing popularity of LiDAR data, there is a largely untapped data source that can benefit FSP model development. The findings of this study can be an example case for future research and encourage more widespread use of LiDAR data in tree FSP modelling. Limitations found during the study can be examined to identify focus points for future research, not only for FSP modelling but also for other models that need structural tree parameters.
The climate crisis leads to a change in forest tree species distributions, most likely favouring heat- and drought-tolerant species. Various studies have already analysed the severity, speed and direction of shifts in species distributions. In general, species will migrate poleward and to higher elevations with conditions similar to the current climate, where available. As a result, many forest sites throughout Europe are becoming unsuitable for drought-sensitive species, leading to an increase of adapted tree species, e.g. from southern Europe. As forestry is based on long-term cycles, this development will negatively impact forest condition, forest cover and silviculture.
Especially areas with reduced silvicultural activities or strict silvicultural requirements, such as Natura 2000 sites, are prone to a long forest conversion process towards more suitable tree species. In these areas, forestry is oriented towards the tree species of natural forest habitat types, with facilitation of those and prohibition of alien tree species such as the drought-tolerant and silviculturally important Quercus rubra or Pseudotsuga menziesii. Additionally, Article 6 of the Habitats Directive dictates a concept of “no deterioration”, which implies a static conservation of the prevalent flora and fauna. In combination with preserving the current tree species of natural forest habitat types, this will lead to severe conflicts with respect to adaptation to climate change.
In this study we aim at analysing the changes of tree species ranges in Europe, how severely these changes will impact current natural forest habitat types of Natura 2000 sites, and which new tree species might be present in the long term. For that purpose, we will select appropriate bio-climatic variables from a total of 25 bio-climatic variables using principal component analysis and ecological knowledge. These variables are available at a spatial resolution of 3 km and are derived from the newest EURO-CORDEX climate simulations until 2100 for IPCC's representative concentration pathways 2.6, 4.5 and 8.5. In addition, the JRC tree species and JRC soil type classifications with 1 km spatial resolution are resampled to 3 km to match the spatial resolution of the bio-climatic variables. These data serve as input for a generalised linear model. The model outcome, potential tree species range changes, will be compared to the current definition of natural forest habitat types of Natura 2000 sites, which allows conclusions about their probability of occurrence within established regions.
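As a hedged illustration of such a generalised linear model (hypothetical variable names and synthetic data, not the project's actual predictors), a logistic GLM of species presence against bio-climatic variables could be set up as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical training table: one row per 3 km cell with current climate,
# a soil dummy and the presence/absence of a given tree species.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'bio1_temp': rng.normal(9, 3, 500),        # e.g. annual mean temperature
    'bio12_prec': rng.normal(800, 200, 500),   # e.g. annual precipitation
    'soil_sandy': rng.integers(0, 2, 500),
})
# Synthetic presence signal, for illustration only.
logit = -2 + 0.003 * df['bio12_prec'] - 0.2 * df['bio1_temp']
df['present'] = rng.random(500) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[['bio1_temp', 'bio12_prec', 'soil_sandy']])
model = sm.GLM(df['present'].astype(float), X,
               family=sm.families.Binomial()).fit()

# Probability of occurrence under a hypothetical future climate scenario.
X_future = X.copy()
X_future['bio1_temp'] += 2.5      # assumed warming by 2100
p_future = model.predict(X_future)
```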
Tree species distribution modelling is useful in the context of anticipatory silviculture, facilitates conservation planning and provides valuable information for policy makers. Here we want to present our preliminary results and discuss them with the scientific community. This analysis is part of a Helmholtz knowledge transfer project “forest condition monitor”.
Central European forests face major challenges in light of rapidly changing climatic conditions. Drought periods and heat waves in particular promote ecological disturbances. Until recently, wildfires played only a minor role in Central European temperate forests. However, an increase in both the number and size of wildfires has been observed in the summers since 2018. With a higher frequency of extreme summers expected, this trend is likely to continue. Fire is further projected to emerge in new regions, posing a threat to non-adapted local ecosystems and societies.
Timely preventative management actions are, therefore, necessary to address this development.
Spatially explicit data on wildfire hazard and risk are scarce in Germany. Fire behavior and hazard modeling can identify vulnerable locations. The resulting information is valuable to land managers, as it supports their safety planning efforts and the effective allocation of resources. Unlike countries with a substantial fire history, Germany lacks detailed surface and canopy fuel data. This gap needs to be filled in order to facilitate accurate fire spread modeling. Tree height and cover are common and accessible data products in forest remote sensing. Crown base height and bulk density, on the contrary, are less widespread outside the fire modeling context. Their direct derivation from optical remote sensing data is not feasible. Yet, extensive field measurements and sensors with the ability to penetrate the top canopy can improve forest structure mapping.
The aim of this study is to characterize canopy fuels at high spatial detail. Plot-level canopy structure is computed using tree lists from National Forest Inventory (NFI) sites (n=56,000). Species-specific allometric equations relate measurements to structural characteristics. A nearest neighbor imputation analysis is carried out, matching NFI plots with forested pixels. Multi-temporal Sentinel-1 and -2 data, LiDAR-based vertical structure metrics and other biophysical variables serve as predictors.
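A minimal sketch of the imputation step is shown below, assuming hypothetical predictor and fuel matrices in place of the actual NFI data:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical predictor matrix for NFI plots (rows), built from Sentinel-1/-2
# temporal statistics, LiDAR vertical-structure metrics and other covariates.
plot_predictors = np.random.rand(56000, 12)
# Canopy fuel attributes computed per plot from tree lists and allometries,
# e.g. canopy base height (m) and canopy bulk density (kg/m3).
plot_fuels = np.random.rand(56000, 2)

# The same predictors computed for every forested pixel.
pixel_predictors = np.random.rand(100000, 12)

scaler = StandardScaler().fit(plot_predictors)
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(plot_predictors))

# Impute each pixel with the fuel attributes of its most similar NFI plot.
_, idx = nn.kneighbors(scaler.transform(pixel_predictors))
pixel_fuels = plot_fuels[idx[:, 0]]
```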
The resulting product expands NFI point measurements to spatially continuous data sets. These will form the basis for future fire modeling efforts at local, regional or national scale. Further, they may support carbon stock calculation, biomass estimation or other ecological applications.
Area-wide information on Vegetation Height (VH) has been recognised as being of great value for many applications such as large-scale analysis and evaluation of various forest functions (carbon stock, timber production, biodiversity, or protection against natural hazards). This is why the Swiss National Forest Inventory (NFI) generates and provides countrywide VH maps using aerial image-based point clouds (Ginzler and Hobi, 2015). The high spatial resolution of the maps (0.5 m) and their regular update (every six years) enable fine-scale structural analyses in the forest. However, the updating period of six years limits certain applications in which the time factor plays an important role, such as disturbance and change analyses. Lang et al. (2019) showed that it is possible to map VH annually with Sentinel-2 (S2) data. They modelled the mean VH at a spatial resolution of 10 m with a deep Convolutional Neural Network (CNN) from S2 data acquired within a single year. Becker et al. (2021) extended this approach to predict multiple forest structure variables at the same time. This approach was applied to estimate the mean and maximum VH within 10 m × 10 m from S2 data using the latest high-resolution VH map of the Swiss NFI as reference training data. Annual countrywide VH maps were computed for the years 2017-2020, which are evaluated as a valuable complementary product to the high-resolution NFI VH maps.
The potential of the S2-based VH maps for application within the framework of the Swiss NFI was evaluated through an accuracy assessment with two independent reference datasets. The two reference datasets comprised VH 1) from visual aerial stereo-image height measurements at NFI plots and 2) derived from a countrywide Airborne Laser Scanning (ALS) campaign. Moreover, interannual differences arising from the expected annual variation in the S2 input data were analysed for 2017-2020.
The evaluation revealed that the S2-based maps yielded VH values similar to the reference data. At large scales in particular, the VH patterns correspond well to the ALS-based VH map. However, fine structural details are missing due to an observed smoothing of the height values. This spatial blurring is explained by the use of a CNN, which models texture context over pixel neighbourhoods (Lang et al., 2019). Overall, the interannual differences proved to be small, enabling change analyses between two years, such as the detection of disturbed forest areas after a storm event. Difficulties in analysing changes were found in areas with more complex terrain, such as the Alps, due to the influence of shadows in the S2 data. The preliminary results confirm that annual countrywide VH maps based on S2 data have great potential to serve as a valuable complementary source of vegetation information to the existing high-resolution VH maps of the Swiss NFI.
Ginzler, C., Hobi, M.L., 2015. Countrywide stereo-image matching for updating digital surface models in the framework of the Swiss National Forest Inventory. Remote Sensing, 7 (4), 4343-4370. https://doi.org/10.3390/rs70404343
Lang, N., Schindler, K., Wegner, J.D., 2019. Country-wide high-resolution vegetation height mapping with Sentinel-2. Remote Sens. Environ. 233, 111347. https://doi.org/10.1016/j.rse.2019.111347
Becker, A., Russo, S., Puliti, S., Lang, N., Schindler, K., Wegner, J.D., 2021. Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning. Under review
Since 2018, Germany has undergone considerable forest damage due to storms, extreme drought, and insect pests, especially bark beetle infestations. According to a national survey by the Federal Ministry of Food and Agriculture (BMEL), the volume of damaged wood in the past three years amounts to 171 million m³, with a damaged area of 277,000 ha to be reforested [1]. Crisis management of forest damage and the related decisions of politics, the economy and society depend on precise and fast information on affected areas, the amount of damage and a financial evaluation. This task is to be implemented within the project "Remote sensing based national forest damage monitoring system".
In this project, an international consortium consisting of seven partners from the fields of forestry, science and technology is developing a monitoring system based on Copernicus Sentinel data for the detection of forest damage in Germany. Forest change detection algorithms face a number of challenges, such as robustness to outliers, dynamic adaptation to annual and spatial variations in phenology, irregular data availability and high sensitivity to forest damage. To account for both intra- and inter-annual variations associated with shifts in phenology due to different climatic conditions, we apply a near real-time change detection method on Sentinel-2 data based on stochastic modelling, combining a structural time series model with the Kalman filter [2]. Thus, the model is dynamically adapted to phenological deviations, which increases the ability to separate vital from damaged forest areas. In addition, the separability of different causes of damage will be investigated. For the rapid detection of storm damage, the potential of Sentinel-1 SAR data will be analysed and operational procedures will be developed [3].
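As an illustrative sketch of the underlying idea, not the operational implementation of [2], a structural time series with a local level and an annual cycle can be fitted with the Kalman filter and used to flag observations that fall outside the prediction interval; the synthetic series, the model components and the flagging rule below are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical, regularly resampled NDVI-like series for one forest pixel
# (in practice observations are irregular and cloud-affected).
dates = pd.date_range('2017-01-01', '2021-12-31', freq='10D')
t = np.arange(len(dates))
ndvi = 0.7 + 0.15 * np.sin(2 * np.pi * t / 36.5) + np.random.normal(0, 0.02, len(t))
series = pd.Series(ndvi, index=dates)

# Structural time series: local level plus an annual cycle,
# estimated with the Kalman filter.
model = sm.tsa.UnobservedComponents(series, level='local level',
                                    freq_seasonal=[{'period': 36.5, 'harmonics': 2}])
fit = model.fit(disp=False)

# One-step-ahead prediction for a new acquisition; a value well below the
# lower prediction bound would flag a potential forest damage candidate.
forecast = fit.get_forecast(steps=1)
lower = forecast.conf_int().iloc[0, 0]
new_obs = 0.35                      # hypothetical post-disturbance observation
is_damage_candidate = new_obs < lower
```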
The method is tested and optimised in selected study areas in Germany, covering different forest types, forest structures and damage events. Reference and validation data of historical and current damage events are provided by the participating project partners in Saxony, Bavaria, Lower Saxony, and Baden-Wuerttemberg.
The presentation gives an overview of the current state of methodological developments of the Sentinel-2 time series analysis and storm damage detection with Sentinel-1 SAR data. Beyond that, first results of damage detection from the study areas will be presented.
[1] https://www.bmel.de/DE/themen/wald/wald-in-deutschland/wald-trockenheit-klimawandel.html
[2] M.Puhm et al. (2020) A Near Real-Time Method for Forest Change Detection Based on a Structural Time Series Model and the Kalman Filter
[3] M.Rüetschi et al. (2019) Rapid Detection of Windthrows Using Sentinel-1 C-band SAR data
A severe dieback of Central European forests has occurred since 2018 due to the extremely hot and dry weather conditions in 2018 and 2019 and the interlinked heavy bark beetle infestations throughout this period. The timing and intensity of disturbances vary spatially and are, in addition to temperature and precipitation, influenced by environmental factors such as elevation, slope or soils. A better understanding of the drivers of forest dieback is required to mitigate the effects of extreme weather conditions on forests in the future. This understanding can be supported by annual tree fraction maps that approximate forest cover fractions as continuous gradients. Based on this information, as e.g. derived from regression-based unmixing, both gradual trends and abrupt disturbances can be quantified between years and hence related to drivers of degradation (e.g. Senf et al. 2020, Remote Sensing of Environment, 240, 111691).
Tree phenology differs by species, latitude, elevation and other spatial factors. Therefore, the full growing season needs to be integrated into mapping efforts. Spectral-temporal metrics (STM) are derived from pixel-wise statistics of reflectance bands in a pre-defined period. They capture phenological differences and allow for consistent mapping of large areas. STM have been shown to work as input for regression-based unmixing approaches (Schug et al. 2020, Remote Sensing of Environment, 246, 111810). However, previous works only used STM of a given year to create regression models and derive fractional cover maps for that same year, i.e., multi-annual models with temporal transferability have not yet been closely investigated.
In order to map gradual changes in forest cover for northern Germany, we used a data cube including all Sentinel-2 images acquired between 2017 and 2020. We applied a Neural Network regression with synthetically mixed training data to estimate annual forest fractions. Synthetically mixed training data has proven highly effective for quantifying land cover fractions (Okujeni et al. 2013, Remote Sensing of Environment, 137, 184-197). For this approach, pure endmember pixels were collected with sets of annual STM (25%, 50%, 75% and 90% quantiles of Sentinel-2 reflectance bands). The STM of the pure pixels were then linearly mixed into mixed STM “spectra” with known mixing fractions as labels. Finally, we combined mixed STM from multiple years to train a temporally generalized Neural Network regression model that estimates the annual forest fraction for all years with a single model.
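The synthetic mixing step can be sketched as follows (random endmember values and feature dimensions are placeholders; the actual STM and the neural-network training are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Pure endmember pixels: annual spectral-temporal metrics (e.g. 4 quantiles
# of 10 Sentinel-2 bands = 40 features per pixel) for forest and non-forest.
stm_forest = rng.random((200, 40))
stm_nonforest = rng.random((200, 40))

def synthetic_mixtures(n_samples=5000):
    """Linearly mix forest and non-forest STM with known forest fractions."""
    X, y = [], []
    for _ in range(n_samples):
        f = rng.random()                              # forest fraction in [0, 1]
        forest = stm_forest[rng.integers(len(stm_forest))]
        other = stm_nonforest[rng.integers(len(stm_nonforest))]
        X.append(f * forest + (1 - f) * other)        # mixed STM "spectrum"
        y.append(f)                                   # fraction label
    return np.array(X), np.array(y)

# Mixtures from several years would be concatenated here to train a single,
# temporally generalized regression model (e.g. a small neural network).
X_train, y_train = synthetic_mixtures()
```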
As a result, we mapped annual forest fractions for northern Germany for the 2017-2020 period. The validation results show that forest fractions can be mapped with fairly low errors using temporally generalized models (mean absolute error < 20%), providing information beyond the results of discrete image classification. Degradation and re-growth events were detected in the annual forest fraction time series over the whole study area. This included the detection of severe bark beetle infestations, which, however, were partly misinterpreted after full dieback. In some areas, fraction estimates also showed lower accuracies due to the sparse temporal density of the data cube. Our study highlights the strength of regression-based unmixing with a single model of annual Sentinel-2 STM and discusses remaining challenges and next steps for multi-annual, large-area forest mapping.
Digitization affects everything and offers many new possibilities. In principle, digitization is a simple technical transformation of physical properties into electrical signals. This transformation improves the capability of storing and distributing these signals or data, which is also an important task in the field of remote sensing. Since the opening of the Landsat archive in 2008 and the start of ESA's Copernicus missions with free data policies, this task has become increasingly important, and digitization is the solution.
However, there is a gap between possibilities and potential. Forests are complex ecosystems, highlighting the need to connect experts in every field related to forestry, including remote sensing, for a deeper understanding of these biomes. Technological progress can support this, e.g. by adapting well-known processes from business intelligence and data warehousing to a centralized data center, realized for instance by Google Earth Engine (GEE), a global online analytical processing cube. However, using these platforms is sometimes not a solution, especially when proprietary data or workflows have to be implemented. This is the case, for example, for forest inventories, which are directly linked to the carbon stock estimations required under the Kyoto Protocol and contractual agreements: these inventory tracts must not be publicly known, to prevent manipulation of the stock. Other examples are the use of project-specific measurements or of proprietary products such as detailed land classifications, geophysical or biological measurements, and drone data.
ESA's Copernicus missions offer a chance to connect individual and regional projects to a large variety of free and open satellite data with unprecedented temporal and spatial resolution. Furthermore, the sensors cover a large range of the electromagnetic spectrum, which can be used for gaining information directly or by correlating it with additional data. Here, passive optical high-resolution products from the Sentinel-2 mission are used.
Here, we want to highlight the process of deriving a very detailed land cover classification in a Central European biosphere reserve (Rhoen) by exploiting the spectral information of the Sentinel-2 mission in the framework of Digital Earth Australia's Open Data Cube. This is done by 1) semi-automatic, multi-temporal, parallel feature extraction, 2) applying a machine learning model, and 3) evaluating accuracies through repeated, parallel runs of the applied machine learning model.
Remote sensing has become a most versatile technique for information retrieval. There is neither one sensor nor one platform for all conceivable applications, but a multitude of possible sensing systems for each application. The sensors differ in wavelength (visible light, infrared radiation, microwaves), illumination (active or passive) and recording geometry (point cloud, central projection, parallel projection, distance projection). The choice of platforms ranges from fixed cameras mounted on buildings, through unmanned aerial vehicles (UAV), airplanes and helicopters, to satellites and space stations. The two basic distinctions resulting from the choice of platform are the distance to the observed object and the repetition rate, i.e., the temporal interval in which the same image acquisition can be repeated in order to detect changes. If one wants to make use of this wide range of data in all its details, advanced approaches to data fusion and automatic interpretation are essential. Machine learning methods have been established for this purpose, not only in the remote sensing context, but they in turn require extremely large training datasets to learn the syntactic patterns and link them to their semantic meaning. Such training datasets are still rare in the remote sensing domain, since the reference data needed for the unambiguous assignment of a meaning, in the form of so-called labels, are mostly missing. Their acquisition is extremely time- and cost-intensive and exceeds the usual budget of a remote sensing project.
The project "Wald5Dplus" – forest mapped in five dimensions plus labels – is supported by the German Space Agency within the German Aerospace Center (DLR) with funds from the German Federal Ministry for Economic Affairs and Energy. The goal is to create a syntactic training dataset from multi-temporal Sentinel-1 & -2 satellite images and to assign semantic labels to the individual elements, which were been derived from aerial surveys. The satellite missions Sentinel-1 (C-Band Synthetic Aperture Radar) and Sentinel-2 (Multispectral Imager) are part of the Copernicus program of the European Commission and the European Space Agency. They provide (in the case of Sentinel-2, unfortunately, weather-dependent) weekly images in a 10 m grid of the whole of Europe, which are freely available to any potential user. In one year, about sixty images per satellite are collected. In the context of this project, the single measurements over three selected forest areas in north-south direction (first dimension), in east-west direction (second dimension), polarimetrically by Sentinel-1 (third dimension), spectrally by Sentinel-2 (fourth dimension) over time (fifth dimension) are summarized in one Analysis Ready Data Cube and provided together with semantic labels ("plus"). The labels originate from aerial surveys of the test areas with aircraft- or UAV-borne laser scanners and multispectral cameras and therefore, have a very high spatial resolution. The evaluation of the point clouds was carried out with especially developed, patented algorithms and provides the required semantic labels, which are then aggregated to the spatial grid of the satellite data. Based on the data cube, machine learning methods are pre-trained to be used in other projects. The complete package will be made available as a benchmark dataset via the CODE-DE platform – integrated in the ML4Earth – project to all interested scientists worldwide free of charge.
At the end of the project period, Analysis Ready Data Cubes with weekly recordings of Sentinel-1 and Sentinel-2 over a period of two years will be available for three study areas in the Bavarian Forest National Park, in the State Arboretum "World Forest Freising" and in the Steiger Forest. In order to reduce the storage demand and to allow easier interpretation with standard software, the images are polarimetrically, spectrally and temporally fused on hypercomplex bases into one consistent ARD cube [2]. In addition, each image element will be associated with typical forest parameters such as the number of trees, the proportion of deciduous or coniferous trees, mean canopy height, crown volume, etc. [3]. This data set is analysed in detail using methods of multivariate statistics [1] and machine learning in order to optimise the data processing on the one hand and to enable the transfer of the learned knowledge to other areas on the other. All results will also be published in open-access international journals. The data set, including pre-trained algorithms, will henceforth serve as a benchmark against which new developments in the field of artificial intelligence can be checked. In the optimal case, a sufficiently validated AI algorithm can be offered that is able to estimate the introduced forest parameters solely based on annual time series from the Sentinel-1 and -2 missions.
This benchmark dataset is designed to consistently rank AI algorithms in the future. They are all trained on the same Analysis Ready Data Cube with the same labels and predict the forest parameters for the same syntactic signatures. Then, only the accuracy of the assignment and the performance of the algorithm decide the respective rank on the list. In this way, the "state of the art" can be evaluated objectively for the first time. For example, one could select the optimal algorithm from the list of possible candidates in order to carry out a thematically high-resolution forest mapping based on the freely accessible data of Sentinel-1 and -2. Unexpected changes such as forest damage due to diseases, pest infestations or storms could also be detected within a relatively short time in order to take countermeasures that prevent further spread and thus greater financial damage. Methodologically, the benchmark dataset provides the optimal starting point both for the validation of already published algorithms (e.g., data fusion on hypercomplex bases [2] or thematic interpretation via histogram classification [4]) and for the development of completely new, or the adaptation of existing, machine learning algorithms (Random Forest, Convolutional Networks, Support Vector Machines, etc.). Though their application focus lies on the interpretation of remote sensing data over forests, the methodological benefit of this benchmark data set is expected to go far beyond this single application.
Literature
[1] Hauser, S. & Schmitt, A. (2021): Glacier Retreat in Iceland Mapped from Space: Time Series Analysis of Geodata from 1941 to 2018. PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science. DOI: 10.1007/s41064-021-00139-y.
[2] Schmitt, A.; Wendleder, A.; Kleynmans, R.; Hell, M.; Roth, A. & Hinz, S. (2020): Multi-Source and Multi-Temporal Image Fusion on Hypercomplex Bases. Remote Sens. 2020, 12, 943. (https://www.mdpi.com/2072-4292/12/6/943)
[3] Krzystek, P.; Serebryanyk A.; Schnörr, Cl.; Cervenka, J. & Heurich, M. (2020): Large-scale mapping of tree species and dead trees in Šumava National Park and Bavarian Forest National Park using lidar and multispectral imagery. Remote Sensing 2020, 12(4), 66.1. (https://doi.org/10.3390/rs12040661)
[4] Schmitt, A.; Sieg, T.; Wurm, M. & Taubenböck, H. (2018): Investigation on the separability of slums by multi-aspect TerraSAR-X dual-co-polarized high resolution spotlight images based on the multi-scale evaluation of local distributions, International Journal of Applied Earth Observation and Geoinformation, Volume 64, February 2018, Pages 181-198, ISSN 0303-2434. (https://doi.org/10.1016/j.jag.2017.09.006)
Canopy height is a crucial indicator for estimating biomass and carbon stocks and can be monitored by forest inventories at plot scale. However, traditional inventories are labor-intensive, time-consuming and challenging to implement in remote areas, and they often ignore trees outside forests. Airborne LiDAR data are currently the most reliable way to retrieve canopy height for individual trees at country scale, but the high costs limit their large-scale application on a repeated basis. Consequently, most aerial LiDAR data are collected patch-wise over different places and years. Existing large-scale forest canopy height maps are generated using GEDI and Landsat data and are not suited to assessing canopy heights at tree level or trees outside forests, since these objects are not discernible in lower-resolution satellite images.
Here we introduce our work aiming towards a continental-scale estimation of canopy height at the level of individual trees. We make use of novel deep learning techniques to generate annual canopy height maps for Europe at 1 m spatial resolution using PlanetScope imagery. Our model was trained and validated with nearly one million km² of aerial LiDAR canopy height data collected from various European landscapes, including forest, farmland, grassland and urban areas, and covering different forest types such as shrubland, coniferous forest and broadleaf forest. It can serve as a pre-trained base model for transfer learning to other landscapes or continents where little aerial LiDAR reference data is available. We show examples and results of transferring the model to tropical and semi-arid regions, and further demonstrate the possibility of extending our research towards the global scale with a temporal dimension. Our canopy height maps give new insights into tree-dominated ecosystems, especially for areas with many non-forest landscapes but considerable tree cover, such as cropland, desert and savanna. Further assessments and analyses, such as distinguishing different tree types and possibly their biomass, can be implemented based on our results.
Tropical forests play an essential role in global carbon storage. Characterizing tropical forest regrowth provides important information for understanding forest dynamics, to support sustainable forest management and climate change mitigation strategies. However, the effect of environmental and human factors on tropical forest regrowth has rarely been investigated. Here, we aim to analyze how selected environmental factors (e.g. years since disturbance, climatic water deficit and fire frequency) and human factors (distance to roads, distance to settlements) affect forest regrowth, represented by proxies such as tree height, aboveground biomass and tree cover. Based on the secondary forest age map of Brazil (Silva Junior et al., 2020), 3060 pixels were sampled in different biomes of Brazil. We interpreted the years since disturbance for the sampled pixels based on the normalized difference vegetation index (NDVI) time series trajectories and the true color composites of Landsat images acquired from 1984 to 2019. Elevation, slope, climatic water deficit, total phosphorus content of soil, total nitrogen content of soil, surrounding tree cover area, distance to roads, distance to settlements and fire frequency were imported and analyzed through Google Earth Engine (GEE). Distance to roads and distance to settlements were calculated using the Near function of ArcGIS 10.6.1. Tree height, AGB and tree cover were extracted from the global forest canopy height map (Potapov et al., 2021), the global forest above-ground biomass map (Santoro et al., 2020) and the global 30 m Landsat Tree Canopy map (Sexton et al., 2013), respectively. We applied mixed-effects models to examine the effects of the selected environmental and human factors on height, aboveground biomass and tree cover. We found that the years since disturbance interpreted from satellite time series are critical for characterizing forest regrowth. Our preliminary results indicate that human factors influence forest regrowth, suggesting that taking human factors into consideration is important when developing tropical forest restoration strategies. Our ongoing results also highlight that remotely sensed products open up new opportunities for understanding forest dynamics, and that integrating remote sensing data from multiple sources is promising for understanding how different factors affect tropical forest regrowth and for assisting in the development of effective intervention strategies.
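A hedged sketch of such a mixed-effects model, assuming synthetic data and a random intercept per biome (the grouping structure and variable names are our assumptions for illustration), could look as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sample table: one row per sampled secondary-forest pixel.
rng = np.random.default_rng(1)
n = 3060
df = pd.DataFrame({
    'height_m': rng.normal(12, 4, n),
    'years_since_disturbance': rng.integers(1, 34, n),
    'water_deficit': rng.normal(-300, 100, n),
    'fire_frequency': rng.integers(0, 5, n),
    'dist_roads_km': rng.exponential(5, n),
    'dist_settlements_km': rng.exponential(10, n),
    'biome': rng.choice(['Amazon', 'Cerrado', 'Atlantic_Forest'], n),
})

# Mixed-effects model: fixed effects for environmental and human factors,
# random intercept per biome.
model = smf.mixedlm(
    'height_m ~ years_since_disturbance + water_deficit + fire_frequency'
    ' + dist_roads_km + dist_settlements_km',
    data=df, groups=df['biome'])
result = model.fit()
print(result.summary())
```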
Potapov, P., Li, X., Hernandez-Serna, A., Tyukavina, A., Hansen, M. C., Kommareddy, A., … Hofton, M. (2021). Mapping global forest canopy height through integration of GEDI and Landsat data. Remote Sensing of Environment, 253(August 2020), 112165. https://doi.org/10.1016/j.rse.2020.112165
Santoro, M., Cartus, O., Carvalhais, N., Rozendaal, D., Avitabilie, V., Araza, A., … Willcock, S. (2020). The global forest above-ground biomass pool for 2010 estimated from high-resolution satellite observations. Earth System Science Data Discussions, 5174(July), 1–38. https://doi.org/10.5194/essd-2020-148
Sexton, J. O., Song, X. P., Feng, M., Noojipady, P., Anand, A., Huang, C., … Townshend, J. R. (2013). Global, 30-m resolution continuous fields of tree cover: Landsat-based rescaling of MODIS vegetation continuous fields with lidar-based estimates of error. International Journal of Digital Earth, 6(5), 427–448. https://doi.org/10.1080/17538947.2013.786146
Silva Junior, C. H. L., Heinrich, V. H. A., Freire, A. T. G., Broggio, I. S., Rosan, T. M., Doblas, J., … Aragão, L. E. O. C. (2020). Benchmark maps of 33 years of secondary forest age for Brazil. Scientific Data, 7(1), 1–9. https://doi.org/10.1038/s41597-020-00600-4
The diameter at breast height (DBH) is an important forest parameter used for estimating wood supply, biomass and stem growth rates. Therefore, it is of great interest as input data in areas such as forestry and forest inventory, ecological monitoring and climate research. Traditionally, these data are collected at regular time intervals during field campaigns, measuring the DBH manually with calipers. In general, a small subset of trees is selected for the data collection, and information about the whole forest stand is obtained by extrapolation. This method is time-consuming and tends to be accompanied by large uncertainties, as the sample might not be representative of a heterogeneous forest stand. LiDAR techniques such as terrestrial laser scanning (TLS) have been shown to provide high-resolution data for deriving forest parameters with high accuracy. Nevertheless, the use of TLS is not feasible for large areas, and as the equipment is cost-intensive it is not accessible to every user. Unmanned aerial vehicles (UAV), as a cost-effective method for deriving different types of forest parameters, are limited in their application for analyzing vertical forest structures, such as stems, as a consequence of their nadir flight pattern above the canopy. UAV-derived point clouds therefore generally do not provide detailed information about the stems.
In this study, the mentioned limitations of UAV data are to be minimized by focusing on the shadows cast by tree stems on the forest floor. Data are acquired over a deciduous forest stand near Jena, Germany, during leaf-off state and sunny weather conditions – two prerequisites for detecting the shadows on the ground. Using structure from motion (SfM), a point cloud is generated from the acquired UAV images and normalized with respect to the relief. Points belonging to the tree canopy and stems are removed, resulting in an orthomosaic image containing only ground information. In a second step, methods from the research fields of deep learning and object-based image analysis (OBIA) are tested to achieve an automatic detection and delineation of cast shadows. As the shapes of the cast shadow and of the stem are correlated, parameters such as DBH can be derived from the detected shadows.
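A simple geometric illustration of the underlying idea (not the deep learning/OBIA pipeline itself): for a roughly cylindrical stem and a known sun elevation, the shadow width measured at the ground position corresponding to breast height approximates the stem diameter at that height. All values below are hypothetical.

```python
import math

def shadow_offset_for_height(height_m, sun_elevation_deg):
    """Horizontal distance from the stem base to the point of the cast shadow
    that corresponds to a given height on the stem."""
    return height_m / math.tan(math.radians(sun_elevation_deg))

# For DBH (1.3 m above ground) and a hypothetical sun elevation of 25 degrees,
# the shadow width is measured at roughly this distance from the stem base:
offset = shadow_offset_for_height(1.3, 25.0)      # about 2.8 m along the shadow

# The measured shadow width at that position (e.g. from the orthomosaic)
# then approximates the DBH, assuming a near-cylindrical stem.
measured_width_m = 0.35                            # hypothetical measurement
dbh_estimate_m = measured_width_m
```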
Essential environmental issues, like climate change and biosphere integrity, naturally induce a legitimate social interest and a need for reliable information among both the general public and policy makers, which must be addressed at the appropriate spatial scale (from global to local) by the scientific community and government agencies. The availability of an increasing flux of Earth Observation data already allows the extraction of useful information from different sources, including direct information on some ecosystem processes in response to drivers of change. Within this frame of reference, EO data offer many opportunities for developing new methods of analysis in environmental sciences which might outperform traditional monitoring systems and conservation management. The example of forest ecosystems is a case in point: forests are crucial elements at the socio-economic and ecological level and are increasingly under threat from both biotic and abiotic disturbances (e.g. climate change, wildfires, pest infestations, droughts, illegal logging, etc.). The continuous, real-time monitoring of forest ecosystems would allow the deployment of efficient mitigation and restoration policies. Here, a novel approach for an environmental monitoring system to support the assessment of potential damage and recovery rates of terrestrial ecosystems in Italy is briefly described. The method is based on the use of a spatially explicit ecosystem classification, based on machine learning, combined with disturbance mapping. The application presented here concerns the forest areas hit by wildfires during the exceptional 2021 summer season. The procedure is summarized as follows: (i) determination of the forest type distribution model by means of environmental data and a large ensemble of multitemporal Sentinel-2 imagery, processed by a supervised Random Forest (RF) classifier; (ii) estimation of the spatial coverage of the different forest types in Italy; (iii) use of the high-resolution burned area database of the European Forest Fire Information System (EFFIS), which by itself is not able to resolve the actual type of forest ecosystem involved.
The combination of the two products (ii) and (iii) provides information at high spatial resolution on the percentage of forest area potentially damaged by wildfires for each ecosystem class considered. The method is applied not only at the national and regional scale, but is also focused on designated protected areas (Natura 2000 network, national and regional parks, old-growth forest sites, etc.).
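The combination step can be sketched as follows (synthetic, co-registered rasters stand in for the actual ecosystem classification and the EFFIS burned-area layer):

```python
import numpy as np

# Hypothetical co-registered rasters: forest-type classification (integer codes)
# and a boolean burned-area mask derived from the EFFIS database.
forest_type = np.random.randint(1, 6, size=(1000, 1000))
burned = np.random.rand(1000, 1000) < 0.02

# Percentage of potentially damaged area per forest-type class.
for cls in np.unique(forest_type):
    in_class = forest_type == cls
    pct = 100.0 * np.count_nonzero(burned & in_class) / np.count_nonzero(in_class)
    print(f'forest type {cls}: {pct:.2f}% potentially damaged')
```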
The described approach is the basis for the realization of a standard product at national scale named “Changes in Italian Terrestrial Ecosystems” (CITE), a general-purpose monitoring system aimed at supporting forest management and restoration actions. This product could eventually be of use for the Forest Information System for Europe (FISE).
Australia’s forests have been subject to severe fire events in recent times. In particular, extreme drought in 2019 led to wide-scale bushfires across eastern Australia, resulting in millions of hectares of forest being destroyed. While optical remote sensing data provide an affordable means of determining the extent of fire damage, residual smoke haze can make it difficult to acquire post-fire imagery in a timely manner. Synthetic Aperture Radar (SAR) data are not affected by smoke haze or cloud cover, providing a suitable option for rapid fire damage assessment. Furthermore, radar backscatter provides structural information about vegetation, and multi-temporal SAR can provide insight into how the vegetation changes through time.
NovaSAR-1 and Sentinel-1 SAR, along with Sentinel-2 optical imagery, are used to assess and compare how each of these sensors provides information about fire scar extent, severity, and bushfire recovery. NovaSAR-1 is an S-band SAR satellite designed and manufactured by Surrey Satellite Technology Ltd, UK (SSTL). As the only S-band (3.2 GHz; wavelength 9.4 cm) SAR in operation, it is a novel addition to the current suite of international civilian SAR satellites that operate in X-, C- or L-bands. In 2017, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) joined the NovaSAR-1 mission partnership on behalf of Australia, securing a 10% share in the acquisition and tasking capacity of this new satellite. Sentinel-1 and Sentinel-2 are routinely available and provided through the European Space Agency.
Two bushfires are studied as part of this analysis, one close to Jervis Bay, New South Wales (December 2019 – January 2020); and one on World Heritage Listed Fraser Island, Queensland (October – December 2020). NovaSAR-1 data products were available pre- and post-bushfire, with 50m and 30m ScanSAR products for Jervis Bay and Fraser Island, respectively. These data were processed to Gamma0 using the commercial software Gamma. Sentinel-1A Interferometric Wideswath Ground Range Detected data were obtained from the Sentinels Australasia Regional Access (SARA) hub and processed to Gamma0 using SNAP. Sentinel-2 data were obtained through Digital Earth Australia as Nadir-corrected BRDF (Bidirectional reflectance distribution function) Adjusted Reflectance.
The SAR imagery were co-registered using SNAP and subset to cover a common region for each case study. The post-fire Sentinel-2 imagery was used to identify and extract training sites based on the dominant classes, including burnt areas. A Random Forest classifier was then used to classify the Sentinel-1 and NovaSAR-1 pre- and post-fire image stacks. For Jervis Bay, the Sentinel-1 input was VH-only and the NovaSAR-1 input HV-only; for Fraser Island, dual-pol Sentinel-1 bands (VH and VV) and tri-pol NovaSAR-1 bands (HV, VV and HH) were used. Sentinel-2 true colour and Normalised Difference Vegetation Index (NDVI) bands from before and after the fires were also used in a Random Forest classification to provide a comparison, or ‘ground truth’, for assessing the SAR fire-scar extents.
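A minimal sketch of such a pixel-wise Random Forest classification, assuming the co-registered pre- and post-fire backscatter bands have been stacked into arrays and training pixels labelled from the Sentinel-2 classes (all array names are illustrative), could look as follows:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

# X_train : (n_pixels, n_bands) backscatter features sampled at the training sites
# y_train : (n_pixels,) class labels (e.g. 0 = unburnt forest, 1 = burnt, 2 = other)
# stack   : (rows, cols, n_bands) co-registered pre-/post-fire backscatter stack

def classify_stack(X_train, y_train, stack, n_trees=200):
    rf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1, random_state=0)
    rf.fit(X_train, y_train)
    rows, cols, bands = stack.shape
    return rf.predict(stack.reshape(-1, bands)).reshape(rows, cols)

# Agreement with the Sentinel-2 'ground truth' map expressed as a Kappa statistic:
# sar_map = classify_stack(X_train, y_train, stack)
# kappa = cohen_kappa_score(s2_map.ravel(), sar_map.ravel())
```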
The fire on Fraser Island burned for approximately two months between October and late December 2020, affecting over half of the island. A comparison of the Sentinel-2 and NovaSAR-1 classifications showed moderate agreement (Kappa statistic of 0.51); the NovaSAR-1 classification of the bushfire scar was less accurate in areas of steep terrain. The Sentinel-2 and Sentinel-1 comparison also showed moderate agreement (Kappa statistic of 0.51); however, the Sentinel-1 classification of the bushfire scar was less accurate in areas where the vegetation was more open.
For the Jervis Bay region, a comparison of the Sentinel-1 and Sentinel-2 fire scar extent maps shows moderate agreement (Kappa statistic of 0.57). A comparison of the NovaSAR-1 and Sentinel-2 fire scar extent maps shows poor agreement (Kappa statistic of 0.37). The disagreement is concentrated in an area where the NovaSAR-1 backscatter increases after the fire, unlike the rest of the fire scar. The reasons for this are currently unknown but could include effects of precipitation.
To investigate how the forest recovers from the bushfire around Jervis Bay, additional NovaSAR-1 (HV and HH), Sentinel-1 (VH and VV) and Sentinel-2 (NDVI) images were extracted from before the fire to 10 months after the fire (or eight months for NovaSAR-1). Ten sites unaffected by the fire and ten sites burnt by the fire were established using a cloud-free post-fire Sentinel-2 image. Changes in NDVI, cross-polarised radar backscatter and the dual-pol Radar Vegetation Index (RVI) were examined to see how the recovery of the burnt sites compared to the unburnt sites through time. The Sentinel-2 NDVI shows a large drop for the burnt sites following the bushfire (it is worth noting that the first smoke-free image following the fire is two months after the burn date). As expected, the NDVI of the burnt sites gradually increases through time, to be only slightly lower than that of the unburnt sites 10 months after the fire.
The Sentinel-1 VH backscatter drops after the bushfire event and gradually increases to be similar to the unburnt sites by 10 months. This is possibly because the backscatter from the C-band Sentinel-1 SAR is related to the recovery of the leaves and twigs within the trees. The NovaSAR-1 HV shows both an increase and a decrease in backscatter at different burn sites following the bushfire, which is difficult to interpret. However, for the severely burnt sites, the NovaSAR-1 RVI shows a dramatic drop after the burn, which increases slightly after eight months. The Sentinel-1 RVI also shows a reduction after the burn date, likewise increasing during the months following the burn.
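The abstract does not state the exact RVI formulation used; a minimal sketch using the commonly adopted dual-pol form RVI = 4·VH / (VV + VH), computed in linear power units from backscatter given in dB, could look as follows (the example values are illustrative only):

```python
import numpy as np

def dual_pol_rvi(vv_db, vh_db):
    """Dual-pol Radar Vegetation Index from VV/VH backscatter in dB, using the
    common form RVI = 4*VH / (VV + VH) evaluated in linear power units."""
    vv = 10.0 ** (np.asarray(vv_db, dtype=float) / 10.0)
    vh = 10.0 ** (np.asarray(vh_db, dtype=float) / 10.0)
    return 4.0 * vh / (vv + vh)

# e.g. a dense canopy (VV -7 dB, VH -12 dB) vs. a burnt, open site (VV -9 dB, VH -17 dB)
print(dual_pol_rvi([-7.0, -9.0], [-12.0, -17.0]))   # ~[0.96, 0.55]
```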
This investigation has shown that Synthetic Aperture Radar at both C-band (Sentinel-1) and S-band (NovaSAR-1) can map bushfire scar extent with moderate accuracy. This could be useful should a rapid assessment of fire scar extent be required before the smoke haze subsides. The use of SAR cross-polarised backscatter and the Radar Vegetation Index may also provide additional information when assessing forest recovery following a bushfire, and is worthy of further investigation.
Spatially explicit information on forest management at a global scale is critical for understanding the status of forests and for planning sustainable forest management and restoration. Here, we produce the first reference data set and a prototype of a globally consistent forest management map, which provides a high level of spatial detail on the most prevalent forest management classes, such as intact forests, managed forests with natural regeneration, planted forests, short-rotation woody plantations, oil palm plantations, and agroforestry. We developed the reference data set of 226,000 unique locations through a series of expert and crowdsourcing campaigns using the Geo-Wiki (https://www.geo-wiki.org/) application. We then combined the reference samples with time series from PROBA-V satellite imagery to create a global wall-to-wall map of forest management at 100 m resolution for the year 2015, with forest management class accuracies ranging from 58% to 80%. The reference data set and the map describe the status of forest ecosystems and can be used for investigating the value of forests for species, ecosystems and their services.
Being vital to many of the Earth’s ecosystems, forests provide a variety of functions, such as providing habitat for animals and plants, protecting watersheds and preventing soil erosion. Reliable and frequently updated information on forest resources and their condition is needed for the analysis of patterns and trends and for sustainable forest management, as well as for a large number of other applications. Nowadays, terrestrial in-situ observations are complemented by remote sensing techniques. Airborne campaigns with multispectral cameras or Light Detection and Ranging (LiDAR) are carried out and provide area-wide spatial data on many forest parameters, such as forest cover, forest type and composition, or above-ground biomass. Due to the high cost of these campaigns, the temporal resolution is still in the range of 3 to 10 years, or the maps are not updated regularly at all. Spaceborne remote sensing data help to bridge the gaps in temporal resolution and spatial coverage.
Bruggisser et al., 2021 [1] showed the potential of complementing airborne laser scanning (ALS) data, with their sparse temporal resolution, with Sentinel-1 to regularly update ALS-based forest structure information. They used Sentinel-1 to derive forest structural parameters, namely stand height and fractional cover, using a random forest (RF) model trained on LiDAR-derived estimates, with statistical parameters based on Sentinel-1 backscatter and interferometric coherence time series as input features. The study was conducted for a temperate deciduous forest in a hilly study area near Vienna, Austria, and the model was trained and validated using ALS data.
In this study, we tested the viability of this approach in coniferous forests and a very challenging environment, in an alpine study region located in Tyrol, Western Austria. An RF model was trained to predict forest height and gap fraction from Sentinel-1 at a spatial resolution of 100 m. The results yielded Pearson correlation coefficients (r) of 0.77 for forest height and 0.82 for gap fraction (compared to 0.88 for forest height and 0.94 for gap fraction in the results published by Bruggisser et al., 2021 for broadleaf forests and hilly terrain). Optimisation of the model predictors, to better compensate for the strong terrain effects in the alpine study region and to better fit the seasonality of the predominantly coniferous forests, further increases r to 0.83 and 0.90 for forest height and gap fraction, respectively, for the Tyrol study area. The results imply that, even in such a challenging environment, Sentinel-1 data can complement sparse ALS acquisitions and provide yearly estimates of forest structural parameters.
[1] Bruggisser, Moritz, Wouter Dorigo, Alena Dostálová, Markus Hollaus, Claudio Navacchi, Stefan Schlaffer, and Norbert Pfeifer. "Potential of Sentinel-1 C-Band Time Series to Derive Structural Parameters of Temperate Deciduous Forests." Remote Sensing 13, no. 4 (2021): 798.
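To make the workflow described above concrete, the following is a minimal sketch of training and evaluating such an RF regression model, assuming per-pixel Sentinel-1 time-series statistics and ALS-derived structure estimates have already been extracted (all variable names are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr

# X : (n_pixels, n_features) statistics of Sentinel-1 backscatter/coherence time series
#     (e.g. seasonal means and standard deviations) at 100 m resolution
# y : (n_pixels,) ALS-derived forest height (or gap fraction) for the same pixels

def fit_structure_model(X, y, test_fraction=0.3, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    split = int(len(y) * (1 - test_fraction))
    train, test = idx[:split], idx[split:]
    model = RandomForestRegressor(n_estimators=300, n_jobs=-1, random_state=seed)
    model.fit(X[train], y[train])
    r, _ = pearsonr(y[test], model.predict(X[test]))
    return model, r   # r corresponds to the Pearson correlation reported above
```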
The Amazon ecosystem plays an important role in the stability of the planetary climate and the preservation of biodiversity. Deforestation has long been known to be a major source of carbon emissions, and recent studies have shown that forest degradation contributes three times more to carbon loss from aboveground gross biomass than deforestation in the Brazilian Amazon. New environmental targets were set at the recent COP26 to reduce anthropogenic CO2 emissions and mitigate climate change. In the case of Brazil, about 65% of CO2 emissions in 2019 came from deforestation and forest degradation, which together are the second-largest anthropogenic source of CO2 emissions in the country. Therefore, mapping and quantifying deforestation and forest degradation at large scales with low uncertainty is essential, and Earth observation tools and methods are key to monitoring and estimating these ecosystem changes over time. In this study, we provide an overview of case studies applying remote sensing for deforestation and forest degradation mapping using different sensing sources (LiDAR, RADAR and optical) over the Brazilian Amazon. We focus on the loss of biomass due to edge effects; the detection and quantification of selective logging using Synthetic Aperture Radar (SAR) data in the X-band; the discrimination of successional stages and forest degradation using SAR data in the L-band; forest gaps and tree mortality using LiDAR; and carbon dynamics estimates also based on LiDAR. Wiederkehr et al. (2020) used L-band SAR images from ALOS/PALSAR-2 to discriminate forest successional stages, forest degradation, and land use classes in the Central Brazilian Amazon. The method was able to discriminate different types of tropical forests with accuracies above 77%, and forest degradation by fire with accuracies above 85%. Kuck et al. (2021) obtained accuracies above 85% in detecting selective logging in the Brazilian Amazon using bitemporal X-band SAR images. Using multi-temporal LiDAR, Moura et al. (2020) showed that human-modified forests in the Eastern Amazon act more as a source of carbon than a sink, even when considering regeneration processes, with strong negative implications for carbon emission scenarios in tropical forests. By combining airborne LiDAR and a dataset of forest edge age at 30-m spatial resolution, Silva Junior et al. (2020) found that carbon stocks decreased by 37% within five years after the appearance of forest edges. From this, the authors estimated that the Amazon basin may have lost 947 Tg C during the 2001-2015 period due to edge effects, equivalent to one-third of the losses from deforestation (2592 Tg C). LiDAR data have also been used across the Brazilian Amazon to map canopy gaps related to tree mortality, showing that southern Amazon forests were more open and dynamic than the rest of the Amazon, likely due to combined effects of degradation, soil fertility and water deficit (Dalagnol et al., 2021). These case studies emphasize the sensitivity of Amazonian forests to degradation and deforestation and demonstrate how well modern remote sensing methods can detect and quantify specific targets such as illegal logging, fragmentation, edge effects, and degradation. We thus demonstrate how novel remote sensing approaches, paired with machine learning algorithms, can support a stronger diagnosis of forest ecosystems.
This will in turn allow scientists to be more efficient in providing nature-based solutions to reverse the effects of deforestation and degradation, preserving or re-establishing healthy forest structure, functioning and biodiversity. This is the direction we should take in the upcoming years: using cutting-edge science and technology to address issues such as deforestation and forest degradation and prioritising the recovery and conservation of the Amazon rainforest and tropical forests worldwide.
Key words: forest degradation, deforestation, Amazon, LIDAR, SAR
References
Wiederkehr, N.C.; Gama, F.F.; Castro, P.B.N.; Bispo, P.C.; Balzter, H.; Sano, E.E.; Liesenberg, V.; Santos, J.R.; Mura, J.C. (2020) Discriminating Forest Successional Stages, Forest Degradation, and Land Use in Central Amazon Using ALOS/PALSAR-2 Full-Polarimetric Data. Remote Sensing, 12, 3512. https://doi.org/10.3390/rs12213512
Kuck, T. N., Sano, E. E., Bispo, P. C., Shiguemori, E. H., Silva Filho, P. F. F., & Matricardi, E. A. T. (2021). A Comparative Assessment of Machine-Learning Techniques for Forest Degradation Caused by Selective Logging in an Amazon Region Using Multitemporal X-Band SAR Images. Remote Sensing, 13(17), 3341. https://doi.org/10.3390/rs13173341
Dalagnol, R., Wagner, F. H., Galvão, L. S., Streher, A. S., Phillips, O. L., Gloor, E., Pugh, T. A. M., Ometto, J. P. H. B., & Aragão, L. E. O. C. (2021). Large-scale variations in the dynamics of Amazon forest canopy gaps from airborne LiDAR data and opportunities for tree mortality estimates. Scientific Reports, 11(1), 1388. https://doi.org/10.1038/s41598-020-80809-w
Moura, Y. M. D., Balzter, H., Galvão, L. S., Dalagnol, R., Espírito-Santo, F., Santos, E. G., Garcia, M., Bispo, P. C., Oliveira, R. C. & Shimabukuro, Y. E. (2020). Carbon dynamics in a human-modified tropical forest: a case study using multi-temporal LiDAR data. Remote Sensing, 12(3), 430. https://doi.org/10.3390/rs12030430
Silva Junior, C.H., Aragão, L.E., Anderson, L.O., Fonseca, M.G., Shimabukuro, Y.E., Vancutsem, C., Achard, F., Beuchle, R., Numata, I., Silva, C.A. and Maeda, E.E. (2020). Persistent collapse of biomass in Amazonian forest edges following deforestation leads to unaccounted carbon losses. Science Advances, 6(40), eaaz8360. https://doi.org/10.1126/sciadv.aaz8360
Global measurement of plant functional traits like canopy height, canopy cover and Plant Area Index (PAI) profiles form a key input for many emerging fields in ecology and meteorology. Here, we test the ability of an ICESat-2 simulator, based on the GEDI simulator presented by Hancock et al. 2019, to replicate measurements of plant functional traits retrieved from ICESat-2 observations, and through that explore the ability of ICESat-2 to measure plant functional traits not currently in the ATL08 product.
The simulator takes Airborne Laser Scanning (ALS) data, produces a pseudo-waveform and then Poisson-samples individual photons to replicate real ICESat-2 measurements. Because the simulator assumes that the ICESat-2 photon-cloud distribution is proportional to the ALS, which has been shown to be sensitive to canopy cover and structure, an accurate ICESat-2 simulator would indicate that ICESat-2 is sensitive to plant functional traits.
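A highly simplified sketch of this idea, building a pseudo-waveform from ALS return heights and Poisson-sampling photon heights from it (reflectance and noise corrections are deliberately omitted; all names are illustrative), could look as follows:

```python
import numpy as np

def simulate_icesat2_photons(als_heights, als_intensity=None,
                             expected_photons=2.5, bin_size=0.15, rng=None):
    """Build a pseudo-waveform from ALS return heights, then Poisson-sample
    discrete photon heights from it, mimicking a photon-counting measurement."""
    rng = np.random.default_rng() if rng is None else rng
    heights = np.asarray(als_heights, dtype=float)
    weights = np.ones_like(heights) if als_intensity is None else np.asarray(als_intensity, float)

    # pseudo-waveform: (intensity-)weighted histogram of ALS heights
    bins = np.arange(heights.min(), heights.max() + bin_size, bin_size)
    waveform, edges = np.histogram(heights, bins=bins, weights=weights)
    centres = 0.5 * (edges[:-1] + edges[1:])
    pdf = waveform / waveform.sum()

    # Poisson-sample the number of signal photons, then draw their heights
    n_photons = rng.poisson(expected_photons)
    return rng.choice(centres, size=n_photons, p=pdf)
```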
Simulated ICESat-2 measurements have applications in training biomass models and building radiative transfer models for field plots that are not intersected by real ICESat-2 data. They also facilitate investigations into the uncertainties and sampling errors in ICESat-2 measurements, and into the theoretical limits of ICESat-2 vegetation measurement.
Because ICESat-2 measurements are made at a 532 nm wavelength, while ALS data tend to be collected at longer wavelengths, the simulator must account for reflectance differences between canopy and ground in order to retrieve the correct photon distribution. Airborne laser scanning data are used to classify real ICESat-2 photons as ground, vegetation or noise, from which the key simulation parameters (the pure-vegetation and pure-ground photon rates) are calculated. ICESat-2 tracks that intersect ALS measurements are identified and simulated, allowing a one-to-one comparison of simulated and observed ICESat-2 plant functional trait measurements. This is performed across a range of sites to cover different forest types and to test the simulator accuracy when using a range of ALS data sets.
The plant functional trait metrics calculated from simulated and observed ICESat-2 photons are similar, with an average canopy height bias of –30cm and canopy cover bias less than 5% for all sites, indicating that ICESat-2 is sensitive to stand-level plant functional traits. Noise and differences between ground and canopy reflectances are found to be two key influences on the accuracy of ICESat-2 plant functional trait measurement. This research suggests that, with global mapping of ground and canopy reflectances, it is possible to derive plant functional trait measurements from ICESat-2 data.
Retrieval of soil parameters from satellite observations at L-band in forested areas requires estimation of the forest contribution to the measured signal. In boreal forests, most of the vegetation effect comes from branches. Typically, the forest vegetation optical depth at L-band (L-VOD) is assumed to vary only seasonally, not in the short term. However, recent results (Li et al., 2019; Roy et al., 2020) show that the electrical properties of the boreal forest canopy change as a function of temperature due to the gradual freezing and melting of water. This has important implications for remote sensing of the cryosphere, especially for retrieving soil or snow parameters in boreal forests from L-band data.
A new model for L-VOD as a function of canopy temperature has been developed (Schwank et al., 2021). It is a physical model describing trees as a mixture of air, liquid water, ice and wood cells. The model allows L-VOD to be calculated as a function of canopy temperature, frequency, parameters describing the canopy (porosity, column mass, height, density, fraction of branches of total biomass), and modelling parameters that can be optimized to fine-tune the model (permittivity of wood cells, salinity of cell water, water content of wood, melting temperature).
The model was validated using winter-long measurement time series from Sodankylä, Finland. The data set includes above- and below-canopy L-band brightness temperature (TB) measurements of a pine-dominated forest as well as reference measurements of soil, snow, tree and air parameters, and covers canopy temperatures from below -30°C to above 20°C. It allows L-VOD to be simulated with the new model from the reference measurements and compared with L-VOD calculated from the below-canopy TB measurements. Combined with brightness temperature modelling using a two-stream emission model (Schwank, Naderpour, & Mätzler, 2018), above-canopy TB can be simulated from the reference measurements and validated against the above-canopy TB measurements.
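The forward simulation above uses the two-stream emission model of Schwank et al.; as a simpler stand-in, the widely used zeroth-order tau-omega approximation can illustrate how a temperature-dependent L-VOD propagates into the simulated above-canopy TB (all numerical values below are purely illustrative, not taken from the Sodankylä data):

```python
import numpy as np

def tau_omega_tb(t_soil, t_canopy, soil_emissivity, vod, omega, theta_deg):
    """Classical zeroth-order tau-omega brightness temperature (a simplified
    stand-in for the two-stream model, for illustration only)."""
    gamma = np.exp(-vod / np.cos(np.radians(theta_deg)))   # canopy transmissivity
    tb_soil = soil_emissivity * t_soil * gamma
    tb_canopy_up = (1.0 - omega) * (1.0 - gamma) * t_canopy
    tb_canopy_reflected = tb_canopy_up * (1.0 - soil_emissivity) * gamma
    return tb_soil + tb_canopy_up + tb_canopy_reflected

# Illustrative frozen vs. thawed canopy L-VOD values and their effect on TB:
for vod in (0.35, 0.55):
    tb = tau_omega_tb(t_soil=270.0, t_canopy=268.0, soil_emissivity=0.92,
                      vod=vod, omega=0.07, theta_deg=40.0)
    print(f"L-VOD = {vod:.2f} -> simulated above-canopy TB = {tb:.1f} K")
```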
References:
Li, Q., Kelly, R., Leppanen, L., Vehvilainen, J., Kontu, A., Lemmetyinen, J. and Pulliainen, J.: The Influence of Thermal Properties and Canopy-Intercepted Snow on Passive Microwave Transmissivity of a Scots Pine, IEEE Trans. Geosci. Remote Sens., 57(8), doi:10.1109/TGRS.2019.2899345, 2019.
Roy, A., Toose, P., Mavrovic, A., Pappas, C., Royer, A., Derksen, C., Berg, A., Rowlandson, T., El-Amine, M., Barr, A., Black, A., Langlois, A. and Sonnentag, O.: L-Band response to freeze/thaw in a boreal forest stand from ground- and tower-based radiometer observations, Remote Sens. Environ., 237, 111542, doi:10.1016/j.rse.2019.111542, 2020.
Schwank, M., Naderpour, R. and Mätzler, C.: “Tau-Omega” - and two-stream emission models used for passive L-band retrievals: Application to close-range measurements over a forest, Remote Sens., 10(1868), doi:10.3390/rs10121868, 2018.
Schwank, M., Kontu, A., Mialon, A., Naderpour, R., Houtz, D., Lemmetyinen, J., Rautiainen, K., Li, Q., Richaume, P., Kerr, Y. and Mätzler, C.: Temperature Effects on L-band Vegetation Optical Depth of a Boreal Forest, Remote Sens. Environ., 263, 112542, doi:10.1016/j.rse.2021.112542, 2021.
Although the Brazilian Amazon tropical rainforest is a vital ecosystem for the globe, playing a key role as a biodiversity reserve and carbon store, between 2016 and 2020 it lost approximately 43,300 km2 of forest cover. Deforestation is frequently associated with fires and illegal selective logging and is mainly motivated by rural settlement, beef production, crop planting, and the large reservoirs of hydroelectric power plants [1]. The need to monitor and preserve this environment is well known, but the long rainy season and the region's continental size make this task complex. The Brazilian Amazon covers about 60% of the Brazilian territory; in this context, even satellite monitoring requires a certain level of automation. Currently, the Brazilian National Institute for Space Research (INPE) and the non-governmental MapBiomas organization provide annual deforestation maps, and both rely on optical satellite sensors.
This work presents an automatic method to discriminate deforested areas in the Brazilian Amazon rainforest based on a Neural Network (NN) classifier and Sentinel-1 SAR images, which allow monitoring in all weather conditions. NNs are Artificial Intelligence algorithms made up of non-linear processing units and widely applied to classification and regression problems in many fields, including satellite remote sensing. They have also been used for SAR image processing and forest mapping [2], [3], [4]. The proposed methodology uses multiple C-band Sentinel-1 images in VH and VV polarizations acquired over the same area of the southwest of Pará State in 2018 and 2019, for a total of 30 images per year. First, the time series of 2019 images was considered. With the aid of Sentinel-2 optical images, sample areas that remained forested and sample areas that were deforested during the year were collected for each SAR acquisition, for a total of about 300 examples. To analyse the annual trend of the VH and VV backscatter coefficient in the two cases, three statistical parameters were calculated for each forested and deforested sample in an image: the mean, the standard deviation, and the maximum-minimum difference (MMD), i.e. the difference between the maximum and minimum value of the backscatter coefficient. The analysis of the annual statistical trend revealed that when deforestation occurs the VV and VH backscatter signals are perturbed. In detail, after the deforestation event, the mean backscatter signal decreases by about 2 dB for both polarizations; the decrease is evident for about three to four months thereafter. Measurements of the statistical features throughout the year, extracted from the forested and deforested sample areas, were used to build the dataset for the input to the NN, which was trained to automatically estimate the probability that an area was deforested. To obtain a dataset of sufficient size to train the NN, it was augmented by generating synthetic data from the original data. Different case studies were considered to form the input vectors: first, the statistical parameters were used individually, and then combinations of them were considered.
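As an illustration of this step, the following is a minimal sketch (not the authors' implementation) of extracting the annual statistics and training a small multilayer perceptron on them, assuming the labelled VH/VV sample time series are already available as arrays:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def annual_features(backscatter_ts):
    """backscatter_ts: (n_samples, n_dates, 2) VH/VV time series in dB for one year.
    Returns the mean, standard deviation and max-min difference (MMD) per polarization."""
    mean = backscatter_ts.mean(axis=1)
    std = backscatter_ts.std(axis=1)
    mmd = backscatter_ts.max(axis=1) - backscatter_ts.min(axis=1)
    return np.concatenate([mean, std, mmd], axis=1)   # (n_samples, 6)

def train_detector(X_forest, X_deforested):
    """X_forest, X_deforested: labelled sample time series with shape (n, n_dates, 2)."""
    X = np.vstack([annual_features(X_forest), annual_features(X_deforested)])
    y = np.concatenate([np.zeros(len(X_forest)), np.ones(len(X_deforested))])
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    clf.fit(X, y)
    return clf   # clf.predict_proba(features)[:, 1] gives the deforestation probability
```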
The NN capability achieved during this phase was applied to new input data: the trained model was used for the automatic recognition of areas deforested during the year 2018. Similar to the 2019 dataset, the latter comprised 30 Sentinel-1 images acquired over the same area reporting backscatter values for the VH and VV polarizations. However, in this case, each image of the time series was automatically divided into sub-images, i.e. patches, with a size of 10 x 10 pixels, corresponding to about 2 ha. To assess the NN performance, the results were validated using two ground truth images provided by the INPE and MapBiomas projects respectively.
The results were evaluated by means of accuracy, precision, recall, and F1 metrics. For the 2019 test dataset, the NN model achieved high performance in all case studies. In particular, the NN demonstrated the ability to discriminate between forested and deforested areas with an accuracy and F1 score of 99% when using the mean, standard deviation, and MMD input set. This is due to the manual selection and labelling of the test areas by means of RGB composites of Sentinel-2 images. The performance decreases for the 2018 dataset, achieving an accuracy and F1 score of 89% in the best scenario; in this case, however, no manual labelling was performed. Investigations indicated that the misclassification is probably caused by the coarse resolution of the ground truth images and by the deforestation practice itself: after clear-cutting, trees are left on the ground to dry before being burned, making the backscatter values somewhat similar to those of intact forest areas. The proposed method may be suitable for low-cost and rapid forest monitoring in the Amazon rainforest and for assisting Brazilian environmental law enforcement agencies in combating illegal deforestation.
The attached figure shows the results obtained for a large deforested area of about 16 km2. Panels (a) and (c) show the MapBiomas and PRODES ground truth images, respectively, in which the difference in representation detail and spatial resolution can be observed; panels (b) and (d) show the patch-based classification produced by the MLP compared with the respective ground truth.
[1] C. Silva Junior, L. Aragão, M. Fonseca, C. Almeida, L. Vedovato and L. Anderson, “Deforestation-Induced Fragmentation Increases Forest Fire Occurrence in Central Brazilian Amazonia,” Forests, vol. 9, no. 6, 2018.
[2] S. Haykin, Neural Networks and Learning Machines, Third Edition, Pearson Prentice Hall, 2009.
[3] F. Del Frate, G. Schiavon, D. Solimini, M. Borgeaud, D. Hoekman and M. Vissers, “Crop classification using multiconfiguration C-band SAR data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 7, 2003.
[4] V. Laurin, V. Liesenberg, Q. Chen, L. Guerriero, F. Del Frate, A. Bartolini, D. Coomes, B. Wilebore, J. Lindsell and R. Valentini, “Optical and SAR sensor synergies for forest and land cover mapping in a tropical site in West Africa,” International Journal of Applied Earth Observation and Geoinformation, vol. 21, 2013.
BorealScat-2 is a new long-term radar tower experiment for studying temporal variations in forest radar measurements in northern Sweden. The main goal of the experiment is to study the relationship between radar observations and forest water dynamics, in particular evapotranspiration. Evapotranspiration plays a central role in the Earth’s carbon, water and energy cycles and is closely related to drought-induced tree mortality. Despite its importance for understanding our climate system and how forests will respond to climate change, global observations of evapotranspiration at fine spatial and temporal resolutions are currently lacking. Radar observations are directly sensitive to the spatial distribution of water in forests. By establishing a relationship between radar observations and variables contributing to evapotranspiration, spaceborne synthetic aperture radar (SAR) could contribute towards evapotranspiration observations at high spatial and temporal resolution. The knowledge gained from this experiment is expected to open up new data exploration possibilities for existing spaceborne SAR missions as well as reveal opportunities for future missions.
The experiment is located in the Svartberget Experimental Forest, 60 km northwest of Umeå in northern Sweden, at the centre of the exceptionally well-instrumented Krycklan Catchment boreal forest landscape, with long-term hydrological, atmospheric and ecological measurements. The Svartberget Experimental Forest also hosts measurement stations of the pan-European research infrastructures ICOS (Integrated Carbon Observation System) and ACTRIS (Aerosols, Clouds and Trace Gases Research Infrastructure). Furthermore, Svartberget is part of SITES (Swedish Infrastructure for Ecosystem Science), offering a unique infrastructure for land-based climate, environmental and ecosystem research. The co-location with these research infrastructures provides unique opportunities for cutting-edge science on the interactions between forest ecosystems, water, carbon and heat exchange, atmospheric chemistry, aerosols and cloud formation. Sensors such as micrometre point dendrometers and sap flow sensors provide tree-level observations of water content and flow.
The radar tower that was used for ESA’s BorealScat campaign (2016-2021) at the Remningstorp test site in southern Sweden was upgraded and relocated for BorealScat-2. The experiment features the first tower-based full 3D tomographic SAR at P- and L-band, implemented by mechanically steering an antenna array over a 4 m aperture at a height of 50 m. Like previous ground-based radar tomography experiments, the array alone is capable of producing a rapid 2D vertical backscatter cross section of the forest. BorealScat-2 features fine resolution in the azimuth direction by mechanically moving the array horizontally, whereby the 3D backscatter distribution is observed. Apart from forest structure observations, a high number of independent samples is obtained, allowing small changes in backscatter and temporal coherence to be accurately quantified. Such accuracy is necessary for observing small changes in radar signatures, which can be caused by changes in the tree water content, appearing as a diurnal cycle during the summer. These cycles are closely related to the vegetation-atmosphere water exchange, water vapour pressure deficit, soil moisture content, tree water status and tree vitality. At longer timescales the effects of rainfall, snow, soil moisture and forest stand changes on radar observations will be studied.
As in the previous campaign, BorealScat-2 features multi-polarization radar observations at P-, L- and C-band, supporting ESA’s BIOMASS, ROSE-L and Sentinel-1 missions, JAXA’s ALOS-2/-4 missions and NASA/ISRO’s NISAR mission. A new radar system extension also provides multi-polarimetric tomographic observations at X-band in response to the success of the TanDEM-X/PAZ constellation and commercial initiatives for providing X-band SAR imagery with short revisit times. The temporal evolution of forest backscatter and temporal coherence will be studied over timescales of less than a second, offering data that can be used to characterise the performance of possible future bistatic single-pass interferometers.
The boreal forest site under observation at high northern latitudes represents ecosystems expected to undergo some of the most rapid climate change in the coming years. Being characteristic of colder climates, this forest is expected to undergo significant stress during the summers. BorealScat-2 offers an opportunity to observe these effects of climate change and to support a new generation of forest remote sensing applications.
Climate change is increasing the occurrence of forest disturbances in Europe, compromising the ability of forests to provide services at a time when there is an increasing demand for wood, carbon uptake needs to be enhanced and biodiversity needs to be protected. Sound forest management, which can prevent and mitigate some of the negative effects of forest disturbances, hinges on the ability to quickly detect disturbances. Several time-series-based near-real-time (NRT) disturbance monitoring methods have been published, but few have been validated in Europe. Furthermore, they cannot be compared easily, since they were implemented using APIs in different programming languages, and their computational efficiency for operational use has not been evaluated.
We implemented five NRT monitoring algorithms, Continuous Change Detection and Classification (CCDC), bFastMonitor (CuSum and MoSum), Exponentially Weighted Moving Average (EWMA) and a simple NRT algorithm using the interquartile range (IQR), in an optimized and scalable Python package with a common interface to facilitate the application and comparison of the algorithms. We validated all five algorithms using the same data set, which covers four study areas in Europe, each with a different predominant stand-replacing disturbance type: wildfire in Portugal, spruce bark beetle infestation in Poland, logging in Sweden, and windthrow in Austria. We carried out a grid search to quantify the impact of different hyperparameters on the monitoring performance and to find the best-performing hyperparameters. The measure of performance used to compare algorithms and hyperparameter combinations is based on a custom time-weighted F score that simultaneously accounts for spatial accuracy and the disturbance detection lag. The monitoring was done using Sentinel-2 L2A data.
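To illustrate the flavour of the simplest of these detectors, the following is a minimal per-pixel IQR sketch (not the package implementation; the threshold factor and the index values in the example are illustrative):

```python
import numpy as np

def iqr_disturbance_flags(history, monitoring, k=1.5):
    """Simple IQR-based near-real-time detector for one pixel and one vegetation index.

    history    : index values from the stable (pre-monitoring) period
    monitoring : newly arriving observations
    Flags an observation as a potential disturbance when it drops below
    Q1 - k * IQR of the historical distribution.
    """
    q1, q3 = np.percentile(history, [25, 75])
    threshold = q1 - k * (q3 - q1)
    return np.asarray(monitoring) < threshold

# e.g. a wetness-index time series: stable history, then a sharp drop after a disturbance
history = np.array([0.42, 0.45, 0.40, 0.44, 0.43, 0.41])
incoming = np.array([0.43, 0.18, 0.15])
print(iqr_disturbance_flags(history, incoming))   # [False  True  True]
```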
Our analysis showed that, with a time-weighted F score of 0.35, the simple NRT algorithm IQR was the most versatile across all study areas, while CCDC (F score 0.33) performed well only in areas with severe disturbances. EWMA, CuSum and MoSum performed well in areas with less severe disturbances, with F scores of 0.19, 0.21 and 0.28, respectively. Noise in the time series due to residual clouds or cloud shadows impacted the monitoring performance, especially in areas with frequent cloud cover; for those areas, algorithms that implement outlier screening are an advantage. In general, the vegetation index used impacted the monitoring performance the most, with wetness-based indices such as NDMI and the Wetness component of the tasseled cap transform outperforming greenness-based indices such as NDVI in most cases. The benchmarking of the algorithms showed that, with computationally cheap methods, diverse stand-replacing forest disturbances can be detected across Europe from optical satellite data such as Sentinel-2 in a robust, versatile, and accurate manner.
European forests are important to European society for wood provision, CO2 sequestration, and biodiversity conservation (Köhl, Linser, and Prins 2020). Climate change, including extreme climate events (e.g. droughts, storms, fires), is leading to increased disturbances of European forests (Bastos et al. 2020; McDowell et al. 2020; Schelhaas, Nabuurs, and Schuck 2003; Seidl et al. 2017) and threatens the multiple services that forests provide to society (Ceccherini et al. 2020; Palahí et al. 2021; Sebald, Senf, and Seidl 2021). To better manage European forests in such an uncertain future, good disturbance monitoring tools are required, but these are not yet operational.
Two knowledge gaps hinder the appropriate monitoring of disturbance regimes in European forests. First, current monitoring efforts are largely based on National Forest Inventories (NFIs), providing ground-based information on forest dynamics from 500,000 sample points across Europe measured at 5-10 year intervals. Although accurate, these NFI data are coarse, infrequent, unharmonized, and not always accessible (Gschwantner et al. 2016; Herold et al. 2019; Moreno, Neumann, and Hasenauer 2017). Second, remote-sensing detection methods should provide the necessary spatial and temporal resolution, using satellite data, and be capable of distinguishing between different sources of disturbance. However, such methods have not yet been developed and cannot yet be applied at the European forest scale.
High-resolution satellite data, both optical- and radar-based, such as provided by the Sentinel-1 and Sentinel-2 missions, is often used for large-scale monitoring of forest disturbances, especially in humid tropical forest settings (Reiche et al. 2016). However, disturbance monitoring in European forest ecosystems is relatively unexplored in comparison, as seasonality and the existence of a wide variety of forest types make the development of a generic approach difficult. Furthermore, remote-sensing based efforts undertaken so far to characterize disturbances in European forests have made little use of valuable ground-based data streams such as NFI’s, which has led to controversial conclusions and heated debate (Ceccherini et al. 2020; Palahí et al. 2021).
Our objective is therefore to “bridge the gaps” between cyclic ground-based forest inventories and rapid satellite-based forest monitoring to realize consistent and frequently updated European forest information. We will develop, test, and apply an integrated remote-sensing- and ground-based forest inventory system for detecting, distinguishing, and classifying natural and human-made disturbances at a greater temporal and spatial resolution than ground-based inventories alone can facilitate, thereby contributing to European forest policy.
To this end, an existing system for rapid change monitoring in the tropics using 10m resolution Sentinel-1 derived satellite imagery (Reiche et al. 2021) will be upgraded for European forest ecosystems and integrated with Sentinel-2 imagery and tested for three case study locations (The Netherlands, Germany and Italy). Since in particular optical remote sensing data are highly sensitive to the phenological cycles of temperate forests (Reed et al. 1994; Testa et al. 2018), the Sentinel data will first be deseasonalized before change is detected.
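As an illustration of the deseasonalization step, a minimal harmonic-regression sketch (one possible approach, not necessarily the one that will be used in the project) could look as follows:

```python
import numpy as np

def deseasonalize(values, day_of_year, n_harmonics=2):
    """Remove the seasonal (phenological) cycle from a reflectance or index time
    series by fitting and subtracting a low-order harmonic model."""
    v = np.asarray(values, dtype=float)
    t = 2.0 * np.pi * np.asarray(day_of_year, dtype=float) / 365.25
    design = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        design += [np.sin(k * t), np.cos(k * t)]
    A = np.column_stack(design)
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    return v - A @ coeffs   # residuals, i.e. the deseasonalized signal
```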
The next step will be to identify and validate the source of observed forest disturbances (drought, fire, storm, pest, disease, logging). This will be done by training a model to classify forest disturbances in the case study locations based on known occurrences of different disturbance sources, available through ground-based networks such as FORWIND, the European Forest Fire Information System (EFFIS), the Database of Forest Disturbances in Europe (DFDE) and national forest inventories (Forzieri et al. 2020; Patacca et al. 2021; San-Miguel-Ayanz et al. 2012), and on information contained in satellite images.
Finally, the forest disturbance detection and classification methodology will be used to “bridge the gaps” between cyclic NFI inventory observations and synthesize more frequent (sub-annual) statistics on European forest disturbance patterns, intensities, and sources. The inventory moments in the NFI cycles of the case study locations will be used to calibrate the statistics derived from satellite-based monitoring. The output will be a methodological framework to derive NFI-calibrated sub-annual remote sensing disturbance statistics for European forests.
This research will be carried out in the context of a starting PhD project. Initial results of the first sub-objective, namely the Sentinel-1 and Sentinel-2 based European forest disturbance monitoring for several case study locations will be presented.
Bastos, A. et al. 2020. “Direct and Seasonal Legacy Effects of the 2018 Heat Wave and Drought on European Ecosystem Productivity.” Science Advances 6(24): 1–14.
Ceccherini, Guido et al. 2020. “Abrupt Increase in Harvested Forest Area over Europe after 2015.” Nature 583(7814): 72–77.
Forzieri, Giovanni et al. 2020. “A Spatially Explicit Database of Wind Disturbances in European Forests over the Period 2000-2018.” Earth System Science Data 12(1): 257–76.
Gschwantner, Thomas et al. 2016. “Comparison of Methods Used in European National Forest Inventories for the Estimation of Volume Increment: Towards Harmonisation.” Annals of Forest Science 73(4): 807–21.
Herold, Martin et al. 2019. “The Role and Need for Space-Based Forest Biomass-Related Measurements in Environmental Management and Policy.” Surveys in Geophysics 40(4): 757–78.
Köhl, Michael, Stefanie Linser, and Kit Prins. 2020. State of Europe’s Forests 2020. FOREST EUROPE. http://rgdoi.net/10.13140/RG.2.2.12881.76643 (November 25, 2021).
McDowell, Nate G. et al. 2020. “Pervasive Shifts in Forest Dynamics in a Changing World.” Science 368(6494).
Moreno, A., M. Neumann, and H. Hasenauer. 2017. “Forest Structures across Europe.” Geoscience Data Journal 4(1): 17–28.
Palahí, Marc et al. 2021. “Concerns about Reported Harvests in European Forests.” Nature 592(7856): E15–17.
Patacca, Marco, Mart-Jan Schelhaas, Sergey Zudin, and Marcus Lindner. 2021. Database on Forest Disturbances in Europe (DFDE)- Technical Report.
Reed, Bradley C. et al. 1994. “Measuring Phenological Variability from Satellite Imagery.” Journal of Vegetation Science 5(5): 703–14.
Reiche, Johannes et al. 2016. “Combining Satellite Data for Better Tropical Forest Monitoring.” Nature Climate Change 6(2): 120–22.
———. 2021. “Forest Disturbance Alerts for the Congo Basin Using Sentinel-1.” Environmental Research Letters 16(2).
San-Miguel-Ayanz, Jess et al. 2012. “Comprehensive Monitoring of Wildfires in Europe: The European Forest Fire Information System (EFFIS).” In Approaches to Managing Disaster - Assessing Hazards, Emergencies and Disaster Impacts, InTech. http://forest.jrc.ec.europa.eu/team/person/4/detail/.
Schelhaas, Mart-Jan, Gert-Jan Nabuurs, and Andreas Schuck. 2003. “Natural Disturbances in the European Forests in the 19th and 20th Centuries.” Global Change Biology 9(11): 1620–33.
Sebald, Julius, Cornelius Senf, and Rupert Seidl. 2021. “Human or Natural? Landscape Context Improves the Attribution of Forest Disturbances Mapped from Landsat in Central Europe.” Remote Sensing of Environment 262(November 2020): 112502.
Seidl, Rupert et al. 2017. “Forest Disturbances under Climate Change.” Nature Climate Change 7(6): 395–402.
Testa, S., K. Soudani, L. Boschetti, and E. Borgogno Mondino. 2018. “MODIS-Derived EVI, NDVI and WDRVI Time Series to Estimate Phenological Metrics in French Deciduous Forests.” International Journal of Applied Earth Observation and Geoinformation 64: 132–44.
Beavers are the biggest natural ecosystem engineers present in Estonia. The country's numerous small streams, connected to the ditch network, and its large forest cover are well suited to beaver habitats. Beavers build dams across rivers, ditches and streams, impeding the natural water flow and causing flooding. The forests and vegetation on and around the flooded area change due to the new conditions.
Beavers create complex earthworks for their habitats. With the dams, lodges, flooding, felling of trees, tunnels, and canals, the effects on the landscape vary across the different parts of a beaver habitat. The impact of a beaver habitat is often large enough to be detected from space using medium-spatial-resolution optical Earth observing satellites such as Sentinel-2 (from ESA) or Landsat-8 (from NASA). Changes in the vegetation spectral reflectance signature measured by these satellites can give us information about beaver activity on the ground. Using remote sensing methods, we can detect and map beaver habitat locations and monitor their conditions efficiently over large areas.
Looking from above, it is clear that the beaver-affected area is very heterogeneous, and thus the spectral signatures detected from it vary greatly, even within the flooded area around a single beaver colony. Even so, the average values for areas affected by beavers and for healthy forest show large differences, whereas the differences between forests with different dominant tree species are quite small.
These differences in the average values can be very useful for detecting beavers. In Estonia, beaver populations are estimated using labor-intensive field work, but machine learning could help detect beavers over large areas quite effectively. Using the spectral signatures, as well as calculated indices, a good predictive model can be constructed. The calculated indices appear useful for distinguishing areas that are affected by beavers from areas that are not.
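A minimal sketch of computing two such indices from Sentinel-2 surface reflectance is given below (the specific bands and the choice of a Gao-type moisture index are illustrative assumptions, not the study's exact feature set):

```python
import numpy as np

def beaver_candidate_indices(red, nir, swir1):
    """Per-pixel NDVI and a Gao-type moisture index from Sentinel-2 surface
    reflectance bands (e.g. B04, B08, B11, passed as float arrays). Flooded,
    beaver-affected patches typically show lower NDVI and higher wetness than
    the surrounding healthy forest."""
    red, nir, swir1 = (np.asarray(b, dtype=float) for b in (red, nir, swir1))
    ndvi = (nir - red) / (nir + red)
    ndmi = (nir - swir1) / (nir + swir1)
    return ndvi, ndmi

# The per-pixel indices (or their averages over candidate areas) can then be fed,
# together with the raw band reflectances, into a machine-learning classifier.
```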
Forestry is a regionally dominant land use, but the carbon and climate impact of boreal forest management is not well characterized, partly because no large-scale maps of managed and primary (unmanaged) forests exist. Northern land has greened considerably during the last decades and there is evidence of the growing dominance of the boreal biome in the global land carbon sink. However, it remains unclear whether the observed changes are driven by management or by environmental change.
In this project we have mapped and classified the naturalness of more than 400 primary forests in Sweden. The primary forests are found from the temperate south to the boreal north of the country, and managed secondary forests are identified close to each primary forest, forming spatial pairs of primary and secondary forests that share climate and landscape history but not management. The primary forests represent a natural baseline of ecosystem states and changes. Changes in these forests are driven by regional to global environmental change such as elevated CO2 concentrations, warming or nitrogen deposition. The managed forests, on the other hand, are influenced by both environmental changes and management. By studying the states and changes in the primary forests we can gain new knowledge of how ecosystems would have changed and what their states would have been with no management. By contrasting changes and states in the managed ecosystems we can estimate the long-term influence of management on both states and changes.
In this project the maps are used in combination with more than 100,000 forest inventory plots, targeted field sampling and remote sensing to understand (i) changes in remotely sensed greening since 1984 and (ii) differences in carbon storage. Our results suggest that primary forests have greened considerably since 1984 and, at mature ages, faster than managed forests, and that they store more carbon than the paired managed forests.
The German Weather Service (Deutscher Wetterdienst, DWD) provides a forest fire danger index called WBI that is calculated from meteorological and site information at several hundred sites across Germany. It is based on the Canadian Fire Weather Index. The WBI is published daily from March to October and comprises five classes from 1 (very low fire danger) to 5 (very high fire danger). The WBI gives a solid estimate of fire danger on a regional basis, but it is not tailored to making predictions at the forest stand level. Its spatial resolution corresponds to several kilometers, and information on site conditions is not spatially explicit but interpolated from site data. Hence, the fire danger can represent a worst-case scenario for some stands but a best-case scenario for others.
In the presented project, we propose strategies to improve fire danger predictions by combining meteorological indices such as the WBI with forest information derived from satellite remote sensing. To achieve this objective, we developed several input data sets for two regions in Germany based on Sentinel-1, Sentinel-2, Landsat, and lidar data. Firstly, as invariant information that will be updated on a yearly basis, we derive information on tree species and forest structure. A tree species map with a spatial resolution of 10 m is created using Sentinel-2 time series and reference data from the state forest administrations; since fire danger differs greatly between species (the highest danger in German forests exists for Scots pine), such a classification is crucial. Information on forest structure is derived from airborne laser scanning data and from stereo photogrammetry. Secondly, high-resolution satellite-based estimates of leaf water content will be derived from Sentinel-2 and Landsat data, with an update cycle of between a few days and several weeks depending on cloud cover. These will be combined with meteorological input data, e.g. precipitation radar measurements, to assess the fuel moisture content. Thirdly, as a component with high temporal and spatial resolution, Sentinel-3 and Sentinel-1 data will be used as complementary data sources for estimating fuel moisture content. Finally, we derive indicators of fire predisposition and susceptibility based on the fire history of a pixel and its neighbors; to derive the fire history, we use Landsat time series to map burned areas annually for the past thirty years.
Although larger fires have to date been rare and usually quickly contained in Germany, satellite remote sensing is a useful tool for mapping fire history and fire danger, especially because fire danger is expected to keep rising due to climate change.
Forests shape the face of the earth and cover over 30% of the land surface. Changing climate and increases in extreme events such as windthrow, fire, parasites and drought have highlighted the vulnerability of our forests. It is therefore necessary to assess forest health at a sufficient spatial and temporal resolution. Traditional, ground-based forest monitoring offers valuable insights into forest biomass stock. However, due to its typically low temporal resolution and limited area coverage, it does not portray changes in forest health conditions adequately. Remote sensing of forests has opened new pathways, but is still challenged to provide high-resolution information on forest health conditions and tree mortality. A way forward could be the use of existing aerial imagery, such as orthophotos, which allows large spatial coverage at the high resolution needed to assess the dynamics of forest canopies, including tree mortality, in detail. To map canopy mortality at such a large scale we need to develop automatic tools that can identify forest conditions.
Here we used aerial images to explore the effects of the 2018 and 2019 summer droughts on temperate forests in Luxembourg. To this end, we utilized a Convolutional Neural Network (CNN) for image segmentation and classification. Our initial data set consists of ~9,000 manually mapped polygons of dead canopy cover that were acquired for the years 2017 and 2019. The objectives of this project are (1) to identify a U-Net CNN architecture that renders the best results with our small data set, (2) to train a model to identify canopy mortality in orthophotos of Luxembourg for all years, and (3) to determine the area and biomass of dead trees for each year.
We found that the EfficientU-Net++ architecture outperformed conventional U-Net architectures in prediction accuracy and resource efficiency. The test set dice score of the trained model was 0.76 (excluding background pixels), indicating a good performance. Preliminary results suggest that 550 ha of forest canopy that were classified as alive in 2017 were classified as dead in 2019, 86.5% of which were coniferous and 13.5% were deciduous trees. In summary, the combination of using ortho imagery and CNNs appears to be a promising tool for obtaining annual forest dynamics, particularly canopy mortality at high spatial resolution. Our results are a valuable resource for further research on the driving factors of forest mortality.
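For reference, the Dice score reported above can be computed for a binary dead-canopy mask as in the following minimal sketch (illustrative only, not the exact evaluation code used in the project):

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient for a binary canopy-mortality mask; only the positive
    'dead canopy' class is scored, i.e. background pixels are excluded."""
    pred = np.asarray(pred_mask).astype(bool)
    true = np.asarray(true_mask).astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```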
Tropical forest extent monitoring and mapping are central to ecosystem preservation in the 21st century. Satellite remote sensing has proven essential for assessing deforestation in wide and remote areas at regional scale. Nevertheless, annual assessments using optical images tend to show their limits, especially in the African tropical forest, where semi-persistent cloud coverage prevents short-duration, small-scale events from being properly spotted. The seven-year archive of C-band SAR data from Sentinel-1 now provides a suitable basis for tackling the challenge of continuous forest monitoring, thanks to one acquisition every 12 days in all tropical regions of the world. Recent early-warning systems have shown the interest of using SAR data to reduce the temporal scale at which deforestation events are sensed. Most of the time, those systems trigger alarms at every acquisition but require a mid-term period to confirm their detections.
We introduce an autonomous system using Sentinel-1 C-band SAR data to monitor and map deforestation in the Congo Basin, down to a 1-month temporal resolution at 10 m. The model is fully functional as soon as the acquisitions have been preprocessed and does not need further acquisitions to confirm forest loss.
The model is trained with quarterly (trimestral) features extracted from 6 to 7 acquisitions of normalized Sentinel-1 images. Amplitudes and coherences provide mean, median, standard deviation and extreme-quantile values for every pixel, which determine whether the pixel indicates a land cover change from forest cover to bare soil. The model achieves mean F1 scores for the quarterly binary classifications (stable or disturbed forest) of 76.85% (± 15.4%, one-standard-deviation confidence interval). As a second step, the monthly changes are identified by classifying the same 3-month features, offset every month.
The initial training samples used to calibrate the model were delineated in the surroundings of Pokola in the Republic of the Congo. The mapping assessment reveals small-scale events of road construction, slash-and-burn farming and selective logging, confirming the operability of the model across various disturbance typologies. The application of the model to the whole of Equatorial Guinea is ongoing and still needs to be validated.
Tree Cover Density (TCD) is essential information for defining forest areas, as it is one of several indicators used to discriminate forest areas under a national forest definition. It has wide applicability in the context of sustainable forest management (SFM), for the Reducing Emissions from Deforestation and Forest Degradation (REDD+) policy and for monitoring forest resources globally. Within the Horizon 2020-funded REDDCopernicus project, the capacities and requirements for a future REDD+ and Forest Monitoring Service Component within the European Copernicus Land Monitoring Service were evaluated. The main geospatial products considered in the design included Sentinel-2 Annual Composites, Tree Cover Density, Tree Cover Presence & Seasonality, Tree Cover Presence Change and Tree Cover Disturbance. The developments undertaken to improve the workflow and accuracy of TCD mapping led to a set of tools, which are described quantitatively and qualitatively hereafter.
Supervised machine learning in the field of remote sensing (including TCD mapping) generally requires input data from two conceptual groups: supervised training data (ground truth) to train a classifier or regressor, and remotely sensed image data on which the prediction is made. In this study, Copernicus Sentinel-2 imagery is considered an excellent choice as input for prediction due to its good temporal data availability, large area coverage and free availability. Delineating TCD ground truth data on Sentinel-2 imagery, however, is more challenging because of its relatively low spatial resolution and the consequent difficulty of delineating tree or forest boundaries. Very High Resolution (VHR) imagery is thus needed for collecting ground truth data. Apart from the two groups of input data (Sentinel-2 for prediction, VHR for ground truth creation), other important aspects to consider for an operational system are the machine learning (regression) algorithms, their sensitivity to training sample size and quality, and the tuning of algorithm meta-parameters. A final component of an operational system is user interaction and software engineering.
In this study a self-contained operational system is presented to conduct TCD mapping on a regional to national scale. The system includes components for generating training samples directly from Web Map Service (WMS) sources, tuning and running regression models, and downloading and predicting on Sentinel-2 multitemporal image stacks. These components are currently amalgamated within three QGIS plugins (a single plugin is foreseen in future development). The system is designed to be driven by domain experts. The only external requirement besides the freely available QGIS software and the plugins is an internet connection.
A WMS training data collection strategy is presented and profiled. For supervised machine learning approaches within remote sensing, the norm is typically to collect training data on VHR imagery sourced from commercial or non-commercial image providers. The image sourcing step typically adds additional time and resource investments, or adds to project and organisational overhead. WMS raster satellite data sources are ubiquitous in web mapping software and desktop GIS applications. The spatial resolution of these image mosaics is commonly high, as they are compiled from VHR satellite imagery, often with sub-meter spatial resolution. WMSs are designed mainly as a visual aid; however, physical data access is also possible, which greatly broadens their usefulness and their capacity for integration into operational systems.
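As a hedged illustration of such programmatic WMS access, the sketch below uses the OWSLib client; the endpoint URL, layer name and bounding box are placeholders, not the service actually used in this work.

```python
# Illustrative WMS data access with OWSLib (names and extents are hypothetical).
from owslib.wms import WebMapService

wms = WebMapService("https://example.org/wms", version="1.3.0")  # placeholder endpoint
print(list(wms.contents))                       # list available layers

img = wms.getmap(
    layers=["vhr_mosaic"],                      # hypothetical layer name
    srs="EPSG:3857",
    bbox=(924000, 6270000, 926000, 6272000),    # placeholder extent
    size=(2048, 2048),
    format="image/png",
)
with open("training_region.png", "wb") as f:    # local image region for sampling
    f.write(img.read())
```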
A machine learning extension to the GAFSEG image segmentation QGIS plugin [1] is presented here to generate samples for TCD generation. In short, local image regions within a WMS source scene are segmented with an alpha/omega-global tree [2], and some value-added object attributes are calculated [3]. A user tunes the algorithm such that no under-segmentation is present, with some over-segmentation allowed. Next, positive and negative samples are interactively, and iteratively, collected with feature selection tools over the segmented vector layer(s). CatBoost [4], or another meta-optimised classifier [5], is used to classify the local image region (classification at this stage, not yet regression). A user thus collects several such localised regions of binary tree maps for the greater study area, using a hand-crafted combination of image segmentation and classical machine learning. Such an approach yields more precise boundary delineation and higher classification accuracy than generating samples by digitising or by a pixel-based classification approach [6]. In addition, the developed user interface allows for fast iteration on sample collection. Domain experts verify that no significant changes have occurred in the localised sampling regions relative to the subsequent Sentinel-2 data used for classification.
After sample collection (VHR binary vegetation maps), a user downloads multitemporal Sentinel-2 data stacks with the streamlined Sentinel-2 downloader tool. The tool allows for visual selection of areas of interest and data previewing, and provides a swath of options to tailor the data sourcing. The tool additionally calculates value-added derivatives from the temporal data stack, e.g. temporal features, kernel filters and latitude/longitude attributes. The VHR ground truth data are subsequently down-sampled to match the 10 by 10 m spatial resolution of the Sentinel-2 imagery. Down-sampling the ground truth data results in 10 m x 10 m cells representing percentage tree cover. A CatBoost regression model is then built using the down-sampled ground truth data and the Sentinel-2 temporal features as the dependent and independent variables, respectively. The trained CatBoost model is then applied to the whole study area. In addition, knowledge uncertainty is generated [7]. A domain expert may use this uncertainty information for generating targeted, additional training samples and refining the produced TCD map.
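A minimal sketch of this regression step is given below, assuming the down-sampled ground truth (per-cell tree-cover percentage) and the Sentinel-2 temporal features have already been flattened into tabular arrays; the file names are illustrative.

```python
# Sketch of the CatBoost regression step (input file names are hypothetical).
import numpy as np
from catboost import CatBoostRegressor

X_train = np.load("s2_temporal_features.npy")    # (n_cells, n_features)
y_train = np.load("tcd_ground_truth.npy")        # (n_cells,), tree cover in percent

model = CatBoostRegressor(iterations=1000, learning_rate=0.05,
                          loss_function="RMSE", verbose=200)
model.fit(X_train, y_train)

X_scene = np.load("s2_features_full_scene.npy")  # all cells of the study area
tcd_pred = np.clip(model.predict(X_scene), 0, 100)
# Knowledge uncertainty as in [7] could additionally be derived from CatBoost's
# virtual-ensemble predictions (availability depends on the CatBoost version).
```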
A major consideration for such an operational system is the number of training samples required and its scalability relative to TCD accuracy. Three common regressors are compared with 20 different training sample sizes in terms of achieved accuracy and the cumulative time needed for sampling. The derived indicators may guide operators on the number of samples commonly required (although they do not account for geographical diversity). The proposed approach is demonstrated for mapping dry forests in study areas in Mozambique and Paraguay.
References
[1] M. Probeck et al., "CLC+ Backbone: Set the Scene in Copernicus for the Coming Decade," 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, 2021, pp. 2076-2079, doi: 10.1109/IGARSS47720.2021.9553252.
[2] Ouzounis, G.K. Segmentation strategies for the alpha-tree data structure. Pattern Recognition Letters. 2020, 12, 232-239.
[3] C. Song, F. Yang and P. Li, "Rotation Invariant Texture Measured by Local Binary Pattern for Remote Sensing Image Classification," 2010 Second International Workshop on Education Technology and Computer Science, 2010, pp. 3-6, doi: 10.1109/ETCS.2010.37.
[4] Dorogush, Anna Veronika, Vasily Ershov, and Andrey Gulin. "CatBoost: gradient boosting with categorical features support." arXiv preprint arXiv:1810.11363 (2018).
[5] Wang, Chi, et al. "FLAML: A Fast and Lightweight AutoML Library." Proceedings of Machine Learning and Systems 3 (2021).
[6] Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Feitosa, R.Q.; van der Meer, F.; van der Werff, H.; van Coillie, F., et al. Geographic object-based image analysis – towards a new paradigm. ISPRS Journal of Photogrammetry and Remote Sensing. 2014, 87, 180-191.
[7] Malinin, Andrey, Liudmila Prokhorenkova, and Aleksei Ustimenko. "Uncertainty in gradient boosting via ensembles." arXiv preprint arXiv:2006.10562 (2020).
Acknowledgements
This work was supported by the European Union’s Horizon 2020 Work Programme 2018–2020, Leadership in Enabling and Industrial Technologies – Space, Coordinated Support Action under Grant Agreement No. 821880.
Monitoring forest disturbance in near real-time (NRT) is essential for mitigating deforestation. Recent advances have improved the ability of NRT algorithms to detect forest disturbances, but two key issues remain. First, many multi-source approaches rely solely on optical data, which can be a problem in tropical forests with persistent cloud cover. Second, binary approaches contain no information about the confidence of the detection, which can be problematic for ground-truthing campaigns. Additionally, there is little discourse on the tradeoff between the sensitivity to and confidence in disturbance detections, which can be crucial in a monitoring context.
We propose a novel NRT method that employs multi-source satellite data (Landsat-8, Sentinel-2, Sentinel-1) to create daily probability maps of disturbance across a forested landscape. Instead of fusing the data, we use a simple aggregation with an exponentially weighted moving average (EWMA) of time-series residuals. Probabilities of disturbance are then calculated using a logistic function of classified, weighted residuals. Weights for the EWMA are determined via multiobjective optimization over three metrics calculated from the resulting probability: temporal latency, false positive rate, and false negative rate. With training data from north-central Myanmar, we apply our method to a 10 m pixel grid in Chatthin Wildlife Sanctuary after a 100-fold cross-validation. Preliminary results show a clear trade-off range, with median metric values ranging from 2 / 0.0338 / 0.5517 to 7 / 0.0066 / 0.1379 for latency, false positive rate, and false negative rate, respectively.
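To make the aggregation concrete, the following sketch shows the generic EWMA-plus-logistic idea on standardised residuals; the weights, scaling constants and thresholds are placeholders and not the optimized values reported above.

```python
# Conceptual sketch of the EWMA aggregation and logistic probability mapping.
import numpy as np

def ewma(residuals: np.ndarray, lam: float = 0.3) -> np.ndarray:
    """Exponentially weighted moving average along the time axis (axis 0)."""
    out = np.empty_like(residuals)
    out[0] = residuals[0]
    for t in range(1, residuals.shape[0]):
        out[t] = lam * residuals[t] + (1.0 - lam) * out[t - 1]
    return out

def disturbance_probability(z: np.ndarray, k: float = 4.0, x0: float = 1.5) -> np.ndarray:
    """Logistic mapping of the smoothed residual statistic to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-k * (z - x0)))

# daily stack of standardised residuals for one pixel block: (time, rows, cols)
residuals = np.random.randn(60, 100, 100)
prob = disturbance_probability(ewma(residuals))   # daily disturbance probabilities
```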
Thus, our method improves NRT monitoring in two ways. First, the use of Sentinel-1 allows us to retain observations regardless of cloud cover, which contributes heavily to the creation of a daily probability time series. Second, our simple aggregation quickly combines disparate data, and our use of probabilities yields confidence measures for each disturbance while demonstrating a clear tradeoff. Depending on user specification, the optimized weights can either favor sensitivity to detection (prioritizing shorter latencies at the cost of more false detections) or favor confidence (prioritizing fewer false detections at the cost of longer latencies). We envision that this combination of a simple multi-source approach with optimized probabilities will further enhance NRT monitoring.
In the last 20 years, several extreme events such as drought periods and heavy storms, most likely a consequence of the climate crisis, have occurred in Germany and caused severe secondary damage in forest areas. Bark beetle calamities increased after the 2003, 2006, 2015 and 2018-2020 drought periods, since this secondary damage is most prevalent in already stressed forest areas. Such events are predicted to occur with higher frequency and longer duration. As a result, secondary damage is expected to be substantially more pronounced in the future, especially in mono-culture forest stands.
Within the Helmholtz knowledge transfer project “forest condition monitor”, one of our aims is to derive deviations from normal, i.e. healthy, conditions of forest areas based on temporal changes in the signal captured by satellites. This is useful in the context of anticipatory silviculture and provides valuable information for policy makers. Knowledge of the spectral normal condition is essential for deriving such deviations. Norway spruce and Scots pine in particular have been heavily affected by secondary damage in Germany over the last 20 years.
In this analysis, our aim is to extract the seasonal spectral evolution of several Norway spruce and Scots pine stands using MODIS NDVI (Normalised Difference Vegetation Index) time series, which offer a spatial resolution of 250 m and a temporal resolution of one day (at mid latitudes) for the NDVI-relevant spectral bands. Preselected forest areas with known time and cause of disturbance allow us to focus the retrospective analysis on these areas, i.e. to discriminate between healthy and disturbed forest stands and tree species.
Time series data of these areas will be used for trend analysis with the BFAST algorithm, which decomposes a time series into seasonal, trend and remainder components and thus allows gradual or abrupt changes to be derived. Breakpoints followed by gradual downward changes provide an indication of forest disturbance. As disturbance area and time are already known, this can be used to identify the most likely normal condition within a forest stand before the gradual changes set in. The MODIS NDVI time series facilitate the detection of the normal condition, while Landsat and, where available, Sentinel-2 images with 30 m and 20 m spatial resolution, respectively, allow an additional focus on the normal condition. This enables us to derive the spectral information of the detected normal condition for all four growing seasons with high spatial and spectral resolution for two tree species and various growing areas.
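BFAST itself is typically run from R; as a rough stand-in, the sketch below uses an STL decomposition in Python to separate a 16-day NDVI series into seasonal, trend and remainder components and to inspect the trend slope. It does not reproduce BFAST's breakpoint testing, and the series shown is synthetic.

```python
# STL decomposition as a simplified stand-in for BFAST (synthetic NDVI series).
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

ndvi = pd.Series(np.random.rand(23 * 20),                  # 20 years of 16-day composites
                 index=pd.date_range("2002-01-01", periods=23 * 20, freq="16D"))

res = STL(ndvi, period=23, robust=True).fit()              # 23 composites per year
seasonal, trend, remainder = res.seasonal, res.trend, res.resid

# crude trend slope (NDVI change per year) as an indicator of gradual decline
slope = np.polyfit(np.arange(len(trend)) / 23.0, trend.values, 1)[0]
print(f"trend slope: {slope:.4f} NDVI units per year")
```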
Additionally, using the knowledge about the observed disturbances, the characteristics of gradual changes will be analysed. Their length and slope of decline can differentiate the main cause of disturbance, as already shown in the literature using Landsat and Sentinel-2 data. Here we want to test whether MODIS time series are appropriate for this approach. If successful, we can apply the approach to delineate forest disturbances in unknown forest areas and derive the normal condition without prior knowledge. We present our preliminary results and discuss them with the scientific community.
Volcanic eruptions can damage or destroy surrounding forest, with the potential to alter its characteristics in the long term. The impact of eruptions on forest has not been systematically studied with satellite data, although individual studies have demonstrated that explosive eruptions in particular produce an impact measurable from satellites. The impact of an eruption and the rate of forest recovery both depend on eruption characteristics, such as temperature, volume and spatial distribution of ejected material, as well as the ecological setting. Here, we explore methods using both optical and radar satellite data to map the impact of the 2015 eruption of Calbuco volcano (Chile) and the surrounding forest’s continuing recovery.
The nature of damage to vegetation caused by a volcanic eruption depends on the eruption style, magnitude and duration. Large explosive eruptions cause intense damage in the near field through mechanisms including pyroclastic density currents and lahars, while more extensive but less destructive impacts are caused by distal tephra fall deposits. The 2015 eruption of Calbuco provides an illustrative recent example of such processes. The eruption started on 22 April 2015 and consisted of three explosive episodes between 22 and 23 April, producing large buoyant ash plumes, pyroclastic flows and lahars. These disturbed the temperate broadleaf forests around Calbuco up to 15 km from the eruption centre.
Here, we use the 2015 Calbuco eruption to assess the suitability of time series derived from optical and radar imagery for tracking initial impacts on vegetation and rates and patterns of forest recovery. The explosive episodes caused a drop in the normalised difference vegetation index (NDVI) derived from Landsat 8 imagery that persists in some areas to this day and correlates with flow deposits and ash-fall distribution. In addition, we use Sentinel-1 radar imagery, which is not restricted by cloud coverage, to produce time series of radar backscatter and phase coherence. We will develop approaches for tracking the impact of volcanic eruptions on forests with remote sensing data that can be applied globally using freely available data, in different ecosystems and for different styles of eruption. Our eventual aim is to develop a toolkit for identifying the footprint of past volcanic eruptions on forested environments.
Sentinel-2 time series provide detailed information on forest stand properties, such as species composition or age. Although they have been used for species mapping in many studies so far, the vast majority of these are either conducted for smaller regions or classify only broad forest classes for larger areas. Here, we propose an approach to classify dominant tree species using the Forest Data Bank database, which contains information on stand species composition for the Polish State Forests across the entire country, and time series of Sentinel-2 data processed in Google Earth Engine (GEE). The analysis was performed for the entire area of Poland, covering more than 300 thousand km2, with a forest cover of approximately 30%. GEE, a freely accessible cloud-based platform, makes it possible to compute classifications for large areas and, combined with high-resolution Sentinel-2 data, creates unprecedented opportunities for mapping forest species. In this study, Sentinel-2 composites, specifically Spectral-Temporal Metrics (STMs), were generated in GEE for the study area, and the classification was performed using the Random Forest algorithm. Results show that the accuracy of the country-wide forest stand species mapping exceeds 80%, which is a very promising result considering the large size of the study area. STMs, particularly average values of cloud-free pixels calculated separately for the spring, summer, and autumn seasons, contributed to the high classification accuracy. However, challenges persist and result in misclassifications; for example, the very large size of the study area makes it difficult to generate cloud-free, high-quality pixels. Frequent cloud contamination limits the possibility of obtaining representative pixels and, since the study area spreads across many Sentinel-2 relative orbits, imagery is acquired every two or three days for some parts of Poland but only every five days for others. Producing reliable training data, and with it the classification of less common tree species, also remains difficult.
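For illustration, a hedged sketch of such seasonal spectral-temporal metrics in the Google Earth Engine Python API is given below; the asset ID, cloud-masking rule and date ranges are indicative rather than those used in the study.

```python
# Seasonal Sentinel-2 spectral-temporal metrics in the GEE Python API (illustrative).
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([14.0, 49.0, 24.2, 55.0])      # rough bounding box of Poland

def mask_clouds(img):
    scl = img.select("SCL")
    good = scl.neq(3).And(scl.neq(8)).And(scl.neq(9)).And(scl.neq(10))
    return img.updateMask(good)

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
        .filterBounds(aoi)
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 60))
        .map(mask_clouds))

def seasonal_mean(start, end):
    return s2.filterDate(start, end).mean()                  # mean of cloud-free pixels

stm = (seasonal_mean("2021-03-01", "2021-06-01")             # spring
       .addBands(seasonal_mean("2021-06-01", "2021-09-01"))  # summer
       .addBands(seasonal_mean("2021-09-01", "2021-11-30"))) # autumn
# 'stm' can then feed ee.Classifier.smileRandomForest for the species classification.
```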
Remote sensing methods provide non-destructive and repeatable observations of the state and evolution of the various ecosystems of our planet. Among the various sources of remote sensing data, satellite-based passive optical systems such as Landsat and Sentinel-2 offer a valuable source of information, especially for terrestrial ecosystems, due to their rich spectral coverage and long time series of observations, which in the case of Landsat date back to the early 1980s. The presence of spectral channels covering the near- and mid-infrared regions allows, for example, direct retrieval of quantitative vegetation parameters such as chlorophyll and water content in the foliage or leaf area index. However, obtaining these parameters may be difficult due to the heterogeneous nature of the vegetation (e.g. mixed forest stands), specific phenological trends of vegetation within a single growing season, and differences in spatial or spectral resolution between the Landsat (TM, OLI) and Sentinel-2 (MSI) sensors. NASA GSFC is developing the Harmonized Landsat Sentinel-2 (HLS) product, which brings together observations from both of these systems to produce one seamless, dense time series of observations, further improving our ability to analyze vegetation time series. An important aspect in the development of any quantitative remote sensing product is to validate it against ground surveys and determine the product's errors.
The aim of this study is the spatial mapping of selected quantitative parameters of highly heterogeneous floodplain forest stands (5 tree species) using advanced machine learning methods implemented in the ARTMO toolbox. A large database of ground-based surveys (7 campaigns covering different phenological phases of vegetation on 18 plots) was used to develop different machine learning models over time series of satellite observations from 1) Landsat 8, 2) Sentinel-2 and 3) an HLS product combining both systems.
This allowed us to determine the accuracy of the quantitative vegetation parameter retrieval and the robustness of the trained models (i.e. their application to different tree species and their phenological phases). Assessing the potential of Landsat 8, Sentinel-2, and HLS time series to retrieve these products with respect to their spatial and temporal resolution is also considered an integral part of the study. By doing so, we will be able to recommend an appropriate dataset of time series observations for each of the selected quantitative vegetation parameters and study the benefits of improved spatial (Sentinel-2) vs. temporal (HLS) and spectral (Sentinel-2) resolution for monitoring of highly heterogeneous forest ecosystems.
Rainforest deforestation is a major environmental challenge, causing loss of biodiversity, erosion and the release of stored carbon into the atmosphere, thereby contributing to climate change. Counteracting rainforest deforestation begins with large-scale monitoring of individual events and finding their underlying primary drivers. The University of Maryland devised the Global Land Analysis and Discovery (GLAD) alerts for detecting changes in rainforests, providing both time and location information. These alerts are openly available at Global Forest Watch, but lack a classification of what initiated the change. The goal of this project is to classify the GLAD alerts over West Kalimantan into primary drivers. Examples of drivers include road construction, mining, forest fire and agricultural activities.
To classify the alerts we aim to build a prototype pipeline using the freely available Sentinel-2 imagery. The envisioned classification system will classify GLAD alerts in monthly batches. For each batch a mosaic will be made from the months following the batch. This mosaic will be segmented using a Convolutional Neural Network and the segmentation will then be used to classify each of the GLAD alerts.
The aim is to use MAJA for generating cloud masks and WASP to generate a cloud-free mosaic. If cloud cover makes it unfeasible to use S2 within the project, Planet imagery will be considered instead. If so, another cloud detection and mosaicking scheme will be investigated.
The suggested methodology for segmenting the image is semantic segmentation using a U-Net-like architecture. Currently there is no primary-driver dataset for deforestation alerts to train such a network, hence one will be collected by domain experts. In addition to the imagery and the labels, the input data can be enriched with other data sources; for example, elevation models can help to differentiate classes, since rivers always flow downslope while roads do not. Finally, only a tiny fraction of the GLAD alerts are labeled, but a huge number of them are available from previous years. Building on this unlabeled dataset, a semi-supervised training scheme can be implemented.
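To make the envisioned segmentation step concrete, the sketch below shows a deliberately small U-Net-like network in PyTorch; the channel counts, depth, input bands and number of driver classes are placeholders, not the project's final architecture.

```python
# Minimal U-Net-like model for semantic segmentation (illustrative sizes).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_channels=4, n_classes=6):        # e.g. 4 S2 bands, 6 driver classes
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                                  # per-pixel class logits

model = MiniUNet()
logits = model(torch.randn(1, 4, 256, 256))                   # -> (1, 6, 256, 256)
```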
Even though the deforestation alerts themselves are available, it is currently difficult to prioritise which to act upon. If successful, the project can help provide more information to local governments on which alerts demand attention. Although this project focuses on West-Kalimantan, the methodology is transferable to other regions given additional training data. We expect initial results by January 2022 and the project finishes June 2022.
Mosaic products vary widely depending on the use case, leading to disparate quality and pricing. For example, mosaic products suitable for free-to-use tools such as Google Maps satisfy different use cases than mosaic products for analysing long-term land surface changes. Analysis Ready Mosaics (ARMs) consist of satellite images sourced from spatially and temporally variant sources, composited into a geometrically and radiometrically seamless output product, and include accompanying metadata to accelerate automated, machine-driven information product generation.
ARMs must resolve large and small geometric inconsistencies in input images, which would otherwise degrade sharpness and radiometric accuracy. Radiometric inconsistencies between input images can lead to sharp differences along image bounds and to contamination of composited measurements. As the area of interest grows, the computational complexity of correcting geometric and radiometric inconsistencies increases quadratically in both the spatial and temporal dimensions, limiting the scale of many ARMs. Even after solving these technical challenges, there is still the question of selecting the most representative measurement for the final mosaic product. EarthMosaics is a service offered by EarthDaily Analytics that tackles these challenges to produce Analysis Ready Mosaics using satellite data at scale. In this talk, we highlight the geometric and radiometric differences present in the Sentinel-2 L2A datasets, demonstrate the capabilities of the EarthMosaics service to generate geometrically and radiometrically consistent ARMs, and explore future ARMs and use cases for wide-area environmental mapping and monitoring as more diverse data sources, such as the EarthDaily Constellation in 2023, come online.
EarthDaily Analytics’s mosaics leverage the best available science at scale to build highly consistent products. One common approach is to geometrically align optical data before compositing, a step called “block adjustment.” The block adjustment problem uses measurements between spatially overlapping datasets to derive optimal projection parameters for each individual image such that all input images are geometrically consistent. This approach has been shown to work well in small-scale mosaic applications where spatial and temporal extents are limited. However, in the case of large-scale mosaics, where increasing the breadth and depth of datasets causes the number of overlaps to increase quadratically, this approach quickly becomes computationally prohibitive. This work uses complexity-reduction techniques to reduce the resources required for block adjustment while achieving consistent geometric improvements. Similarly, common radiometric artifacts in mosaics include ‘seamlines’ along image boundaries and other atmospheric differences such as clouds, shadow, and haze. EarthMosaics addresses these issues through rigorous data filtering to eliminate clouds, shadow and haze; comprehensive pixel selection of the best available data; normalization of imagery to science missions; and seamless balancing throughout the entire mosaic. EDA’s ARMs also include accompanying masks that can be used to track measurements to their source and to automatically filter pixels contaminated by clouds, shadows, and haze.
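As a toy illustration of the block-adjustment idea (and not of EarthDaily's actual algorithm), the sketch below assigns each image a 2-D shift and solves a sparse least-squares problem over made-up pairwise tie-point offsets, with one image fixed as reference.

```python
# Toy block adjustment: per-image shifts from pairwise offsets (synthetic values).
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

n_images = 4
# (i, j, dx, dy): measured offset of image i relative to image j, in metres
pairs = [(1, 0, 2.1, -0.4), (2, 1, -1.0, 0.8), (3, 2, 0.5, 0.5), (3, 0, 1.4, 0.9)]

A = lil_matrix((len(pairs), n_images - 1))       # unknowns t_1..t_{n-1}; t_0 fixed to 0
bx = np.zeros(len(pairs)); by = np.zeros(len(pairs))
for row, (i, j, dx, dy) in enumerate(pairs):
    if i > 0: A[row, i - 1] = 1.0                # each observation models t_i - t_j
    if j > 0: A[row, j - 1] = -1.0
    bx[row], by[row] = dx, dy

tx = lsqr(A.tocsr(), bx)[0]
ty = lsqr(A.tocsr(), by)[0]
print("per-image shifts (x, y):", list(zip([0.0, *tx], [0.0, *ty])))
```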
As data volumes continue to increase, mosaics have become an increasingly important tool for data reduction and analysis at scale. As constellation revisit times shorten to weekly, daily, and even multiple acquisitions per day, wholly new classes of mosaics are necessary to capture land-surface changes while maintaining data provenance and accountability. Mosaics that are updated automatically and iteratively with newly available data in areas of high value will become increasingly important. This will enable ongoing and regular monitoring of forests for inventory, fire hazard risk, and fire extent and severity mapping. These products also provide health indicators for ecologically sensitive areas such as wetlands. Large-area, regularly updated ARMs provide a pulse of the Earth, using annual phenological changes and other land-surface changes to improve hydrological flood risk mapping, provide input variables for climate modelling, and give policymakers and NGOs the tools for remote stewardship and awareness.
Forests cover almost one third of the territories of Germany and Luxembourg and provide a wide range of economic benefits and ecosystem services. At the same time, forests are among the ecosystems affected by regional impacts of climate change. Public and private forests are subject to several biotic and abiotic stressors which may lead to increased tree mortality, reduced tree growth, degraded wood quality, and reduced amenity value. Increasing temperatures are expected to cause more frequent mass propagation of harmful insects (e.g. bark beetles and forest defoliators), as already seen in recent years. Additionally, under a changing climate, an expansion of Mediterranean pathogen species to more northerly countries is highly probable. To face this challenge, monitoring and mapping forest risk and its spatial extent has become an important issue in supporting environmental decision-making. Nevertheless, continuously operating networks for monitoring forests are point-based, which greatly limits their effectiveness in space and time.
Remote sensing is a powerful tool as it allows for timely forest vitality assessment over larger areas. Stress and disturbances cause changes in the biochemical (pigments, water content) and structural (LAI, tree cover density) properties of forest components such as trees or stands. Many studies have shown that these changes can be measured by different sensors, for instance pigment change in the visible to red-edge bands, canopy water content in the short-wave infrared, structure by laser scanning, and transpiration in the thermal infrared. Knowledge of these effects has been used to study single agents of disturbance, such as bark-beetle-induced disturbance, drought-induced tree physiological stress and mortality, and forest disturbance and recovery, among others. However, often not only a single disturbance agent is present in a forest ecosystem; rather, forest vitality loss (FVL) is driven by a complex and interwoven set of causes.
Recently, more effort has been devoted to developing sophisticated methods for mapping the occurrence of multiple FVL drivers using time-series analysis in combination with machine and deep learning approaches. The goal of this paper is to highlight the most current and efficient approaches and data sources for mapping FVL drivers from a scientometric perspective. We review the different methodologies for single and multiple FVL drivers, including the most common earth observation (EO) sensors, as well as classification algorithms, input variables, geographical distribution, and the most frequent disturbance types. To account for the most up-to-date literature, we focused on articles published between 2010 and 2021.
Our preliminary results indicate that Landsat is the leading EO sensor, coupled with the random forest classifier. Spectral-temporal trajectories derived from time series of vegetation indices or spectral bands are the predominant input variables for the classification. The geographical distribution of the current state of the art shows that more research has been conducted in North America and Europe compared to other regions. Disturbances such as bark beetle, fire, harvest, and windthrow are discussed in the majority of the studies. Nevertheless, few studies have explored other sensors, especially Sentinel-1 and Sentinel-2. Similarly, deep learning approaches are not commonly implemented.
Scientometric tools allow us to provide recommendations on how to improve current methods using the full capabilities of multi-sensor data fusion in combination with the latest developments in deep learning.
In 2019 ESA funded the TomoSense campaign with the aim of exploring new concepts in forest monitoring. A forested area was imaged with several different sensors including Synthetic Aperture Radar (SAR) and laser scanners; in-situ measurements were carried out too. Radars were operated in different acquisition modes, whereas the laser scans include both terrestrial (TLS) and airborne (ALS or LiDAR) surveys. Such a variety of data enables estimates of biophysical parameters to be compared and assessed. Also, bistatic tomographic SAR acquisitions provided the opportunity to test new potential space-borne configurations and corresponding retrieval algorithms.
The test site of TomoSense is located within the Kermeter area in the Eifel National Park in North Rhine-Westphalia, Germany. The Kermeter is an upland region, up to 528 m above sea level, covered by one of the largest contiguous deciduous forests in that region. It covers an area of 3,592 hectares. Beech woods dominate, in places with trees that are over 200 years old. Oak woods hold sway on the drier, southern slopes, interrupted by rocky outcrops. About 550 hectares still consist of spruce trees, a consequence of reforestation measures after the Second World War. However, the spruce stock is continuously being reduced by thunderstorms, drought and bark beetle infestations in favor of deciduous woods.
The SAR dataset provided by MetaSensing consists of acquisitions at three frequency bands: P- (wavelength approximately 65 cm), L- (20 cm) and C-band (5 cm). They all feature repeat-pass (tomographic), fully polarimetric acquisitions and opposite views. Furthermore, L- and C-band data are simultaneously acquired by two aircraft flying in close formation, providing two monostatic and two bistatic datasets at the same time (ping-pong mode). Tomographic acquisitions enable the forest layer to be imaged in 3D, the standard image pixel being replaced by voxels. This allows the visibility of different forest features to be compared at different wavelengths, possibly highlighting synergies and complementarity. Backscattering structures located at different heights can be further characterized by their polarimetric signature, thus revealing the electromagnetic interaction with the vegetation. The simultaneously acquired monostatic and bistatic stacks can be processed either separately or jointly. In the first case, the two 3D reconstructions reveal the different scattering mechanisms characterizing each configuration. In the second case, they can be used to test a promising space-borne acquisition mode featuring correlation tomography. Correlation tomography exploits the correlations between mono-bistatic pairs only, rather than all-versus-all correlations as in standard tomography. This implies that the vegetated area may decorrelate between subsequent passes, greatly relaxing the requirements on the time lag between them. In this configuration, two satellites fly in formation rather than one as in standard repeat-pass operation; however, one satellite can be a low-budget, receive-only platform, keeping the overall system inexpensive.
Terrestrial laser scanner measurements were carried out by University College London (UCL). Measurements were gathered at 5 locations spanning the range of forest types. Each plot is 0.25 – 0.5 ha (50 – 70 m on a side) depending on site and weather conditions. Operational steps included: data collection; co-registration of individual scans to provide a single point cloud at each site; provision of a canopy height model (CHM) across each plot; extraction of individual tree point clouds over 20 cm DBH from each point cloud; and estimation of tree volume and per-tree AGB using species-specific wood density values available from partners or the literature.
Airborne laser scans were provided by CzechGlobe. The whole study area was surveyed with a Riegl LMS Q780 instrument, retrieving a dense point cloud. These points were georeferenced and analyzed to retrieve the three-dimensional structure of the forest; in particular:
1. LiDAR flight lines were merged for the Kermeter locality
2. the merged point cloud was divided into tiles
3. noise filtering was performed for each tile
4. classification into ground and non-ground points was performed
5. a Digital Terrain Model (DTM) was calculated by means of a TIN (Triangulated Irregular Network) of ground points
6. a Digital Surface Model (DSM) was calculated by means of a TIN of ground and non-ground points
7. a normalized Digital Surface Model (nDSM), also called canopy height model, was calculated as the difference between DSM and DTM
8. DTM, DSM and nDSM were exported in GeoTIFF format with a spatial resolution of 1.0 m.
These products were used for pre-processing the SAR data as well as for comparisons of biophysical parameters.
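A minimal sketch of steps 4-8 above is given below, using laspy, SciPy and rasterio; the file name, the 1.0 m grid, the linear (TIN-based) interpolation and the assumed CRS are placeholders rather than the processing chain actually used.

```python
# Illustrative DTM/DSM/CHM generation from a classified LiDAR tile.
import laspy
import numpy as np
import rasterio
from rasterio.transform import from_origin
from scipy.interpolate import LinearNDInterpolator

las = laspy.read("kermeter_tile.las")                          # hypothetical tile
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
ground = np.asarray(las.classification) == 2                   # ASPRS ground class

res = 1.0
gx = np.arange(x.min(), x.max(), res)
gy = np.arange(y.min(), y.max(), res)
gridx, gridy = np.meshgrid(gx, gy[::-1])                       # north-up grid

dtm = LinearNDInterpolator(np.c_[x[ground], y[ground]], z[ground])(gridx, gridy)
dsm = LinearNDInterpolator(np.c_[x, y], z)(gridx, gridy)       # all returns
chm = dsm - dtm                                                # normalized DSM / canopy height

transform = from_origin(gx.min(), gy.max(), res, res)
with rasterio.open("chm.tif", "w", driver="GTiff", height=chm.shape[0],
                   width=chm.shape[1], count=1, dtype="float32",
                   transform=transform, crs="EPSG:25832") as dst:  # assumed CRS
    dst.write(chm.astype("float32"), 1)
```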
The airborne campaign is supported by in-situ collection of the following forest parameters: tree species, height and diameter at breast height (DBH). Each parameter was measured at the single-tree level within up to 80 plots of 0.05 ha (circular plots with a radius of about 12.5 m); the distance between any two plots is about 250 m. The plots to be sampled will be a subset of those from the permanent inventory established by Wald und Holz in 2011. The plot distribution provides a representative sample of species diversity within the illuminated area. The datasets will be made freely available at https://earth.esa.int/web/guest/campaigns and the documentation shall be sufficient to support further processing and analysis by third parties.
Altogether, these data make up one of the most complete airborne SAR datasets ever collected over a forest site. After extensive data calibration, the three-dimensional reconstructions proved excellent and enabled scientific analyses. It is possible to analyze the impact of the vertical distribution of vegetation from TLS and UAV-LS on the radar signal. The relative contribution of ground backscattering with respect to the tree canopy could be assessed and explored by varying wavelength, polarization and acquisition mode (monostatic or bistatic). This allows future missions to be tuned to the specific biophysical quantity of interest. Also, new algorithms for carrying out correlation tomography were developed. Quantitative analyses relating the resulting estimates to the LiDAR data were produced. These figures allow constraints to be set for future space-borne missions in terms of the trade-off between complexity and quality of the estimates. The analysis of this huge dataset is currently ongoing.
Near real-time detection of deforestation using satellite imagery has become a fundamental tool for environmental law enforcement in many regions, and should become increasingly important as more countries commit to reducing or eliminating deforestation.
Most of these countries are located in the tropics. Their forests are of utmost value: besides holding the greatest reserve of terrestrial carbon on the planet, they regulate the global climate, host many different peoples and cultures, and have the highest biodiversity indices among all Earth’s biomes. At the same time, these forests also suffer the highest deforestation rates, as the global race for natural commodities such as meat, soy, rubber and palm oil grows exponentially.
Many satellite-based near real-time deforestation detection systems have been developed by governmental agencies, research institutes and NGOs to allow quick intervention on deforestation hot-spots. Some of these systems, like the DETER system in the Brazilian Amazon, have seen significant success, effectively reducing deforestation in a short period of time.
However, traditional near real-time detection systems such as DETER, which use optical imagery, face a major issue: cloud cover can block most satellite observations, making the systems unusable during many months of the year in tropical regions.
Aware of this, many deforesters have changed their modus operandi, slowly thinning the forest understory during the dry, cloud-free season and then, at the onset of the cloudy season, performing the definitive clear-cut, which remains undetected for many months.
SAR-based systems offer a solution to this challenge. SAR imagery is largely unaffected by weather conditions and provides a very reliable way to monitor forest canopy health throughout the year.
In this work, we review and compare four different operational near real-time monitoring systems based on data from ESA’s Sentinel-1 (S1) satellites. All of these methods rely on the analysis of the dense time series made available by the S1 constellation, using different techniques such as Bayesian inference (RADD), adaptive linear thresholding (DETER-SAR), shadow detection using the Radar Change Ratio indicator (TropiSCO) and z-score anomaly detection (ONFI). Through this comparison we aim to identify each method’s strengths and weaknesses and to point out potential synergies that might lead to global detection improvements.
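As a simplified illustration of the z-score idea underlying the ONFI-type detection (and not a reimplementation of any of the four systems), the sketch below builds a per-pixel baseline from a historical stack of Sentinel-1 backscatter and flags new acquisitions whose standardised anomaly drops below a threshold; the threshold and the synthetic data are placeholders.

```python
# Simplified per-pixel z-score anomaly detection on Sentinel-1 backscatter (dB).
import numpy as np

def zscore_alerts(history: np.ndarray, new_acq: np.ndarray, thr: float = -3.0) -> np.ndarray:
    """history: (time, rows, cols) gamma0 in dB; new_acq: (rows, cols)."""
    mu = np.nanmean(history, axis=0)
    sigma = np.nanstd(history, axis=0)
    z = (new_acq - mu) / np.where(sigma > 0, sigma, np.nan)
    return z < thr                                   # True where backscatter dropped anomalously

history = np.random.normal(-8.0, 0.6, size=(60, 500, 500))    # synthetic VH history
drop = np.random.choice([0.0, 3.0], size=(500, 500), p=[0.99, 0.01])
new_acq = history.mean(axis=0) - drop                          # a few "deforested" pixels
alerts = zscore_alerts(history, new_acq)
```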
To perform a fair and thorough comparison, we defined three different areas of interest (AOI) in the Amazon basin. Each of these AOIs faces different threats stemming from different drivers of deforestation, such as illegal gold mining, expansion of soy crops, timber logging and forest-to-pasture conversion for livestock. Once the AOIs were defined, we simulated a one-year cycle of deforestation monitoring using the same input products and the originally proposed workflow for each method. We assessed the comparison using standard measures such as overall accuracy, precision and recall against reference data provided by the Mapbiomas Alertas dataset of manually verified deforestation warnings for 2020 (https://alerta.mapbiomas.org/). Special attention will be paid to the response of each method to deforestation arising from different drivers.
Reiche, J et al. (2021). Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters, 16(2). https://doi.org/10.1088/1748-9326/abd0a8
Bouvet, A. et al. (2018). Use of the SAR Shadowing Effect for Deforestation Detection with Sentinel-1 Time Series. Remote Sensing, 10(8), 1250. https://doi.org/10.3390/rs10081250
Doblas, J. et al. (2020). Optimizing near real-time detection of deforestation on tropical rainforests using Sentinel-1 data. Remote Sensing, 12(23), 1–31. https://doi.org/10.3390/rs12233922
Lardeux, C. et al. (2020). Operational use of Sentinel-1 radar data for near real-time detection of deforestation in French Guiana. ESA Living Planet Symposium 2019.
The world’s tropical dry forests are under substantial pressure. Much of this forest has been converted to agriculture, and what remains is used in a variety of ways, many of which can lead to forest degradation. Whereas forest conversions are routinely assessed, forest degradation, typically associated with only partial canopy removal, remains less well quantified. In addition, linking mapped disturbance patterns to land-use indicators, specifically indicators of land-use actors, is needed to understand degradation agents and thus to identify starting points for interventions that lessen degradation.
In our work, we used dense Landsat time series to detect disturbances related to forest degradation across the entire Argentine Dry Chaco (about 489,000 km²). Using Google Earth Engine and the entire Landsat archive, we adopted a two-phase classification process to first map forest disturbances over a 30-year timespan and then characterize the disturbance agents for all identified disturbance events. To map disturbances, we applied LandTrendr temporal segmentation to a set of vegetation indices to derive a set of disturbance metrics. These then served as input for a Random Forests ensemble classification to produce a binary disturbance map. For our second step, the attribution of disturbances, we first identified patches of discrete disturbance events. We then used a range of spectral variables and disturbance-shape-related variables in a second Random Forests classification to attribute each patch to one of several disturbance agents: (a) partial deforestation, referring to agricultural expansion that was not completed in the observation period, (b) fire, (c) selective logging, (d) drought impact and (e) flooding. Both maps were validated using best-practice protocols, allowing us to provide unbiased area estimates of both the total disturbed area and the area attributed to specific disturbance agents.
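To sketch the attribution phase, the example below trains a Random Forests classifier on a hypothetical patch table of spectral and shape metrics; the five agent labels follow the description above, but the feature names and table layout are illustrative, not the study's actual variable set.

```python
# Schematic Random Forests attribution of disturbance patches to agents.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

patches = pd.read_csv("disturbance_patches.csv")    # hypothetical patch-level table
feature_cols = ["magnitude", "duration", "area_ha",
                "perimeter_area_ratio", "compactness"]   # spectral + shape metrics
X = patches[feature_cols]
y = patches["agent"]   # 'partial_deforestation', 'fire', 'selective_logging', 'drought', 'flooding'

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("cross-validated accuracy:", cross_val_score(rf, X, y, cv=5).mean())
rf.fit(X, y)
patches["agent_pred"] = rf.predict(X)
```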
Our best model produced a disturbance map with an overall accuracy of 79% and a very balanced error distribution. A total of 8% (24,877 ± 860 km²) of the remaining forest in the Argentine Dry Chaco was affected by forest disturbances between 1990 and 2017. We also found the disturbed area to vary strongly between years, with larger areas disturbed during drought years. Attributing disturbances to specific agents showed that partial deforestation had the highest prevalence, followed by selective logging and fire. Assessing the spatial distribution of disturbance agents in relation to a range of geographic variables that capture land-use actors or activities highlighted a considerable effect of agricultural fields, roads and smallholder homesteads on disturbance agent prevalence; in particular, logging and fires decreased with distance to roads while increasing with distance to smallholder homesteads up to 2 km. For the Chaco, a global deforestation hotspot, our analyses provide the first Landsat-based assessment of forest disturbance in remaining forests, highlighting the need to better consider such disturbances in assessments of carbon budgets and biodiversity change, as well as the role of land-use practices in affecting the remaining natural vegetation.
Long-term monitoring is key to discriminating between degraded and intact forests. Remote sensing detects changes in canopy coverage. Disturbances arise from logging, burning, disease, or insect infestation, among other causes. Abrupt changes over large areas are easier to detect; however, change can also be detected when it occurs slowly and progressively. Our case study focuses on a tropical montane cloud forest (TMCF) in central Veracruz. This forest is one of the most threatened ecosystems in the world and, relative to its area, the most biodiverse in Mexico. Most of this habitat has been lost due to human encroachment, and the remaining areas are at risk of disappearing if urban sprawl and the expansion of agriculture continue. The human population of this area increased by 30% in the period 2000-2020, from 0.9 to 1.2 million inhabitants. We detected this situation also in the shade coffee agroecosystem, where traditional management preserves the tree canopy of the TMCF.
We mapped small-scale logging using five hundred 16-day composites of Moderate Resolution Imaging Spectroradiometer vegetation index products (MOD13Q1) at a spatial resolution of 250 m and the Breaks For Additive Season and Trend (BFAST) change detection algorithm implemented in R. BFAST iteratively fits a segmented linear trend and a segmented seasonal model, decomposing the time series into three components: trend, seasonal, and noise. We extracted the trend of the time series for those pixels without a breakpoint, recording the value of the slope and its probability. Our findings revealed the areas whose greenness has gradually decreased, characterized by a negative trend in the vegetation index time series. The results showed that, over the last 20 years, 41.6% of the surface changed abruptly, 22% remained intact (no breakpoint and no increase or decrease), 25% showed a slight NDVI increase without a breakpoint, 9.7% increased considerably without a breakpoint (trend slope > 2), 1.4% decreased slightly without a breakpoint (72% of this area was located in urban environments or within 1.5 km of them) and 0.3% decreased considerably without a breakpoint (92% of this area was located in urban environments or within 1.5 km of them). In these areas we can observe deforestation processes associated with the preparation of land for sale as developable plots, irregular settlements, extraction of wood and firewood, and conversion to agricultural or livestock use. Our findings support the scientific community’s warnings about an eventual dramatic interruption of natural corridors due to urban development.
We appreciate the support of project RTI2018-096561-A-100 (Ministerio de Ciencia, Innovación y Universidades, Spain). Particular thanks go to Dr. Simon Mokondoko for providing the ground truth data of project NSF-CNH 1313804, to Dr. Miguel Cházaro for botanical support during field trips to identify plant species and vegetation types, and to ICTS-RBD for logistical support. The research stays in Mexico were carried out thanks to the collaboration agreement between Universidad de Sevilla and Universidad Veracruzana for jointly supervising a doctoral thesis and to the collaboration agreement between CSIC and Universidad Veracruzana. This work is part of the CSIC-PTI TELEDETECT activities. Finally, part of the R processing was carried out on a Copernicus RUS virtual machine.
Climate change is expected to worsen over the next century, a trend that is likely to have consequences for the health of forests worldwide. At the same time, forests are a stabilising factor for the climate. In this context, the new EU forest strategy for 2030 sets out a vision and concrete actions to improve the quantity and quality of EU forests and to adapt them to the new conditions, weather extremes and high uncertainty brought about by climate change. Choosing adequate tree species (native or non-native) is one tool for reaching this aim.
Douglas fir (Pseudotsuga menziesii) is one of the most common non-native tree species in European forests, where it covers more than 800 000 hectares. Different studies show that climate change will affect Douglas fir in both its natural and introduced ranges, especially through drought conditions, which can strongly impact growth, productivity and wood density. However, climate will also affect the functioning of the other organisms that make up the forest ecosystem. Thus, there is an increased risk of diseases and pathogens due to changes in their distribution and impacts.
In Belgium, the Douglas fir is one of the most important species from an economic point of view (the total area covered exceeds 23 000 ha). Spruces are much more widely planted in Belgium; however, Douglas fir has several characteristics appreciated by foresters: it grows very fast, no major pest has so far caused massive dieback in Europe (unlike spruce, which is attacked by Ips typographus), and it has a wider ecological niche than spruce, which makes it a better candidate to withstand more extreme weather conditions.
Unfortunately, the Douglas fir needle midge (Contarinia pseudotsugae s.l.) was first reported in Belgium, and in Western Europe as a whole, in 2015-2016. This small midge attacks the young needles (< 1 year old) in spring and causes needle drop during the following winter. In Belgium, the progression of the needle midge and the levels of attack on host trees are monitored in Wallonia by field observations at approximately 150 sites. Since the discovery of the species, the infestation level has increased steadily: in 2015 most Douglas fir stands had 1-10% of the current-year needles attacked; in 2018 the majority of stands had 30-50% of the current-year needles attacked, with higher levels of attack on young trees. In 2020 the infestation level increased further to an average of 50% of the current-year needles attacked, and the level of attack in mature trees was similar to that in younger stands. The combination of this new pest and attacks by several fungal pathogens (Phaeocryptopus gaeumannii, Sirococcus conigenus, …) contributes to a general degradation of Douglas fir health status in Belgium; however, no massive dieback has been observed so far.
Sentinel-2 (S2) data, available since 2016, represent an opportunity to monitor the evolution of Douglas fir health. The main objective of this study is to evaluate the potential of Sentinel-2 imagery to more accurately monitor the evolution of needle midge infestation in Douglas fir stands. The study area is Wallonia, the southern part of Belgium, where public forest covers about 237 000 ha. About 12 000 ha of public forest are covered by Douglas fir stands: 7 900 ha of monoculture and 4 100 ha of mixed-species stands.
Ground truth data were collected yearly in about 150 different Douglas fir parcels (monoculture or mixed) between 2015 and 2020. As a first step, only homogeneous stands were considered in our study.
Different vegetation indices (VI) and biophysical variables (Leaf Area Index, LAI) were calculated from S2 images at parcel level. An initial model using LAI was fitted to the 2016-2020 ground data to estimate the level of needle midge infestation. The LAI model was then validated against the 2021 ground data in order to quantify the quality of the estimates.
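A hedged sketch of this fitting and validation step is shown below: a simple linear regression of the observed infestation level on parcel-level LAI, trained on 2016-2020 and checked against the 2021 observations. The column names, file name and linear form are illustrative only.

```python
# Illustrative LAI-based model of needle midge infestation, validated on 2021 data.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

obs = pd.read_csv("douglas_fir_parcels.csv")        # hypothetical parcel-level table
train = obs[obs.year.between(2016, 2020)]
test = obs[obs.year == 2021]

model = LinearRegression().fit(train[["lai"]], train["infestation_pct"])
pred_2021 = model.predict(test[["lai"]])
print("MAE on 2021 validation data:",
      mean_absolute_error(test["infestation_pct"], pred_2021))
```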
The next step is to make complementary use of the aerial orthophotos (25 cm resolution) available yearly to identify homogeneous zones of Douglas fir within mixed stands. These homogeneous zones will be used to extract S2 VI and LAI time series.
A classification approach will also be tested, using a priori information from the forest inventory system available in Wallonia to extend the study to all Douglas fir stands in the region.
Since the early 1990s, European ash (Fraxinus excelsior L.) has been affected by a lethal disease caused by the ascomycete fungus Hymenoscyphus fraxineus. First observed in Poland, ash dieback now occurs in many parts of Europe. This emerging pathogen induces necrosis of leaf rachises, leaf wilting and shedding, bark necrosis and wood discoloration, as well as shoot, twig and branch dieback. Since 2009, a survey of ash dieback has been conducted in the Walloon Region (WR), and the first positive cases were identified in 2010.
Different studies have shown that the mortality rate depends on tree age (younger trees are more affected), landscape characteristics (higher mortality under humid soil conditions) and stand management (lower tree density decreases the mortality rate). So far, the recorded mortality rate reaches a maximum of 85-90% but never 100%. These results support the hypothesis that some ash trees are naturally tolerant to the disease. The identification of such trees using aerial and satellite imagery is the objective of this study.
The study area is the WR, the southern part of Belgium, where public forest covers about 237 000 ha. About 5 100 ha are ash stands (pure or mixed forest), pure stands often being small parcels (< 1 ha). Our study was carried out at parcel and pixel level using Sentinel-2 (S2) images and at tree level using aerial orthophotos (5 cm spatial resolution from UAV acquisitions over some specific parcels and 25 cm from the yearly aerial dataset covering the WR). Hyperspectral imagery, better suited to this application but not available for past years, will be used in the next step.
Two ash plantations (37-year-old trees in 2021) have been monitored since 2015, and the evolution of dieback was quantified in terms of leaf shedding, bark necrosis, and twig and branch dieback for all existing trees (644 and 663 living trees in 2021 in the two stands, respectively).
Different vegetation indices (VI) and a biophysical variable (Leaf Area Index, LAI) were calculated from S2 imagery, and their temporal evolution at parcel level was used to monitor the impact of ash dieback in the two stands.
Two UAV campaigns were also conducted over the two ash tree stands in 2021. The flights were performed with a DJI Matrice 210 equipped with a Micasense RedEdge-M. These campaigns allowed differences to be highlighted between the reflected spectra of asymptomatic trees (i.e. trees without crown defoliation) and ash trees displaying different levels of dieback.
Random Forest (RF) classification at pixel and object level was tested for the two ash-tree parcels using aerial orthophotos acquired in 2016. For the object-level classification, a preliminary step of automatic crown delineation and identification using a LiDAR digital height model was carried out.
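For illustration, the sketch below shows one common way to perform such crown delineation from a canopy height model (local maxima as tree tops followed by a marker-controlled watershed); the file name, the 2 m height cut-off and the search window are assumptions, not the parameters used in this study.

```python
# Illustrative crown delineation from a LiDAR canopy height model.
import numpy as np
import rasterio
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

with rasterio.open("chm_25cm.tif") as src:          # hypothetical height model (25 cm pixels)
    chm = src.read(1)

chm = np.where(chm < 2.0, 0.0, chm)                 # ignore vegetation below 2 m
tops = peak_local_max(chm, min_distance=12, threshold_abs=5.0)   # ~3 m window, trees > 5 m
markers = np.zeros_like(chm, dtype=np.int32)
markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)

crowns = watershed(-chm, markers, mask=chm > 0)     # label image: one id per crown
print("delineated crowns:", crowns.max())
```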
The production of epicormic shoots in response to the disease gives affected trees a relatively healthy appearance when seen from above. In reality, they can present necrosis at the base, and their health status can be completely different. The use of hyperspectral imagery could help to better discriminate truly healthy trees from trees that only look healthy due to the presence of epicormic shoots.
The next step is to use the calibrated model at pixel level on other ash-tree stands.
Satellite-based Earth Observation (EO) offers great potential to monitor the status of, and changes on, the Earth’s surface regularly and on a large scale. Over the past decades, the increasing number of EO sensors and more frequent revisits have led to the generation and availability of large datasets. The increasing amount of data, however, requires tools for fast data access, harmonization and processing. In openEO Platform (https://openeo.cloud/), a federated processing and data access structure is currently being developed that offers fast preprocessing and analysis of extensive EO data on a large scale. It relies on the openEO API and enables EO processing on several cloud back-ends by providing additional clients and intuitive interfaces. Based on Analysis Ready Data (ARD) and the processing capabilities of openEO Platform, diverse essential analytical building blocks are explored iteratively to foster the best implementation and concatenation of new and existing functions for large-scale application of EO data. Two such analytical building blocks in the openEO Platform project focus on forest areas, since forests cover up to one third of the world’s surface and are responsible for key environmental services such as natural risk and disaster prevention, carbon sequestration, water storage and biodiversity. Due to climate change, deforestation and forest degradation, forested areas are under pressure, and processes and tools are urgently needed to map and monitor changes consistently, continuously, extensively and through performant functions. The two forest-related analytical building blocks currently being implemented are near real-time forest change detection and retrieval of forest fractional canopy cover (FCC).
In the forest change detection algorithm, pixel-based time-series models are fitted to Sentinel-1 and Sentinel-2 ARD for a historical reference period, from September 2016 until the end of October 2018. The time-series model consists of a harmonic model accounting for seasonality, with a complexity depending on data availability, forest type and the inherent noise in the data. Predicting each new Sentinel-1 and Sentinel-2 acquisition based on the fitted models allows changes to be detected as deviations from the prediction. We apply the workflow to the European Alps to map the impact of the Vaia storm in 2018. Afterwards, the capabilities of openEO Platform are extended with Random Forest regression modelling. We use very high-resolution (VHR) imagery from PlanetScope to calculate the fractional canopy cover within the spatial resolution of the medium-resolution sensor information from Sentinel-1 and Sentinel-2. The resulting regression models are used to calculate the forest's fractional canopy cover over central Europe directly on the platform back-ends with dedicated openEO processes.
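As a conceptual sketch of the per-pixel harmonic model (not the openEO process implementation), the example below fits an intercept plus one annual sine/cosine pair by least squares over a reference period and flags a new observation when it falls more than k residual standard deviations below the prediction; the data are synthetic.

```python
# Per-pixel harmonic model fit and deviation-based change flagging (synthetic data).
import numpy as np

def fit_harmonic(t_days: np.ndarray, y: np.ndarray):
    """t_days: acquisition dates as days since an epoch; y: reflectance or backscatter."""
    w = 2.0 * np.pi / 365.25
    A = np.column_stack([np.ones_like(t_days), np.cos(w * t_days), np.sin(w * t_days)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    sigma = np.std(y - A @ coeffs)
    return coeffs, sigma

def is_change(t_new: float, y_new: float, coeffs, sigma, k: float = 3.0) -> bool:
    w = 2.0 * np.pi / 365.25
    y_hat = coeffs @ np.array([1.0, np.cos(w * t_new), np.sin(w * t_new)])
    return (y_hat - y_new) > k * sigma               # e.g. NDVI drops far below the prediction

# synthetic reference series for one pixel over roughly two years
t = np.arange(0, 780, 10.0)
y = 0.7 + 0.15 * np.cos(2 * np.pi * t / 365.25) + np.random.normal(0, 0.02, t.size)
coeffs, sigma = fit_harmonic(t, y)
print(is_change(800.0, 0.25, coeffs, sigma))         # storm-damaged pixel -> True
```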
As a result of the forest cover change analysis, two new openEO processes were incorporated into the federated cloud environment. First, the user can fit an arbitrary function to a pixel time series. In a second step, the function can be used to predict the corresponding values for any day of the year based on the pre-computed model. The resulting forest change maps are currently being validated with reference data from local authorities. Two further processes are currently being implemented for the retrieval of fractional canopy cover. The openEO processes will be extended with Random Forest regression models in order to predict the FCC on a larger scale with the ARD available in the back-ends. The results are two functions: one to train and construct the model, and one to apply a Random Forest model to a set of predictor rasters stored in multidimensional data cubes. Both analytical building blocks have been implemented as open source and are fully reproducible. In this way they can serve as templates to implement other applications on different study areas or be further tailored to the requirements of each individual application.
In 2021, the study by Qin et al. shed new light on forest degradation. The authors found that forest degradation contributed about three times more (73%) to the loss of above-ground biomass (AGB) than deforestation (27%), insofar as the area of degradation exceeds that of deforestation in the Brazilian Amazon. This highlights the under-recognized role of forest degradation, which encompasses everything that damages the forest without completely cutting it down, such as selective cutting, firewood collection, fires, or drought: damage that is less easily detectable than areas razed to the ground. Monitoring degradation is therefore at least as important as monitoring deforestation, but it is also more difficult. Currently, detection of degraded forest relies on manual delineation, using a wide variety of contextual information such as location, season of image acquisition and the neighboring landscape, in addition to the pixel values. Contextualization offers considerable extra information, particularly for such challenging classes. However, the time-consuming task of photo-interpretation is poorly suited to large-scale LULC projects.
Our objective is to map degraded forests in 2021 in Guinea (West Africa) automatically, using open-source tools as much as possible to promote their dissemination to users, namely the local Guinean photo-interpretation team. In particular, we have implemented deep learning techniques because they consider the pixel neighborhood in the classification process, like a photo-interpreter who takes context into account. Such techniques have only rarely been implemented for forest degradation detection (Safonova et al., 2019; Wagner et al., 2020).
This study emerged within the framework of the Agro-Ecological Zoning of Guinea (ZAEG) project, funded by AFD (French Development Agency), in which a 2021 Land Use Land Cover (LULC) map of the Guinean territory was requested by the Guinean Ministry of Agriculture. Our study area is in the forest massif of Ziama (9°17'12.75"W, 8°22'18.91"N), approved as a biosphere reserve by UNESCO in 1980 and threatened by selective cutting.
Our work, which is built on Sentinel-2 images and convolutional neural networks (CNN), consists of two steps: creation of a training dataset, and implementation and testing of the neural networks.
The first step of the study was to create a large training dataset. Currently, labeled training datasets are scarce and take a long time to create. Here, we built a training dataset of labeled degraded forests, so that the model can learn to identify degraded forests in new images it has not seen before. For this purpose, 16 Sentinel-2 images acquired between 2015 and 2021 were selected, covering Ziama and another protected massif, Diecke. Images acquired in 2021 were set aside for the evaluation of the models. Degraded forests were delineated manually using the QGIS software. The addition of temporal tiles provides the model with representations of degraded and dense forests under different radiometric conditions, for better generalization. In total, 1.5 million ha were labelled by photo-interpretation, supported by ground control points in the Ziama massif and the help of local foresters.
The second step is to build a deep learning model based on the U-Net architecture (Ronneberger et al., 2015). Analysis of temporal profiles of Sentinel-2 data shows that derived biophysical attributes of the canopy, such as the Moisture Stress Index (MSI), Canopy Water Content (CWC) and Leaf Area Index (LAI), differ with the level of forest degradation. Thus, the CNN is built and applied on the stack of the Sentinel-2 bands and these additional canopy attributes as input data. U-Net produced very good results in terms of map rendering and accuracy statistics (kappa > 90%) on newly acquired images from 2021 (unused for model training). In addition, U-Net successfully detects degraded forests in the Nimba massif, on which it was not trained (kappa > 85%).
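As an illustration of this setup, the following is a minimal PyTorch sketch of a small U-Net taking a stack of Sentinel-2 bands plus the three canopy attributes as input channels; channel counts, network depth and the number of classes are illustrative assumptions, not the configuration used in this study.

```python
# Minimal U-Net sketch (Ronneberger et al., 2015) for semantic segmentation of
# a stack of Sentinel-2 bands plus canopy attributes (MSI, CWC, LAI).
# Channel counts and class number are illustrative placeholders.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    def __init__(self, in_channels=13, n_classes=4):  # e.g. 10 S2 bands + 3 attributes
        super().__init__()
        self.enc1 = double_conv(in_channels, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)                                    # full resolution
        e2 = self.enc2(self.pool(e1))                        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

model = MiniUNet()
logits = model(torch.randn(1, 13, 256, 256))  # (batch, classes, H, W)
```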
Our results show that we are able to detect Guinean degraded forests with a deep learning method that i) does not require the effort of photo-interpreting new data (neither for a new study area nor for a different year), and ii) is operational and transferable to a LULC project. We solved the multi-class classification through semantic segmentation with a dataset built and labeled for our task, showing that combining satellite expertise and artificial intelligence can leverage results for scientists and for operational LULC projects.
References
Qin, Y., Xiao, X., Wigneron, J.-P., Ciais, P., Brandt, M., Fan, L., Li, X., Crowell, S., Wu, X., Doughty, R., Zhang, Y., Liu, F., Sitch, S., Moore, B., 2021. Carbon loss from forest degradation exceeds that from deforestation in the Brazilian Amazon. Nat. Clim. Chang. 11, 442–448. https://doi.org/10.1038/s41558-021-01026-5
Ronneberger, O., Fischer, P., Brox, T., 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation, in: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (Eds.), Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Springer International Publishing, Cham, pp. 234–241. https://doi.org/10.1007/978-3-319-24574-4_28
Safonova, A., Tabik, S., Alcaraz-Segura, D., Rubtsov, A., Maglinets, Y., Herrera, F., 2019. Detection of Fir Trees (Abies sibirica) Damaged by the Bark Beetle in Unmanned Aerial Vehicle Images with Deep Learning. Remote Sensing 11, 643. https://doi.org/10.3390/rs11060643
Wagner, F.H., Sanchez, A., Aidar, M.P.M., Rochelle, A.L.C., Tarabalka, Y., Fonseca, M.G., Phillips, O.L., Gloor, E., Aragão, L.E.O.C., 2020. Mapping Atlantic rainforest degradation and regeneration history with indicator species using convolutional network. PLoS ONE 15, e0229448. https://doi.org/10.1371/journal.pone.0229448
1. INTRODUCTION
The Convention on Biological Diversity identified the Miombo Woodlands (dominated by Brachystegia and Julbernardia species) as a tipping point of global importance, where changes in ecosystem functioning are significant enough to cause large and irreversible changes for biodiversity, ecosystem services and human well-being at various scales. One important mechanism in this context is the region’s functioning as a “conveyor belt” transporting humidity from the Congo rainforests towards the southern arid savannas (Leadley et al. 2010), such that change processes are likely to impact continental climate dynamics as well. In recent years different transformation processes have started to convert and fragment the Miombo in many regions, while others remain intact. These include the expansion of subsistence agriculture, the establishment of commercial agriculture schemes, urban expansion and investments in transportation infrastructure, partially related to internal and international migration movements. Localized remote sensing-based case studies have been documenting these changes (Mayes et al. 2015; Röder et al. 2015; Schneibel et al. 2016), and global studies suggest that in the global rush for land, acquisition processes might lead to rapidly advancing agro-industrial crop production frontiers similar to comparable regions in South America (Gasparri et al. 2015). Thus, continuous monitoring and labelling of change processes is of crucial importance to support land management in coping with transformation dynamics while safeguarding the integrity of precious ecosystems. However, the size of the ecoregion, roughly 3.78 million km2, constrains analyses and, where advanced time series analysis techniques are targeted, involves massive computation efforts. We therefore used the Google Earth Engine™ for data processing and employed MODIS EVI data to evaluate the potential of analyzing woodland deforestation and degradation based on time series segmentation and breakpoint detection, with subsequent analysis of underlying drivers.
2. METHODS
2.1. Data
We focused our analysis on the extended Miombo ecoregion as represented in the WWF Terrestrial Ecoregions of the World. To facilitate processing, we computed monthly aggregated EVI values for the period 2000–2018 based on the version 6 16-day EVI product (MOD13Q1.006). This was preferred over higher-resolution Landsat data due to severe data gaps during the rainy season, especially in the northern sections of the target area. In many regions, lakes and rivers show strong periodicity over several years; therefore we did not apply any corresponding flags and included these areas in the first-level analyses. The MODIS burned area product (MCD64A1), the ESA CCI Land Cover product and WorldClim 2 were used to facilitate data interpretation.
2.2. Methods
We implemented a time series analysis method similar to the Continuous Change Detection and Classification (CCDC) method developed by Zhu & Woodcock (Zhu and Woodcock 2014). We modelled the time series in a reference period using ordinary least squares regression of the first-order harmonic derived from the discrete Fourier transform of the VI time series (Moody and Johnson 2001; Udelhoven 2011). Coefficients for intra- and inter-annual change were employed to predict the VI for the next date. We then evaluated whether the next observation could be reliably predicted using this set of coefficients or whether the prediction error exceeded a preset tolerance based on the model RMSE. If this was the case in a set number of instances, the corresponding breakpoint was stored and the procedure repeated for the time series following the breakpoint. Otherwise the reference window was moved along the time line. Different settings for reference period width, RMSE factor of the prediction and number of required exceedances were tested in order to retrieve consistent breakpoint patterns while avoiding noise, and phase and magnitude filtering was applied to produce the final breakpoint database. Then different phenology-related metrics (mean EVI, trend, amplitude, phase, magnitude of change and RMSE) were calculated for each segment of the time series.
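For illustration, a condensed sketch of this breakpoint logic for a single pixel time series is given below; the window length, RMSE factor and number of required exceedances are placeholder values, not the settings finally adopted in this study.

```python
# Illustrative sketch of the breakpoint logic for one monthly EVI pixel series:
# fit the first-order harmonic on a moving reference window, predict the
# following observations and store a breakpoint once a set number of
# consecutive predictions exceed the RMSE-based tolerance.
import numpy as np

def harmonic_design(t):
    w = 2 * np.pi  # annual cycle, t in decimal years
    return np.column_stack([np.ones_like(t), t, np.cos(w * t), np.sin(w * t)])

def detect_breakpoints(t, evi, window=36, rmse_factor=3.0, n_exceed=3):
    breakpoints, start = [], 0
    while start + window < len(evi):
        X = harmonic_design(t[start:start + window])
        coef, *_ = np.linalg.lstsq(X, evi[start:start + window], rcond=None)
        rmse = np.sqrt(np.mean((X @ coef - evi[start:start + window]) ** 2))
        # check the observations following the reference window
        future = slice(start + window, min(start + window + n_exceed, len(evi)))
        resid = np.abs(harmonic_design(t[future]) @ coef - evi[future])
        if len(resid) == n_exceed and np.all(resid > rmse_factor * rmse):
            bp = start + window
            breakpoints.append(bp)
            start = bp          # repeat the procedure after the breakpoint
        else:
            start += 1          # move the reference window along the timeline
    return breakpoints
```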
3. RESULTS
The time series analysis yielded various temporal metrics that describe the overall development, a variable number of breakpoints for each pixel, and a variable set of metrics for each pixel depending on the number of time series segments. Concomitant with the length of the observation period, most regions are dominated by a single breakpoint, while two or more breakpoints are rare. Hot spots of change are mainly found in the central Miombo regions in Angola and Zambia; in particular in the former this is largely owed to a significant expansion of subsistence agriculture following the end of the civil war in 2000 with subsequent repatriation of refugees and investments in infrastructure.
In Zambia and the Democratic Republic of the Congo, mining operations, timber extraction and the establishment of large irrigation schemes were found to be major drivers of change. The threat of widespread conversion of woodlands is particularly evident in southern Malawi and most of Mozambique, where consistent negative trends occur. These are especially significant in the South (including bordering regions in South Africa), where extensive large-scale irrigation schemes were recently established.
Further information products include the time and magnitude of maximum change, which provide insights into spatio-temporal patterns that may differ between the continual expansion of subsistence agriculture and simultaneous clearings for the establishment of large agricultural schemes, mines, etc. In addition, information on the time of occurrence may be related to underlying drivers such as policy shifts or the establishment of infrastructure that facilitates access to previously remote hinterland areas.
4. DISCUSSION
Validation of time series based products across large areas is hardly possible in a quantitative sense. We selected various reference areas that are representative of key processes and jointly evaluated the phenological properties of the last segment, trend and breakpoint parameters with temporal profiles and corresponding Landsat images. We found these to confirm the credibility of our approach, which was further corroborated by local case studies assessing time series at higher resolution (Cabral et al. 2011; Röder et al. 2015; Schneibel et al. 2017a; Schneibel et al. 2017b).
In comparison to the original CCDC algorithm we used a single VI band as opposed to various reflectance bands, which was found to be adequate for vegetation disturbance mapping and reduces computation time. The fact that a minimum period is required to fit the initial model limits the detectability of breakpoints in the early period, while the moving window approach prevented overestimation of breakpoints due to minor change processes. The Fourier approach was found to be efficient in representing phenological dynamics; however, it may not be ideal in cases where double vegetation peaks are dominant or the underlying data are highly asymmetric. Ideally, the approach should be implemented based on Landsat rather than MODIS data to facilitate consideration of longer periods and a higher spatial resolution.
5. REFERENCES
Cabral, A., Vasconcelos, M., Oom, D., & Sardinha, R. (2011). Spatial dynamics and quantification of deforestation in the central-plateau woodlands of Angola (1990–2009). Applied Geography, 31, 1185-1193
Gasparri, N.I., Kuemmerle, T., Meyfroidt, P., le Polain de Waroux, Y., & Kreft, H. (2015). The Emerging Soybean Production Frontier in Southern Africa: Conservation Challenges and the Role of South-South Telecouplings. Conservation Letters
Leadley, P., Pereira, H.M., Alkemade, R., Fernandez-Manjarrés, J.F., & Walpole, M.J. (2010). Biodiversity scenarios: Projections of 21st century change in biodiversity and associated ecosystem services. Montréal, Québec, Canada: Secretariat of the Convention on Biological Diversity
Mayes, M.T., Mustard, J.F., & Melillo, J.M. (2015). Forest cover change in Miombo Woodlands: modeling land cover of African dry tropical forests with linear spectral mixture analysis. Remote Sensing of Environment, 165, 203-215
Moody, A., & Johnson, D.M. (2001). Land-Surface Phenologies from AVHRR Using the Discrete Fourier Transform. Remote Sensing of Environment, 75, 305-323
Röder, A., Pröpper, M., Stellmes, M., Schneibel, A., & Hill, J. (2015). Assessing urban growth and rural land use transformations in a cross-border situation in Northern Namibia and Southern Angola. Land Use Policy, 42, 340-354
Schneibel, A., Frantz, D., Röder, A., Stellmes, M., Fischer, K., & Hill, J. (2017a). Using Annual Landsat Time Series for the Detection of Dry Forest Degradation Processes in South-Central Angola. Remote Sensing, 9, 1-14
Schneibel, A., Stellmes, M., Röder, A., Finckh, M., Revermann, R., Frantz, D., & Hill, J. (2016). Evaluating the trade-off between food and timber resulting from the conversion of Miombo forests to agricultural land in Angola using multi-temporal Landsat data. Science of the Total Environment, in print
Schneibel, A., Stellmes, M., Röder, A., Frantz, D., Kowalski, B., Haß, E., & Hill, J. (2017b). Assessment of spatio-temporal changes of smallholder cultivation patterns in the Angolan Miombo belt using segmentation of Landsat time series. Remote Sensing of Environment, 195, 118-129
Udelhoven, T. (2011). TimeStats: A Software Tool for the Retrieval of Temporal Patterns From Global Satellite Archives. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 4, 310-317
Zhu, Z., & Woodcock, C.E. (2014). Continuous change detection and classification of land cover using all available Landsat data. Remote Sensing of Environment, 144, 152-171
Deforestation threatens forests worldwide and is one of the land cover transformations with the most severe global impacts on climate, biodiversity, and societies. Sustainable management and protection of forests remains one of the most important challenges of today. As such, forests are the subject of several international treaties; for instance, their protection and preservation are addressed in the Sustainable Development Goals (SDGs) established by the United Nations.
In Madagascar, deforestation rates have been particularly high in the last 50 years, even though the country is one of the highest priorities for biodiversity conservation for the global community. Accurate forest change maps are the basis for measures that counteract deforestation and promote sustainable forest protection. However, accurate and locally tuned forest change maps are not currently available for Madagascar's northern regions.
Against this background, this study investigated how the forest cover in the Manongarivo Special Reserve, a protected area in a largely unexplored region of northern Madagascar, has developed from 1990 to 2020. With the implementation of LandTrendr (a spectral-temporal segmentation algorithm for disturbance and recovery detection) in Google Earth Engine, Landsat time series were analyzed, and forest change maps created. Spatial and temporal patterns of forest change were identified and compared to other protected areas in the region. The accuracy of the results was then compared to the Global Forest Change Dataset (GFCD) and validated using the standardized sampling protocol based on high resolution Google Earth imagery. The second objective of the study was to quantify the influence of factors that were driving deforestation in the study area using spatial logistic regression modeling.
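As an illustration of the second objective, a minimal sketch of a logistic regression of deforestation occurrence on candidate drivers is given below; the predictor names and the sampling of the response are hypothetical placeholders, and the study's actual spatial model specification may differ.

```python
# Sketch of a driver analysis via logistic regression, assuming per-pixel
# predictors and a binary deforestation label have already been sampled.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: (n_samples, 3) with elevation, slope and distance to infrastructure
# y: 1 where forest loss occurred between 1990 and 2020, 0 otherwise
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Standardized coefficients indicate the relative influence of each driver.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, c in zip(["elevation", "slope", "dist_infrastructure"], coefs):
    print(f"{name}: {c:+.3f}")
```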
The results show a striking increase in forest losses, in particular within the past 10 years. Deforestation threatens the study area mostly from the outer edges. The accuracy assessments suggest that the maps produced here are more accurate than the GFCD, with accuracies of 96% for this study and 85% for the GFCD. The most important drivers of deforestation identified were the elevation and slope of the terrain as well as the distance to infrastructure. The results can be used by policymakers to initiate new measures for tackling deforestation more successfully.
Forests are home to many of the world's species and provide key services to humanity. Currently most of the world’s remaining tropical forest is degraded or consists of secondary forest located inside human-modified landscapes. To understand the role that forests play we need to understand the dynamics of change in forested areas, not only focusing on deforestation but also on processes of regrowth and secondary succession. Remote sensing is now widely recognized as an invaluable tool for monitoring forest change, of which Global Forest Watch is probably the best-known example. However, many current algorithms tend to detect only one disturbance and are thus unsuitable to address the need for better information on secondary forest succession and for assessing forest restoration activities. The AVOCADO (Anomaly Vegetation Change Detection) algorithm is a new forest change algorithm that detects forest disturbance and regrowth in a continuous way across a diversity of forest types (e.g. dry and seasonal forests). The algorithm makes use of the flexibility of kernel density estimation to create a forest reference phenology, taking into account all historical phenological variations of the forest rather than smoothing these out by curve fitting. The AVOCADO algorithm allows the detection of anomalies together with an accompanying likelihood measure. We tested the performance of the algorithm in three contrasting sites, ranging from tropical rainforest (Peru), to moist tropical forest (Côte d'Ivoire), to dry miombo forest ecosystems (Tanzania). Each site also had different data densities and forest change types (e.g. shifting cultivation in Peru and selective logging in Tanzania). Our approach produced generally high overall accuracies (> 90%), even in the more challenging dry forest ecosystems. We showed that the algorithm is capable of capturing both small-scale (gradual) changes (e.g. selective logging) and the multiple changes associated with shifting cultivation. The performance of the algorithm has been shown at regional scale, but for larger-scale studies a careful assessment of the different forest types is essential. From the output change maps, spatio-temporal trends in the proportions of intact forest, secondary forest and non-forest can be derived. This information is useful, for example, for studies of secondary forest or for assessing suitable forest restoration areas. This open-source, user-friendly algorithm can be used with different satellite sensors (e.g. Sentinel, Landsat, or a combination of sensors). An online tutorial is available at: http://www.pucv.cl/uuaa/labgrs/proyectos/AVOCADO.
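To illustrate the kernel density idea, a conceptual sketch is given below; it is a strong simplification of AVOCADO, and the variable names and likelihood threshold are assumptions made for the sketch.

```python
# Conceptual illustration of the kernel-density idea: build a reference density
# of (day-of-year, NDVI) pairs from the undisturbed history of a pixel and
# score new observations by their likelihood under it. This is a simplified
# sketch, not the actual AVOCADO implementation.
import numpy as np
from scipy.stats import gaussian_kde

def reference_density(doy_ref, ndvi_ref):
    """2-D KDE over day-of-year and NDVI from the historical reference period."""
    return gaussian_kde(np.vstack([doy_ref, ndvi_ref]))

def anomaly_likelihood(kde, doy_new, ndvi_new, threshold=1e-3):
    """Low density under the reference phenology indicates a possible anomaly."""
    density = kde(np.vstack([np.atleast_1d(doy_new), np.atleast_1d(ndvi_new)]))
    return density, density < threshold
```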
Deforestation leads to fragmentation of the remaining forest patches. It has been shown that fragmentation has negative impacts at forest edges, such as alterations in microclimate, increased air temperature, increased tree mortality, and changes in vegetation structure and productivity (Laurance et al., 1997; Laurance et al., 1998; Murcia, 1995; Mendonça et al., 2015). However, most edge effect studies focus on temperate and tropical forests, such as the Amazon. These studies argue that in tropical biomes, fragmentation alters biomass and carbon stock patterns (Pütz et al., 2014; Coelho et al., 2020) at the forest border, mainly within the first 300 meters (Nascimento and Laurance, 2004). This study evaluated this aspect, supported by field data, in the transition area between the Cerrado and Amazon biomes known as the “arc of deforestation”, which is considered a hotspot for biodiversity and accounts for 85% of the areas deforested between 1996 and 2005 (Macedo et al., 2012; INPE, 2011). In these transition areas, the laws that support the protection of forests are weaker and poorly enforced. One example is the environmental legislation that defines the amount of natural vegetation that has to be preserved when new land is cleared for farmland (80% in the Amazon, 20% in the Cerrado biome) (Soares-Filho et al., 2014). Furthermore, the Cerrado biome has a major influence on the water resources of large basins, such as the Amazon. We evaluated the effect of forest fragmentation on vegetation productivity based on a long-term NDVI time series from Landsat data in the transitional area between the Cerrado and Amazon biomes in the state of Mato Grosso, Brazil. Two trend analysis approaches were applied to investigate possible edge effects using the NDVI from a freely available 36-year time series of satellite images (1984 to 2020). Our findings showed a significant positive relationship between NDVI and distance from the nearest edge for the larger fragments: the closer the vegetation was to the edge, the lower its NDVI value. For the smaller fragments, most years showed no or only a marginally significant relationship between distance and NDVI. There was a quick recovery of NDVI after the deforestation impact in all fragments, but this does not mean that the properties of the vegetation were re-established proportionally. Furthermore, long-term edge effect patterns reported for the Amazon biome cannot be extrapolated to the Cerrado, most probably due to the higher resilience of the tree species found in this transition zone between the Cerrado and Amazon biomes. Therefore, more detailed studies are needed to understand the long-term dynamics of edge effects in the different biomes, including changes in biomass and carbon. We argue that NDVI can be used to identify edge effects in the ecotone region of the Cerrado and Amazon biomes.
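For illustration, a minimal sketch of the edge-effect test for a single fragment and year is given below; the arrays and significance level are placeholders, and the study's two trend analysis approaches are more elaborate than this single regression.

```python
# Simple sketch: regress NDVI against distance to the nearest forest edge and
# check whether the slope is significantly positive. Variables are illustrative.
import numpy as np
from scipy.stats import linregress

# ndvi: per-pixel NDVI values inside one fragment for a given year
# dist_to_edge: distance (m) of each pixel to the nearest fragment edge
result = linregress(dist_to_edge, ndvi)
print(f"slope = {result.slope:.2e} per m, p-value = {result.pvalue:.3f}")
edge_effect_present = result.slope > 0 and result.pvalue < 0.05
```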
Coelho, A.J.P.; Magnago, L.F.S.; Matos, F.A.R.; Mota, N.M.; Diniz, E.S.; Meira-Neto, J.A.A. (2020) Effects of anthropogenic disturbances on biodiversity and biomass stock of Cerrado, the Brazilian savanna. Biodiversity and Conservation. https://doi.org/10.1007/s10531-020-02013-6
INPE (2011): Program for the Estimation of Amazon Deforestation (PRODES). INPE. São José dos Campos.
Laurance, W. F.; Ferreira L.V.; Rankin De Merona J.M.; Laurance S.G.; Hutchings R.W.; Lovejoy T.E (1997): Biomass Collapse in Amazonian Forest Fragments. In Science 278 (5340), pp. 1117–1118. DOI:10.1126/science.278.5340.1117
Laurance, William F.; Ferreira, Leandro V.; Merona, Judy M. Rankin-De; Laurance, Susan G.; Hutchings, Roger W.; Lovejoy, Thomas E. (1998): Effects of Forest Fragmentation on Recruitment Patterns in Amazonian Tree Communities. In Conservation Biology 12 (2), pp. 460–464. DOI:10.1111/j.1523-1739.1998.97175.x
Macedo, Marcia N.; DeFries, Ruth S.; Morton, Douglas C.; Stickler, Claudia M.; Galford, Gillian L.; Shimabukuro, Yosio E. (2012): Decoupling of deforestation and soy production in the southern Amazon during the late 2000s. In Proc. Natl. Acad. Sci. U. S. A. 109 (4), pp. 1341–1346. DOI: 10.1073/pnas.1111374109.
Mendonça, Augusto H.; Russo, Cibele; Melo, Antônio C.G.; Durigan, Giselda (2015): Edge effects in savanna fragments: a case study in the cerrado. In Plant Ecology & Diversity 8 (4), pp. 493–503. DOI:10.1080/17550874.2015.1014068.
Murcia, Carolina (1995): Edge effects in fragmented forests: implications for conservation. In Trends in Ecology & Evolution 10 (2), pp. 58–62. DOI: 10.1016/S0169-5347(00)88977-6.
Nascimento, Henrique E. M.; Laurance, William F. (2004): Biomass dynamics in Amazonian forest fragments. In Ecological Applications 14 (sp4), pp. 127–138. DOI: 10.1890/01-6003
Pütz, Sandro; Groeneveld, Jürgen; Henle, Klaus; Knogge, Christoph; Martensen, Alexandre Camargo; Metz, Markus et al. (2014): Long-term carbon loss in fragmented Neotropical forests. In Nature communications 5, p. 5037. DOI: 10.1038/ncomms6037.
Soares-Filho, Britaldo; Rajão, Raoni; Macedo, Marcia; Carneiro, Arnaldo; Costa, William; Coe, Michael et al. (2014): Land use. Cracking Brazil's Forest Code. In Science (New York, N.Y.) 344 (6182), pp. 363–364. DOI: 10.1126/science.1246663.
The ESA Forest Carbon Monitoring (https://www.forestcarbonplatform.org/) project is developing Earth Observation (EO) based, user-centric approaches for forest carbon monitoring. To respond to the needs of various stakeholders, forest carbon accounting based on forest inventorying requires precise and timely estimation of forest variables at various spatial levels accompanied by verifiable uncertainty information (Herold et al., 2019; Miettinen et al., 2021). A common approach in forest mapping is to combine reference data with auxiliary EO datasets for use with model-assisted or model-based inferential approaches, depending on availability of probabilistic samples of reference data (McRoberts et al., 2020, Ståhl et al., 2016).
In this presentation, we report outcomes of algorithm selection and trade-off analyses in the project. The primary focus of these analyses is to compare performances of several combinations of EO data and models/methods in forest structural variable prediction, and forest change mapping to support carbon stock estimation. Forest variables include growing stock volume, above ground biomass, tree height, diameter at breast height and tree species. EO datasets include satellite optical images (ESA Sentinel-2) and synthetic aperture radar (SAR) data acquired by ESA’s Sentinel-1, JAXA’s ALOS-2 PALSAR-2 and DLR’s TanDEM-X sensors.
The most suitable approaches, in terms of prediction accuracy with respect to combinations of EO data and forest variable prediction models, will be chosen for further implementation on the Forestry Thematic Exploitation Platform (https://www.f-tep.com) and future demonstration over dedicated areas. Several algorithms to be tested are already available on the Platform. Benchmarking will be performed over seven test sites in Europe located in Finland, Ireland, Romania, Spain and Switzerland, and over a tropical forest area in Peru.
The candidate methods include well-known machine learning and non-parametric approaches such as support vector machines, random forests and k-NN methods (Stelmaszczuk-Górska et al., 2018; Esteban et al., 2020; Antropov et al., 2017). Physics-based SAR and interferometric SAR models (e.g., Santoro et al. 2011; Kugler et al., 2014; Olesk et al., 2016) and an in-house, semi-supervised method (Häme et al., 2001, 2013) will be examined as well. Notably, several of the candidate approaches are multivariate in that they can predict multiple forest structural response variables simultaneously. The complementarity of various optical and SAR datasets for forest mapping will be studied. Uncertainty estimation will be done using suitable closed-form parametric and non-parametric approaches in forest variable mapping. For forest disturbance mapping, post-stratified estimators for area and area change parameters and associated uncertainties (Cochran, 1977), as demonstrated by Olofsson et al. (2014), will be used for accuracy assessment (Tomppo et al., 2021).
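As an example of such a multivariate candidate method, a minimal k-NN sketch predicting several structural variables simultaneously is given below; the feature and reference arrays, the number of neighbours and the variable set are illustrative assumptions, not the benchmarked configurations.

```python
# Minimal sketch of a multivariate k-NN regressor predicting several forest
# structural variables (e.g. growing stock volume, AGB, height, DBH) at once
# from stacked Sentinel-1/Sentinel-2 features at field plot locations.
from sklearn.neighbors import KNeighborsRegressor

# X: (n_plots, n_features) EO features extracted at the field plots
# Y: (n_plots, 4) reference volume, AGB, height and DBH from the plots
knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(X, Y)

# X_map: (n_pixels, n_features) features for the mapping area
predictions = knn.predict(X_map)  # predicts all response variables per pixel
```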
References
Antropov, O., Rauste, Y., Häme, T., Praks, J. (2017) Polarimetric ALOS PALSAR time series in mapping biomass of boreal forests. Remote Sensing, 9(10):999.
Cochran, W.G. Sampling Techniques, 3rd ed.; Wiley: New York, NY, USA, 1977.
Esteban, J., McRoberts, R.E., Fernández-Landa, A. et al. (2020) A model-based volume estimator that accounts for both land cover misclassification and model prediction uncertainty. Remote Sensing 12, 3360.
Häme, T., Kilpi, J., Ahola et al. (2013). Improved mapping of tropical forests with optical and SAR imagery, Part I: Forest cover and accuracy assessment using multi-resolution data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6(1), 74-91.
Häme, T., Stenberg, P., Andersson, K., et al. (2001) AVHRR-based forest proportion map of the Pan-European area. Remote Sensing of Environment 77:76-91.
Herold, M., Carter, S., Avitabile, V. et al. (2019) The role and need for space‑based forest biomass‑related measurements in environmental management and policy. Surveys in Geophysics 40: 757–778.
Kugler, F., Schulze, D., Hajnsek, I., et al., (2014) TanDEM-X Pol-InSAR performance for forest height estimation, IEEE Trans. Geoscience Remote Sensing, 52(10), 6404-6422.
McRoberts, R. E., Næsset, E., Sannier, C., et al. (2020) Remote sensing support for the gain-loss approach for greenhouse gas inventories. Remote Sensing 12(11):1891.
Miettinen, J., Rauste, Y., Gomez, S., et al. (2021) Compendium of research and development needs for implementation of European Sustainable Forest Management Copernicus Capacity; Version 2. Available at: https://www.reddcopernicus.info/wp-content/uploads/2021/06/REDDCopernicus_RD_Needs_SFM_V2.pdf
Olesk, A., Praks, J., Antropov, O. et al. (2016) Interferometric SAR coherence models for characterization of hemiboreal forests using TanDEM-X data. Remote Sens. 2016, 8, 700.
Olofsson, P., Foody, G.M.; Herold, M.; Stehman, S.; Woodcock, C.; Wulder, M. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–47.
Santoro, M., Beer, C., Cartus, et al. (2011). Retrieval of growing stock volume in boreal forest using hyper-temporal series of Envisat ASAR ScanSAR backscatter measurements. Remote Sensing of Environment, 115, 490 – 507.
Ståhl, G., Saarela, S., Schnell, S.,et al. (2016) Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation. For. Ecosyst. 3, 5.
Stelmaszczuk-Górska, M. A., Urbazaev, M., Schmullius, C., & Thiel, C. (2018). Estimation of above-ground biomass over boreal forests in Siberia using updated in situ, ALOS-2 PALSAR-2, and RADARSAT-2 data. Remote Sensing, 10.
Tomppo, E., Ronoud, G., Antropov, O, et al. (2021) Detection of forest windstorm damages with multitemporal SAR data—A case study: Finland. Remote Sensing, 13(3):383.
Earth observation capacity to map terrestrial ecosystems and to monitor forest disturbance and forest regeneration under climate change scenarios plays a fundamental role in supporting sustainable forest ecosystem management. Mapping terrestrial ecosystems and identifying their disturbances using satellite Earth observation data has improved over the past years, with the development of many algorithms taking advantage of dense time series at high spatial resolution. A significant contribution to land monitoring is offered by the Copernicus Sentinel-1 and Sentinel-2 satellite constellations, whose high revisit frequency, observation scenario and ensured continuity are encouraging the development of operational monitoring services to support sustainable ecosystem management.
In this frame, we present results from the application of various analytical methods to identify a wide range of forest disturbances, from moderate to severe, caused by natural and anthropogenic phenomena or factors. A temporal approach is applied to detect forest disturbances; the resulting spatio-temporal patterns and spectral information are used to identify the underlying cause, in particular wildfires, forest clearcut logging and insect pest outbreaks. Near real-time forest change assessment, based on the analysis of both the temporal and spectral domains, provides updated information to support ecosystem surveillance activities. The analysis of full time series from multiple high spatial resolution satellite data sources, rigorously spatially co-registered, allows the monitoring of forest phenology anomalies and represents a promising tool to monitor ecological status, forest changes and restoration, and climate change impacts on ecosystems.
The monitoring capacity offered by existing operational Earth observation satellite constellations has opened the way to the assessment of critical ecological processes in forest ecosystems at unprecedented spatial resolution and to an improved understanding of forest dynamics. The presented case studies and methodologies show the capability to identify, map and characterize forest disturbances, highlighting the suitability of the proposed approaches for developing operational services that support sustainable forest ecosystem management.
In recent years, an increasing rate of forest disturbances has been observed in several regions of Central Europe. Bark beetles are among the most prominent biotic agents causing these disturbances. At the same time, ongoing climate change leads to more frequent occurrences of abiotic change drivers such as storm events. Affected forest areas can be difficult to map with high accuracy and small temporal lag using Earth observation data, although time series analysis of satellite imagery is known to offer new possibilities towards this goal. However, the varying and often subtle changes in the spectral signal related to biotic and abiotic agents are often hidden in a mixture of phenology, signal artifacts, and noise, leading to rather high false alarm rates when attempting to map small-scale disturbances.
It has been shown that structural time series models in combination with the Kalman filter are powerful tools with respect to phenology-adaptive signal tracking and noise reduction [1, 2]. This class of stochastic models enables an online decomposition of the input signal into trend and seasonal components, with the most weight being placed on recent observations, thus improving signal tracking. The presence of a wide range of possible signal artifacts is problematic in this regard. Due to imperfections in image pre-processing, there can be unmasked clouds with varying degrees of transparency, fog, haze, snow, and cloud shadows, in short everything that causes a deviation in the signal that has nothing to do with the actual ground information to be tracked. On the one hand, these artifacts cannot be treated as Gaussian noise, but on the other hand they are often not significant enough to be reliably detected by statistical outlier tests. In our experience these artifacts occur much more often than initially expected and reduce the potential of Sentinel-2 imagery for forest monitoring.
One option to improve the signal tracking ability of the Kalman filter in the presence of artifacts is to reliably down-weight or discard these observations altogether. To this end, a novel pre-processing workflow based on the tasseled cap transformation is developed. Taking into account that, for now, the only area of interest is forest and no thematic classification of artifacts is required, the properties of certain tasseled cap components can be used to derive a reliable indication of the quality of an observation. This quality indication is then used to dynamically set the observation noise parameter of the Kalman filter on a per-pixel basis. Improved signal tracking means that the estimated model states (here trend and seasonals) represent the current ground information more closely, which better reflects the principal idea of monitoring. In addition to existing change detection methods, where affected areas are identified by comparing pre-change model forecasts with new observations, it is also possible to take the effects of changes on the model states into account, which might improve the detectability of subtle and slowly progressing disturbances.
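For illustration, a simplified sketch of the adaptive filtering idea is given below: a structural model with a local level and one annual harmonic, in which the observation noise variance is inflated for low-quality observations so that artifacts barely update the state. The state definition, noise values and quality mapping are assumptions made for the sketch, not the operational workflow.

```python
# Conceptual Kalman filter step with a per-observation noise variance derived
# from a quality indication (e.g. based on tasseled cap components).
import numpy as np

def kalman_step(x, P, y, quality, F, H, Q, r_base):
    # Dynamic observation noise: poor quality -> large variance -> small gain
    R = r_base / max(quality, 1e-3)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    innovation = y - H @ x_pred
    S = H @ P_pred @ H.T + R                    # innovation variance
    K = P_pred @ H.T / S                        # Kalman gain (3 x 1)
    x_new = x_pred + K.flatten() * innovation
    P_new = (np.eye(len(x)) - np.outer(K, H)) @ P_pred
    return x_new, P_new, innovation

# State: [level, seasonal cosine term, seasonal sine term], annual cycle
T, dt = 365.25, 5.0                             # period and revisit in days
w = 2 * np.pi * dt / T
F = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(w), np.sin(w)],
              [0.0, -np.sin(w), np.cos(w)]])
H = np.array([[1.0, 1.0, 0.0]])                 # observation = level + season
Q = np.diag([1e-4, 1e-5, 1e-5])                 # process noise (placeholder)
```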
The methodological improvements are tested in selected study areas in Austria and Germany for mapping bark beetle infested forest stands and storm damage. We would appreciate the opportunity to share and discuss first results of the improved workflow in a presentation.
[1] Puhm, M.; Deutscher, J.; Hirschmugl, M.; Wimmer, A.; Schmitt, U.; Schardt, M. A Near Real-Time Method for Forest Change Detection Based on a Structural Time Series Model and the Kalman Filter. Remote Sensing, Volume 12, 2020, 3135. https://doi.org/10.3390/rs12193135
[2] Ye, S.; Rogan, J.; Zhu, Z.; Eastman, J. R. A near-real-time approach for monitoring forest disturbance using Landsat time series: stochastic continuous change detection, Remote Sensing of Environment, Volume 252, 2021, 112167, https://doi.org/10.1016/j.rse.2020.112167
The United Nations Framework Convention on Climate Change (UNFCCC) has initiated the REDD+ programme to reduce deforestation and forest degradation. In order to support REDD+ activities, four different types of remote sensing products are targeted by the research community: a) forest and forest change maps addressing deforestation areas; b) land use maps addressing carbon sinks or carbon sources through land cover or land use changes; c) forest degradation maps addressing carbon sources within forests; and d) biomass maps estimating carbon stock directly without emission factors.
In the project REACTIFI, we develop a prototype for a Copernicus satellite data-based forest inventory service for test areas in Uganda. This service shall provide Central and East African REDD+ stakeholders with data to support regional and national Monitoring, Reporting and Verification (MRV) processes. Uganda’s REDD+ forest monitoring is one of the most advanced in Africa. The existing activity data is based on Landsat and Sentinel-2 imagery. The existing forest monitoring system still does not account for intra-annual forest degradation, does not provide the wall-to-wall information necessary for forest management, and does not cover all relevant LULC types, such as agroforestry systems. Our research focuses on the development of innovative methods for improved mapping of forest degradation & agroforestry systems, methods for improved activity data (LULC), and for detecting different forest changes (deforestation, selective logging, afforestation).
Here we present the first project results: new methods for forest change detection based on Sentinel-1 and Sentinel-2 data, and LULC products for improved activity data in Uganda. Regarding near-real-time forest change detection, two methods are improved and applied in the test regions using different sensors:
- Sentinel-2 data: We apply a near-real-time change detection method that combines a structural time series model with the Kalman filter. Forest changes are detected using a cumulative sum control chart (CUSUM), which decides whether new observations deviate from model-based forecasts (Puhm et al., 2020). This approach is more robust to phenology outliers than simple least squares fitting approaches. Workflow improvements focus on weighting the model innovations by a cloud probability estimate, which reduces model update errors.
- Sentinel-1 data: We introduce a near real-time mapping approach based on a moving time window, Sentinel-1 backscatter changes (for VV and VH polarisation) and the coefficient of variation. We use sets of ten consecutive Sentinel-1 scenes to calculate the coefficient of variation and the backscatter change. Changes within the forest layers are derived by applying empirical thresholds from known forest disturbance areas (see the sketch after this list). This method allows for a timely detection of forest changes.
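The following minimal sketch illustrates the Sentinel-1 indicators described in the list above; the threshold values are placeholders, as the operational thresholds are derived empirically from known disturbance areas.

```python
# Sketch: coefficient of variation and backscatter change over a moving window
# of ten consecutive Sentinel-1 scenes for one polarisation (VV or VH).
import numpy as np

def s1_change_flags(stack_db, cv_threshold=0.10, change_threshold_db=-2.0):
    """stack_db: (10, rows, cols) backscatter in dB for one polarisation."""
    linear = 10 ** (stack_db / 10.0)                  # CV computed in linear units
    cv = linear.std(axis=0) / linear.mean(axis=0)
    change_db = stack_db[5:].mean(axis=0) - stack_db[:5].mean(axis=0)
    # Flag pixels with high temporal variability and a backscatter drop
    return (cv > cv_threshold) & (change_db < change_threshold_db)
```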
LULC products are improved by providing more detailed mapping of the thematic classes, using the higher spatial resolution of Sentinel-2 (10 m) instead of Landsat-8 (30 m).
- Improved LULC maps for the following classes: Tropical High Forest, Bushland, Natural Grassland, Wetland, Cultivated and Managed Areas, Built-up Areas, Open Water and Impediments. The annual Land Cover Land Use products are derived using yearly Sentinel-2 time series and the Random Forest classifier. To cover the phenological differences of different land cover and land use types, time features over seasonal periods are generated, covering dry and wet seasons. From these LULC products, we derive several layers such as a new forest map with an MMU of 1 ha and a detailed Tree Cover Map with an MMU of 0.1 ha, which also includes smaller tree-covered patches belonging to agroforestry areas.
- First demonstration products for mapping potential agroforestry areas are generated, covering smaller tree patches in the context of agricultural areas. Due to their varying percentage of tree cover, these are difficult to assess with Earth Observation (EO) data, especially at Landsat spatial resolution (30 m). The higher spatial resolution and temporal density of the Copernicus Sentinel systems show high potential for increasing the mapping accuracy of agroforestry systems.
Currently, an airborne LiDAR campaign is being planned in Uganda. The plan is to combine the improved activity data with LiDAR 3D models and the national terrestrial forest inventory data. This will allow a much more accurate estimation of biomass values, carbon stocks and land use change specific carbon emission values for REDD+ reporting. The LULC change maps can then be combined with calculated emission factors for an improved wall-to-wall carbon accounting for REDD+ MRV.
A fully tailored and scalable web service, built on open-source technologies, visualizes all data and results generated. The web service will allow stakeholders to explore remote sensing data and derived products for their areas of interest, with the main goal of supporting their regional and national MRV processes. This way, stakeholders can access information on forest status and carbon loss caused by deforestation and forest degradation as well as additional auxiliary datasets such as forest and land cover layers, combine them with their own datasets, or compute statistical analyses on the fly. This will enable users to act more quickly and accurately than they can with their current systems.
The poster will present first product examples from forest disturbance mapping and LULC products and explain the workflows in more detail.
References
Puhm, M., Deutscher, J., Hirschmugl, M., Wimmer, A., Schmitt, U., & Schardt, M. (2020). A near real-time method for forest change detection based on a structural time series model and the Kalman filter. Remote Sensing, 12(19), 3135.
The Cerrado has already lost more than half of its natural vegetation due to the expansion of agriculture and pasture. Deforestation in the Cerrado threatens its biodiversity, as well as Brazil’s water supply, since many headwaters of important rivers are located within this biome. Therefore, it is essential to accurately map deforestation to better understand its main drivers, to plan actions that can avoid it and to promote environmental restoration.
The use of satellite time series and machine learning techniques allows the semi-automatic monitoring of large-scale areas. Usually, for earth observation applications based on optical satellite time series, it is recommended to correct atmospheric effects and other types of noise that hinder information extraction. However, considering that environmental monitoring systems need to generate quick responses, the atmospheric correction step in the processing workflow represents an additional computational cost. Deep learning techniques were designed to take raw data as input and automatically determine the representations needed for classification tasks. Thus, our main objective is to compare the classification of deforested areas using a deep learning method based on time series composed of images with and without atmospheric correction, that is, using Digital Number (DN) and Surface Reflectance (SR) images.
We created four distinct data cubes from Landsat-8 and Sentinel-2 DN and SR images. We used the Land Surface Reflectance Code (LaSRC) and Sen2Cor for Landsat-8 and Sentinel-2, respectively. The study area is located in western Bahia state (northeast Brazil), a deforestation hotspot in the country's main agricultural frontier. We used PRODES data as reference to create training samples and validate the classes “Primary Vegetation”, “2020 Deforestation” and “Past Deforested Areas”. We selected 15,000 training samples from each data cube, taking into account the balance among the classes. We then performed a cross-validation procedure with 10 partitions for the DN and SR data cube samples, and evaluated five training models for each satellite. The first model has one Long Short-Term Memory (LSTM) layer, the second one has two LSTM layers, and so on. In the training process, the same training hyperparameters, obtained empirically, were used for all models. After training, we selected the models whose results showed the best mean overall accuracy over their 10 iterations to generate the classification maps. Consequently, four maps were created and validated with the reference data, and the results were evaluated using the overall accuracy and the class F1-scores.
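For illustration, a minimal sketch of this model family (one to five stacked LSTM layers followed by a softmax output) is given below; the layer width, sequence length and training settings are illustrative placeholders, not the hyperparameters tuned in this study.

```python
# Minimal sketch of the LSTM model family: k stacked LSTM layers followed by a
# softmax layer for the three classes. Sizes and settings are placeholders.
import tensorflow as tf

def build_lstm_model(n_lstm_layers, n_timesteps, n_bands, n_classes=3):
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(n_timesteps, n_bands)))
    for i in range(n_lstm_layers):
        # intermediate layers must return the full sequence for the next LSTM
        model.add(tf.keras.layers.LSTM(64, return_sequences=(i < n_lstm_layers - 1)))
    model.add(tf.keras.layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Models 1 to 5 differ only in the number of stacked LSTM layers
models = {k: build_lstm_model(k, n_timesteps=23, n_bands=6) for k in range(1, 6)}
```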
According to our results, Model 4 and Model 5 had the highest mean overall accuracies with Landsat and Sentinel data, respectively. The classifications generated from DN and SR datasets showed overall accuracies around 90%. The class “2020 Deforestation” presented a lower F1-Score value compared to the other classes, due to confusion with the “Past Deforested Areas” class.
LSTM models showed similar mean accuracy for both SR and DN datasets, considering the cross-validation training accuracies. Therefore, DN time series data can be used for deforestation detection instead of SR, reducing the computational cost of the processing steps. In the future, other comparisons should be performed among products generated by different atmospheric correction algorithms, and to further develop the understanding of using optical time series data to detect deforestation in the Cerrado, we recommend additional study sites in the Cerrado and other biomes.
Protected Areas (PAs) are the cornerstone of conservation policies. Forest ecosystems generally play an important role within this context. The aim of this study is to assess the effectiveness of European Protected Areas in conserving forest structural integrity compared to non-protected forests using vegetation structural variables from remote sensing.
For this purpose, we took into account two different PAs classification systems and databases: (i) the World Database on Protected Areas (WDPA) and (ii) Natura 2000.
The WDPA is the most comprehensive global database on protected areas, representing de facto the global standard for PAs. It is a joint project between the United Nations Environment Programme and the International Union for Conservation of Nature (IUCN). PAs are classified according to different categories based on their management objectives and are labeled using seven classes, ranging from strict protection (Class Ia) to multiple use (Class VI). For example, IUCN classes Ia and Ib denote strictly protected areas that are excluded from forest management.
Natura 2000 is a European database of protected areas that collects the “Special Areas of Conservation” and “Special Protection Areas” designated under the Habitats Directive and the Birds Directive, respectively. Natura 2000 is not classified by management objectives, and its PAs therefore show different protection levels: many sites are farmed and some are even in urban areas.
In the course of this study, we specifically addressed two questions: i) are vegetation structural variables different between forests located inside or outside of PAs over Europe? ii) what are the differences between forests in strictly protected areas from WDPA and Natura 2000?
To do so, we combined Light Detection And Ranging (LiDAR) information from the Global Ecosystem Dynamics Investigation (GEDI) mission with satellite imagery (Sentinel 1 and 2) and other geospatial data to map forest structural variables - proxies of structural integrity - over Europe. Forest structural variables considered here are: Tree Height, Foliage Height Diversity, Plant Area Index and Plant Area Volume Density.
A thorough data standardization, filtering and sub-sampling procedure was developed to render data within PAs comparable to data over unprotected areas. Bayesian hierarchical models were used to quantify the difference between the variables within PAs and unprotected areas. The analyses were performed on spatial grid level and biogeographical level (using the biogeographical regions dataset of the European Environment Agency).
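For illustration, a condensed sketch of one possible hierarchical specification is given below, in which the protection effect on a structural variable varies by biogeographical region around a common mean; the priors, likelihood and variable names are assumptions made for the sketch, not the exact models used in this study.

```python
# Condensed sketch of a hierarchical model for one structural variable (e.g.
# tree height): a region-specific baseline plus a region-specific protection
# effect drawn from a common distribution. Priors are illustrative only.
import numpy as np
import pymc as pm

# height: GEDI-derived tree height per sample; protected: 0/1 indicator;
# region_idx: integer index of the biogeographical region of each sample
n_regions = int(region_idx.max()) + 1

with pm.Model() as model:
    mu_effect = pm.Normal("mu_effect", 0.0, 5.0)        # mean protection effect
    sd_effect = pm.HalfNormal("sd_effect", 3.0)
    effect = pm.Normal("effect", mu_effect, sd_effect, shape=n_regions)
    baseline = pm.Normal("baseline", 20.0, 10.0, shape=n_regions)
    sigma = pm.HalfNormal("sigma", 5.0)
    mu = baseline[region_idx] + effect[region_idx] * protected
    pm.Normal("obs", mu, sigma, observed=height)
    idata = pm.sample(draws=1000, tune=1000)
```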
The results show systematically higher tree height and forest structural complexity within PAs compared to unprotected areas, indicating that environmental protection does have an impact on forest structural properties. Strong differences emerged when the analyses were performed on subsets of the data related to IUCN categories with different degrees of protection. Furthermore, an analysis of the correlation between the structural variables and the PA designation year showed that the latter is important in determining the structural complexity of the forests within PAs.
Our results highlight the importance of LiDAR data in environmental monitoring and conservation assessments, and prove the capability of GEDI data for the large-scale assessment of forest structural parameters. These data can complement field-based information and support the mapping of vegetation structural variables derived from LiDAR, greatly contributing to accelerating the monitoring of forests in European PAs and beyond. At a more general level, our results provide robust and quantitative baseline information, the first of its kind over Europe, on the current status of forest structural properties. This can guide ongoing efforts aimed at assessing the status of forests and the effectiveness of forest-related EU policies towards conservation targets.
Deforestation rates in sub-Saharan Africa have received less attention than those in other tropical regions, despite evidence of increasing rates in the last twenty years (Temgoua et al., 2018). For example, in Cameroon, forests are threatened by foreign investments in large agro-industrial concessions, the expansion of small-scale agriculture, and increasing mining activity (Verheggen et al., 2021). Effective monitoring is challenging because of weak land legislation (Kouba et al., 2020), the relative importance of informal economies, and the difficulty of access in many areas (Verheggen et al., 2021). Earth Observation (EO) offers a promising solution but has known limitations for detecting the specific drivers of deforestation and small-scale degradation, due to the spatial and temporal resolution of the commonly used and freely available datasets such as Landsat.
PlanetScope data has recently been made open access for the tropics through Norway’s International Climate & Forests Initiative (NICFI). This high-resolution dataset (< 5 m) offers new opportunities for identifying the drivers of land-use change, such as crop or plantation types, as well as small perturbations that often are the first warning of a future larger impact. Deep learning methods, especially Convolutional Neural Networks (CNN), have proven to perform classification tasks more accurately than traditional machine learning methods such as Random Forest (RF) or Support Vector Machines (SVM). This is the type of approach used to design ForestNet, which demonstrated better classification performance for broad land-use categories (i.e. plantation, smallholder agriculture, grassland/shrubland and other) using Landsat data for Indonesia (Irvin et al., 2020).
Using Cameroon as a case study, we explore the ability of PlanetScope data to support land-use classification using a modified version of ForestNet with additional classes, in order to understand the specific drivers of land-use change. This approach focuses mainly on distinguishing crop types and mining activities, as well as revealing small-scale changes that cannot be detected at lower spatial resolution. To carry out this analysis, we fine-tune the model to make the most of the amount of labelled PlanetScope images available, alongside an existing resource of labelled Landsat images. In addition, we explore the need for additional modifications to the ForestNet CNN design to integrate temporal patterns, to assess whether the integration of time series EO data can increase the classification confidence. Finally, we assess the feasibility of fusing the findings from EO data with ground data such as demographic information and community-led participatory land-use mapping, to show the value of including additional socio-economic inputs for accuracy in classification and for identifying how the local context drives land-use change.
References:
Irvin, J. et al. (2020) ‘ForestNet: Classifying Drivers of Deforestation in Indonesia using Deep Learning on Satellite Imagery’ [Preprint]. Available at: http://arxiv.org/abs/2011.05479 (Accessed: 22 October 2021).
Kouba, S. et al. (2020) Securing land rights in Cameroon: what hasn’t worked and what should be done. Available at: https://pubs.iied.org/17752iied (Accessed: 19 November 2021).
Temgoua, L.F., Ajonina, G. and Woyu, H.B. (2018) ‘Land Use and Land Cover Change Analysis in Ajei Upland Watershed Community Forest, North West Region, Cameroon’, Journal of Geoscience and Environment Protection, 06(09), pp. 83–99. doi:10.4236/gep.2018.69007.
Verheggen, A., Beauchamp, E. and Seigneret, A. (2021) Democratizing earth observation to improve transparency in land use governance. CED, Yaoundé. Unpublished.
The European Commission presented its new EU Forestry Strategy for 2030 in July 2021. The strategy supports the role and importance of forests to our economy and society and sets objectives to protect, restore and enlarge the EU’s forests as an answer to climate change and biodiversity loss. Unfortunately, European forests are under increasing stress, induced by changing climate conditions, pests, pollution, diseases and fire risks. “This new EU Forest Strategy aims to overcome these challenges and unlock the potential of forests for our future, in full respect for the principle of subsidiarity, best available scientific evidence and Better Regulation requirements. It is anchored in the European Green Deal and the EU 2030 Biodiversity Strategy…” (New EU Forest Strategy for 2030, 2021).
An essential part of the success of the new EU Forest Strategy for 2030 is the development and implementation of strategic forest monitoring, data collection and reporting. To reach this, regular and cost-efficient reporting is essential, updating progress in forest management priority areas such as biodiversity, forest health, damages and invasive alien species. Effective reporting will also be instrumental for the implementation of the planned payment scheme for ecosystem services for forest owners and managers.
This presentation will demonstrate how modern very high resolution (VHR2) optical Earth Observation satellites such as PlanetScope can provide dense, high-resolution time series that can act as a catalysing force towards meeting important objectives of the new EU Forest Strategy. These datasets can be crucial for complementing Sentinel-2 capabilities, for example by increasing the level of detail of vegetation cover classification. Furthermore, the ability of PlanetScope data to precisely detect and map disturbances to forest vitality will be presented, based on real examples from Planet’s commercial activities.
The objective of the presentation is to show that remote sensing based tools and technologies are ready and operational to be included in technical solutions for strategic forest monitoring, data collection and reporting, and moreover that these should be used to support the European logging industry in fulfilling regulatory requirements. Detailed EU forest monitoring can be a rich source of frequently updated forest status parameters, which are equally important for the success of commercial activities, the protection of biodiversity and the fight against climate change.
India’s forests, which cover over 70 million hectares, support the livelihoods of over 300 million inhabitants. Phenomenal and rapid economic growth in recent decades has adversely impacted pan-Indian forest ecological sustainability, depleted clean water resources and reduced management efficiency. Climate change is thought to have exacerbated natural hazards, with more frequent and intense episodes of flooding and drought leading to devastating forest fires, severe insect and pathogen infestations of forests, decreased agricultural productivity, degraded forest ecosystems, watershed quality and human health, and adverse impacts on the socio-economy of forest communities. The ultimate objective of this project is to improve forest sustainability in India, focusing on regions in the states of Madhya Pradesh, Himachal Pradesh, Maharashtra, Tamil Nadu, Uttarakhand, and mangrove forests in coastal West Bengal. The physical science component of the USAID project, REmote Sensing for Forest Renewal, Ecosystem Services, and Sustainable Hydrological Management (REFRESH), includes the development of a multi-satellite geodetic and other sensor-based Earth observational system for the holistic and timely quantification of physical and ecological processes impacting pan-Indian forestry, and for monitoring natural hazards, water resources and ecological changes for improved forest management. Here, we present preliminary results of pan-Indian forest and natural hazards monitoring using a suite of satellite geodetic sensors, including multi-mission radar/laser altimeters, lidar, synthetic aperture radar, GNSS-reflectometry, satellite gravimetry, high spatiotemporal resolution multispectral imagery acquired by Planet’s CubeSat constellations, and others.
Sustainable and multifunctional forest management requires spatial information with respect to a wide range of social, environmental and economic factors. In recent years, we have witnessed an accelerated development of remote sensing technologies including improvements in both spatial and temporal resolution of products. These developments have led to many new opportunities to derive digital spatial information products useful for forest management tasks. However, while the availability and quality of related remote sensing products has notably improved, the uptake of those products into operational forest management in many EU countries (particularly in central and Southern Europe) is still limited. To better understand current and potential needs, we designed an online questionnaire and distributed it to forestry experts in Austria, France, Germany, Italy, Poland, Slovenia, Spain, Switzerland and the UK.
The survey was developed based on a literature review of current operational forest information products derived from remote sensing data. Forestry experts were asked whether they knew about existing products and whether they match their needs. In addition, the survey provided participants with the opportunity to define their own information products that would be best suited for their work tasks. We also included some background information on the remote sensing products to help participating forest professionals/practitioners learn about current remote sensing capabilities in their field.
The results from a total of 459 responses reflect different management practices and indicate a diverse use of remote sensing products in forestry across the nine examined European countries. The use of spatial products depends on the country-specific forest management system (scale), on current local forestry problems and on the structure of the forest within the particular country. For example, most of the countries would be interested in having wood volume information available at the stand level, while for some of them pixel-level information would also be acceptable.
Furthermore, the survey allowed us to compile a set of criteria describing the end-users' needs for forest inventory products in terms of scale of analysis within the forest (tree, stand, landscape), geographical scope of analysis (local, regional, national, continental, global), required spatial resolution, frequency of data delivery, and expected minimal data precision/accuracy, etc.
The results of this survey allow us to better understand the requirements of forest professionals towards remote sensing-based information products. This study thereby helps to bridge communication deficits between the remote sensing community developing spatial information products and end-users in the forestry sector for whom the products are developed. We believe that the findings of our survey will, on the one hand, help the remote sensing community to develop products that are more targeted to the needs of forestry professionals and, on the other hand, encourage the forestry community to better define their needs regarding remote sensing products and finally increase the uptake of existing information into their operational work.
FUNDING
The work was conducted during the bilateral project INSANE: Innovative spatial information products for forest applications using new satellite technologies, funded by the Slovenian Research Agency (ARRS) and the German Academic Exchange Service (DAAD).
Climate change observed in recent years has a key impact on the growth dynamics and health of forest stands. Extreme weather events, especially prolonged and repeated droughts, cause damage to forest ecosystems. An additional factor is the biological instability of stands created by the afforestation of arable land. Such stands are particularly susceptible to disease outbreaks, which in consequence may lead to a significant reduction in productivity and even to biological degradation of the stand. It is therefore increasingly important to identify forests established on arable land and to develop a method for detecting and monitoring forest stands sensitive to biotic and abiotic factors. Satellite remote sensing allows the Earth's surface to be reconstructed and changes in land use and land cover to be monitored over the last 60 years. The aim of our research was to identify forest growing on arable land and to study forest dynamics between 1960 and 2018 using archival and current satellite imagery.
We examined archival satellite data from the CORONA program, a series of American strategic reconnaissance satellites operated by the U.S. Air Force; the data were released to the public in the 1990s. These data were particularly valuable for preparing a reference background for the temporal analysis and for identifying forest on arable land. Forest cover mapping based on the CORONA images was conducted using expert manual interpretation. For mapping forest extent in later periods, a modern classification method using machine learning algorithms was applied. The forest classifications were carried out on a series of satellite data from the Landsat 5 and 7 (1990 and 2000) and Sentinel-2 (2018) missions. We used deep learning, specifically a Convolutional Neural Network algorithm, to classify forest extent from the Landsat and Sentinel-2 data. Based on the forest mapping from the CORONA images and the Landsat and Sentinel-2 classifications, forest change maps were prepared for the periods 1960–1990–2000–2018. Quantitative and qualitative analyses of the forest cover and forest changes were then performed, together with the identification of forests on post-agricultural or other non-forest lands. Additionally, the usefulness of the satellite data used in the project for forest mapping was assessed.
The analyses presented at this conference were performed over the Regional Directorate of State Forests in Białystok, located in the north-eastern part of Poland. The study area covered over 900 thousand hectares. The forest area within the study area increased from 25.5% to 34.6% in the period 1964/65–2018. For forests under the management of the State Forests National Forest Holding, an increase from 19.2% to 22.9% was observed. In the non-state forests, the forest area increased from 7.4% to 13%. The share of post-agricultural forests equals 29% of the forest area in 2018, which is around 270 thousand hectares. The obtained results confirmed the high accuracy of the classification models. A spatio-temporal analysis allowed us to determine not only the location and area of post-agricultural forests, but also the period in which they were established. The CORONA images proved to be especially valuable for the detection of forest on post-agricultural land. The Landsat 5 and 7 data series from 1990 and 2000 were comparable in quality.
The Sentinel-2 data, due to their high spatial, spectral and temporal resolution as well as a wide swath, are particularly valuable for forest and forest change mapping, and monitoring the forest condition at local, regional and national scale.
This research was funded by the General Directorate of State Forests in Poland.
Natural rubber produced from the tree Hevea brasiliensis is used in tens of thousands of products, notably in the manufacture of over two billion tyres annually. Increasing global demand led to record-high rubber prices in the first decade of the millennium. This has driven large-scale and often unregulated land conversion to monoculture rubber all over South-East Asia, and a significant loss of natural forests.
Identifying rubber plantations from space is challenging. Rubber is a tree crop that takes several years to mature. In its initial stages it can easily be confused with other types of crops, and once mature its spectral signature is similar to that of deciduous forest. In addition, over 80% of rubber is produced by smallholders, meaning that coarser-scale maps will miss fine-scale dynamics. Existing land-use change maps have mainly been produced at coarse resolution, and higher-resolution maps tend to cover only subsets of the region. In addition, the temporal resolution of existing assessments is low.
The scarcity of information means that there is low public awareness of the issues. On the contrary, consumers are frequently led to believe that they are buying a renewable, sustainable and carbon-neutral product “made of trees”, unaware that these monoculture tree plantations may have replaced forests, leading to net carbon emissions, biodiversity loss and environmental pollution due to the heavy application of agrochemicals.
We used a combination of Sentinel-2 and Landsat data to track the spread of rubber and the associated forest loss in South-East Asia over the last three decades. The high spatial and temporal resolution delivered in particular by Sentinel-2 allowed us to map fine-scale dynamics with higher precision than previously possible. Our analyses highlight that forest cover loss has been substantial and that there is a tight correlation between forest conversion to rubber and the global rubber price. We also show that conversion to rubber has frequently involved clearing forest in marginal areas that are not conducive to environmentally and economically sustainable production, because yields may be low and plantations may fail due to environmental stress or disease. Although these plantations were lucrative while rubber prices were high, they present a high economic risk to small-scale farmers when rubber prices drop. Taking seven years to reach maturity, rubber is a long-term investment, yet a single storm, disease, drought or cold spell can quickly destroy an entire plantation. In addition, the global rubber price is volatile; it crashed in 2014, meaning that many plantations were suddenly no longer economically viable - a lose-lose scenario for the livelihoods of small-scale farmers, for forests and for biodiversity.
In summary, while rubber as a renewable resource has the potential to contribute to climate change mitigation and sustainable livelihoods, in practice the crop often has significant negative environmental and social impacts. Our work, which involves a collaboration between several international and local partners, aims to raise public awareness, to underpin certification schemes and to inform wise land use.
Increasing climate extremes lead to the global phenomenon of increased tree mortality. Remote sensing provides great opportunities to track such dynamics. Effective and scalable approaches usually rely on multispectral data with moderate spatial resolution and high temporal resolution that are available at regional and global level (e.g., Landsat, Sentinel-2). Various remote sensing approaches exist that aim to track tree mortality by assessing vegetation status and changes, e.g., vegetation indices, classification, and time series analysis. Such products can either only inform on whether a stand is degraded (e.g., due to stress-induced loss of foliage or vegetation health status), or they provide information on deadwood for only a single observation at a particular point in time. Until now, no detailed data product has been available that explicitly informs on tree mortality over larger areas and multiple years. However, temporal and spatial information on tree mortality is of the utmost importance, primarily to understand the extent and dynamics of this phenomenon and secondarily to understand the underlying mechanisms and environmental drivers. The lack of such products can mostly be attributed to the scarcity of reference data needed to build models that can extrapolate across large spatial scales and time. In this study, we attempt to close this gap using extensive amounts of Unoccupied Aerial Vehicle (UAV) imagery and a Convolutional Neural Network (CNN) for automated reference data extraction. These automatically extracted reference data are then used to predict the fractional cover of standing deadwood at Sentinel-2 (S2) pixel level using time series data and Long Short-Term Memory (LSTM) Neural Networks.
Our UAV data are spread over Central Europe and include heterogeneous temperate forests in Germany (Southern Black Forest, Black Forest National Park, Hainich National Park, Dresdner Heide, Hardtwald Karlsruhe, Bretten) and Helsinki, Finland. We used a U-net CNN for the segmentation of standing deadwood in the UAV-based RGB imagery over approximately 470 ha of forest and achieved an F1 score of 0.89 (from independent validation). Given the very high spatial resolution (< 2 cm) of the UAV imagery, we were able to create high-resolution prediction maps of standing deadwood at UAV scale. These high-resolution products then enabled us to precisely quantify the standing dead tree cover per S2 pixel. We accessed S2 data via the Data and Information Access Services (DIAS) platform and extracted time series of the four 10 m-resolution spectral bands and the kernel NDVI (kNDVI). The time series cover a two-year period before the respective UAV acquisitions (2017-06-11 until 2021-10-17). Preprocessing included the removal of pixels with kNDVI smaller than 0.075, missing-value interpolation (e.g., due to clouds), averaging to 7-day intervals, and Savitzky-Golay filtering. The LSTM consisted of two bidirectional LSTM layers, each with 100 hidden units, followed by a fully connected layer with sigmoid activation that calculates the fractional cover of standing deadwood. The final model achieved R² = 0.65, RMSE = 0.169, and MAE = 0.123 in an independent validation.
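As an illustration of the regression network described above (two bidirectional LSTM layers with 100 hidden units each, followed by a fully connected layer with sigmoid activation), the following is a minimal PyTorch sketch. The number of input features is assumed to be the four 10 m bands plus kNDVI; the sequence length, batch handling and use of the last time step are assumptions, not details taken from the study.

# Sketch of an LSTM regressor for fractional standing-deadwood cover (assumptions as noted above).
import torch
import torch.nn as nn

class DeadwoodLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=100):
        super().__init__()
        # Two stacked bidirectional LSTM layers, 100 hidden units each.
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            num_layers=2, batch_first=True, bidirectional=True)
        # Fully connected layer with sigmoid maps to a fraction in [0, 1].
        self.head = nn.Sequential(nn.Linear(2 * hidden, 1), nn.Sigmoid())

    def forward(self, x):
        # x: (batch, time, features) weekly Sentinel-2 time series per pixel
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)  # prediction from the last time step

model = DeadwoodLSTM()
dummy = torch.randn(8, 104, 5)  # e.g. 8 pixels, roughly two years of 7-day composites
print(model(dummy).shape)       # torch.Size([8])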
Our results show that automatically extracted reference data from very-high resolution UAV-based RGB-imagery can be an effective source of training data for large-scale, multitemporal and satellite-based mapping applications. The results also show that with a high amount of detailed training data, accurately predicting standing deadwood cover at S2-pixel level is possible. With the LSTM Neural Network based on time series, the presented approach was not only able to extrapolate the results spatially but also temporally, thus enabling large-scale assessment of tree mortality dynamics over years. Such data products may not only be of great value to track forest loss, but also to understand its spatiotemporal patterns as a function of environmental drivers. Moreover, our study demonstrated that increasing open availability of high-resolution UAV-data will not only facilitate our capabilities for forest assessments on local scales, but also enhance our possibilities to exploit satellite missions for forest monitoring. Our findings therefore emphasize the importance of data sharing initiatives.
Mangrove forests are found on tropical and subtropical coastlines globally. They support a range of ecosystem services, including providing coastal protection from storms and erosion, harboring nurseries for fisheries, and supporting unique aquatic and terrestrial biodiversity. Mangroves are also one of the most carbon-rich ecosystems on the planet, and this high carbon storage capacity, in particular in soil carbon stocks, has highlighted their importance in the global forest and ocean carbon cycles. As such, mangroves play a significant role in climate change adaptation and mitigation, leading to their increased inclusion in countries’ Nationally Determined Contributions, national GHG inventories and other climate mitigation mechanisms. Despite the plethora of ecosystem services that they provide, mangrove forests are threatened across their range by direct human-driven deforestation as well as by impacts caused by climate change.
Aboveground forest biomass and structure, including parameters such as canopy height, are important variables that increase our understanding of current and future mangrove ecosystem function and of the ecosystem services provided by these systems. Spatial patterns of ecosystem structure also help predict vulnerability to human-caused and natural threats.
Mangrove extent, structure and changes vary considerably on both local and regional scales, resulting in large differences in estimates of losses, gains, aboveground carbon stocks and fluxes. Here we present new high-resolution global maps of mangrove canopy height derived from multiple spaceborne and global in situ datasets to characterize global spatial and temporal trends in forest structure, biomass and carbon stocks. By combining a global subset of GEDI footprint data over known mangrove areas with the 12 m TanDEM-X DEM product, we generate an updated global mangrove canopy height product. To derive mangrove biomass, we use a database of globally distributed mangrove field plots coupled with airborne lidar data to generate aboveground biomass and carbon stock models. Finally, we evaluate the changes in mangrove canopy height, biomass and carbon stocks from 2000 to the present. We describe recent trends in mangrove growth and loss from 2000 onwards, the associated carbon emissions, and the policy implications.
Digital surface models (DSMs) derived from spaceborne and airborne sensors enable the monitoring of the vertical structure of forests over large areas. Height is one of the most important attributes for characterizing forest stands. Besides being indispensable for estimating forest timber volume, it is very helpful for forest structure analyses, classification, mapping and change detection. Nevertheless, due to the lack of an objective performance assessment for this task, it is difficult to select the most appropriate data source for DSM generation. From a business perspective, a good trade-off between cost, effort and efficiency is key. The objective of this study was to examine the accuracies of SkySat and Pleiades DSMs in a mixed land-cover area of northern Italy near Turin. In addition, given the growing demand for VHR data, we paid attention to the level of detail that can be reached, especially for detecting small-sized changes. The accuracy of the DSMs is evaluated by comparison with LiDAR acquisitions, measuring planimetric and vertical shifts with GCPs; where reasonable, we also proceed with a pixel-wise quality assessment. For this study, two ready-to-use pipelines (Agisoft and ArcGIS Pro) were compared for Planet tri-stereo images against different Analysis Ready Data (ARD) from Airbus (with and without LiDAR GCPs as reference). Particular attention was paid to the consistency with the multispectral images: for example, the identification of small objects (i.e. single trees) and the shape of stable objects (i.e. buildings) used as reference was a central factor for the quality assessment. The accuracy comparison of the surface models showed that very high-resolution satellite stereo data are a valuable alternative to aerial stereo data for surface modelling if the images are bias-corrected with GCPs. Overall, the Airbus ARD are more exploitable for monitoring small changes. The evaluated planimetric shift is consistent with the value declared by the company, both for the DSM without GCPs (2.83 m EC90) and with GCPs (1.52 m EC90). Regarding the vertical accuracy, a difference of 3.66 m LE90 was measured for the best case with GCPs. On the other hand, SkySat DSMs are more suitable for less detailed surface dynamics and large-scale exploitation. Although this kind of data is more cost-effective, the company does not provide ARD, and the instruments and control systems are typically inferior to those of the larger, more traditional optical imaging satellites.
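For readers unfamiliar with the accuracy measures quoted above, LE90 is commonly taken as the 90th percentile of absolute vertical errors against the reference, and the planimetric 90 % error as the 90th percentile of horizontal offset magnitudes; the short sketch below assumes these definitions and uses placeholder residuals, not the study's values.

# Illustrative computation of LE90 (vertical) and a 90 % planimetric error (assumed definitions).
import numpy as np

def le90(dsm_heights, lidar_heights):
    # 90th percentile of absolute vertical differences (metres)
    dz = np.abs(np.asarray(dsm_heights) - np.asarray(lidar_heights))
    return np.percentile(dz, 90)

def ce90(dx, dy):
    # 90th percentile of horizontal offset magnitudes at check points (metres)
    r = np.hypot(np.asarray(dx), np.asarray(dy))
    return np.percentile(r, 90)

# Placeholder residuals at ground control / check points:
rng = np.random.default_rng(0)
print(le90(rng.normal(0, 2.0, 500), np.zeros(500)))
print(ce90(rng.normal(0, 1.0, 500), rng.normal(0, 1.0, 500)))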
The fight against illegal deforestation in tropical regions is a key challenge for mitigating climate change. The establishment of reliable forest cover change early warning systems (EWS) is one asset that can complement the MRV system of REDD+. Such EWS rely on dense Earth observation time series provided notably by American and European initiatives, whose missions and programmes deliver both optical and radar imagery on a regular basis. To date, several tree cover change detection systems have been developed by different programs (e.g., GFW and RADD alerts), relying also on classical machine learning models to produce weekly and annual alerts.
The use of optical imagery hampers early change detection because of the cloud cover that is frequently present over tropical areas, especially during the wet season; at best, change detection is delayed by several weeks or even months. Radar imagery such as Sentinel-1 data is not affected by cloud cover and is therefore used as input data to our models.
In recent years, thanks to improvements in computing capabilities, deep learning methods have begun to be applied at larger scale, notably in the field of remote sensing for subjects such as land cover and land cover change, with promising results surpassing those provided by classical machine learning methods. The objective of our study was to show the contribution of deep learning methods compared to such classical machine learning methods in the context of tree cover change detection in tropical regions with radar images. The Madre de Dios region in Peru was chosen as our study region.
Our framework combined the use of Sentinel-1 time series with deep learning models composed of recurrent and convolutional networks for binary classification. Training, validation and test datasets were built from agreeing change alerts coming from both GFW and the government of Peru. The classes considered in this study were tree cover change and stable tree cover. Four time series of data patches (VV, VH, ascending, descending acquisition conditions) were provided as model inputs, with different time series lengths and patch sizes. The patches were extracted from a set of yearly Sentinel-1 images collected over two years (the first year was used to train the model, and the second was used for the evaluation). The model outputs were then assembled to produce a change map over our area of interest.
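To make the model inputs concrete, the following is a highly simplified, hypothetical sketch of a convolutional-recurrent binary classifier for Sentinel-1 patch time series; the actual architecture, channel ordering and hyperparameters used in the study may differ, and the 9x9 patch and 16-date series reflect the best configuration reported below.

# Hypothetical conv-recurrent classifier for Sentinel-1 patch time series (simplified sketch).
import torch
import torch.nn as nn

class S1ChangeClassifier(nn.Module):
    def __init__(self, n_channels=4, hidden=64):
        super().__init__()
        # Per-date CNN encoder for a 9x9 patch with 4 channels (VV/VH, ascending/descending).
        self.cnn = nn.Sequential(
            nn.Conv2d(n_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Recurrent layer over the acquisition dates, then a binary head.
        self.rnn = nn.GRU(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time, channels, 9, 9)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, h_n = self.rnn(feats)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)  # probability of tree cover change

model = S1ChangeClassifier()
print(model(torch.randn(2, 16, 4, 9, 9)).shape)  # torch.Size([2])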
The best results were obtained with a time series of 16 Sentinel-1 images, a patch size of nine-by-nine pixels and a post-processing over three images. The user's accuracy was 94.3% for the tree cover change class and 98.6% for the stable tree cover class; the producer's accuracy was 98.5% for the tree cover change class and 94.8% for the stable tree cover class. The communication will provide further details of the methodology and a comparison of the results with those from a random forest approach and from similar studies.
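For reference, user's and producer's accuracies of this kind can be derived from a two-class confusion matrix as in the short sketch below; the counts shown are placeholders, not the study's figures.

# User's and producer's accuracy from a 2x2 confusion matrix (placeholder counts).
import numpy as np

# rows = reference class, columns = predicted class: [change, stable]
cm = np.array([[970, 15],
               [ 58, 980]])

producers = np.diag(cm) / cm.sum(axis=1)  # per reference class (sensitive to omission errors)
users     = np.diag(cm) / cm.sum(axis=0)  # per predicted class (sensitive to commission errors)
print("producer's accuracy:", producers)
print("user's accuracy:", users)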
Given the importance of forests as carbon stocks in climate change dynamics, forest resource monitoring requires increasingly systematic Earth Observations (EO) and, in particular, climate services following the adoption and initial implementation of the Paris Agreement under the UNFCCC. Forest loss could have an important impact on rainfall changes. Providing informed adaptation options to climate variability and change through well-designed climate services is essential for decision makers.
In East Africa, Kenya's forests are rapidly declining due to pressure from increased population, technological innovation, urbanization, human development and other land uses. Water levels in the main Kenya Water Towers (KWT), and specifically in the Rift Valley lakes, have been rising since 2012; homes, schools, wildlife habitats and places of worship were submerged, leading to the displacement of several thousand people. These floods have been a major cause of concern for the country's socio-economic development. Population pressures, lack of planning and the commensurate increase in the demand for resources have placed undue pressure on these water towers. Environmental degradation, and more precisely deforestation, has become a common phenomenon in the rural landscape (Mwangi et al. 2020).
The purpose of this research is to monitor Land Cover/Land Use (LCLU) changes, notably loss of woody vegetation, of the KWT site from EO data at high resolution, at intervals from 1990, and to investigate the relationship between forest and water changes in the Rift Valley lakes. This study will first derive trends of changes from time series provided by global datasets and then analyse in detail all available satellite images since 1990 (including Landsat, Sentinel-1/2, Planet mosaics) in order to perform detailed LCLU analyses.
Several global datasets can provide a first overview for understanding the impact of Land Cover/Land Use (LCLU) changes on the water cycle. The Tropical Moist Forest (TMF) dataset depicts change information on the humid forest from 1990, with a distinction between deforestation and forest degradation (Vancutsem et al., 2021). The Global Surface Water (GSW) dataset quantifies changes in global surface water from 1984 (Pekel et al., 2016). According to the TMF dataset, the area covered by undisturbed moist forests has been continuously decreasing since 2000 (a reduction of about 57%) (Figure 1). Deforested areas increased by about 5% per year from 2000 to 2020 (428 to 851 km²), while the extent of degraded forests remained rather stable over the same period. A significant increase in forest regrowth was observed between 2000 and 2020, with an area going from 16 to 109 km². These figures show an important dynamic in these moist forests since 2000.
Important water dynamics were observed over the same 2000-2020 period based on the GSW dataset (Figure 2). Permanent water bodies (present throughout the period of observation) are separated from seasonal ones (present only part of the year). After some seasonality effects over the period 2000-2004, the water extent of the KWT site remained relatively stable for a few years. From 2010 to 2014 there was an important increase of the permanent water extent, and from 2015 to 2020 a small reduction followed by a further increase of the permanent water extent.
Based on these first results derived from global datasets, which show important forest and water dynamics, the next phase of this study is to provide detailed information on land cover/land use (LCLU) change with a focus on forest cover changes. We propose a step-wise approach to characterise land cover/land use changes by integrating historical EO data back to 1990 and the most recent EO-derived products. The methodology is based on several steps: (1) EO data pre-processing, (2) phenological classification, (3) ancillary data collection, (4) multi-date segmentation and pre-labelling, (5) visual checking, (6) accuracy assessment and (7) production of the change maps. All these steps are performed within the open-source and free IMPACT Toolbox software (Simonetti et al., 2015), which allows the approach to be replicated by any stakeholder.
The research showed that forest and water exhibit high dynamics over the KWT site over the past 20 years. Based on local information, it was confirmed that over the period 1990 to 2020 the catchment forest has been decreasing due to deforestation, while the water bodies have shown irregular dynamics: there was a rise in the volume of water attributed to the El Niño rains in 1997 and 1998, and between 2010 and 2020 the volume increased significantly due to unpredictable rainfall shocks. In order to better understand the link between the forest and water cycles, the next phase of this study will focus on developing models including several inputs (LCLU changes, slopes, soil composition, rainfall, evapotranspiration, etc.). The results over the KWT study site will then be generalized at regional scale to support the Regional Centres in providing dedicated Climate Services and to support national governments in sensitizing the public to the importance of such forests as catchment areas.
References:
Mwangi, K. K., Musili, A. M., Otieno, V. A., Endris, H. S., Sabiiti, G., Hassan, M. A., ... & Kanyanya, E. (2020). Vulnerability of Kenya’s Water Towers to Future Climate Change: An Assessment to Inform Decision Making in Watershed Management. American Journal of Climate Change, 9(3), 317-353.
Pekel, J. F., Cottam, A., Gorelick, N., & Belward, A. S. (2016). High-resolution mapping of global surface water and its long-term changes. Nature, 540 (7633), 418-422.
Simonetti, D., Marelli, A., & Eva, H. D. (2015). IMPACT: Portable GIS Toolbox for image processing and land cover mapping. Publication Office of the European Union, 10, 143497.
Simonetti, D., Simonetti, E., Szantoi, Z., Lupi, A., & Eva, H. D. (2015). First results from the phenology-based synthesis classifier using Landsat 8 imagery. IEEE Geoscience and remote sensing letters, 12(7), 1496-1500.
Vancutsem, C., Achard, F., Pekel, J. F., Vieilledent, G., Carboni, S., Simonetti, D., ... & Nasi, R. (2021). Long-term (1990–2019) monitoring of forest cover changes in the humid tropics. Science Advances, 7(10), eabe1603.
Satellite gravity missions offer a unique geodetic measurement technique that allows the direct observation of mass transport processes in the Earth system. Since 2000, CHAMP, GRACE, GOCE and GRACE-FO have almost continuously observed Earth's mass changes and have improved our understanding of large-scale processes such as the global water cycle, the melting of continental ice sheets and mountain glaciers, and changes in ocean mass closely related to the mass component of sea-level rise, all of which are subtle indicators of climate change on global to regional scales. This shows that mass transport observations are very valuable for long-term climate applications and for the validation of climate models. The existing observation record of more than two decades is already closing in on the minimum time series of 30 years needed to decouple natural and anthropogenic forcing mechanisms according to the Global Climate Observing System (GCOS). For long-term studies, it is crucial to keep an uninterrupted record of the Earth's mass changes, especially for climate-related research such as changes in Total Water Storage (TWS), which was adopted as a new Essential Climate Variable in 2020. Satellite gravity missions are the only measurement technique that allows a direct measurement of changes in TWS.
Next Generation Gravity Missions (NGGMs) are expected to be implemented in the near future to continue the observation record. The Mass-change And Geoscience International Constellation (acronym: MAGIC) is a joint investigation of ESA and NASA's MCDO study, resulting in a jointly agreed Mission Requirements Document (MRD) responding to the needs of the global user community. These NGGM concepts have raised high expectations for enhanced monitoring of mass transport in the Earth system with significantly improved spatial and temporal resolution.
This study is based on modeled mass transport time series of components of the TWS obtained from future climate projections until the year 2100 following the shared socio-economic pathway scenario 5-8.5 (SSP5-8.5). It evaluates the recoverability of long-term climate trends of the TWS by means of closed-loop numerical simulations of different current and NGGM concepts up to a spatial resolution of 250 km (spherical harmonic degree 80). The assumed satellite constellations are GRACE-type in-line single-pair missions and Bender double-pair missions with realistic noise assumptions for the key payload and for ocean-tide background model errors. In the interpretation and discussion of the results, special emphasis will be given to the dependence on the length of the measurement time series, to the quantification of the robustness of the derived trends and systematic changes, and to possibilities for improving the trend parameterization.
Sea surface salinity (SSS) is retrieved from SMOS and SMAP L-band radiometers at a spatial resolution of about 50 x 50 km2. In this presentation, we investigate 1) the subpixel SSS variability missed by the satellite measurements at global scale and its contribution to satellite versus in situ salinity comparisons, and 2) the SSS variability in the Senegal-Mauritania region, a region with strong variability related to runoffs, advection and eddies formation, as shown by satellite and in situ measurements and by model simulations.
Traditionally, validation of satellite SSS products is based on comparisons with in situ measurements at a few meters depth, which are mostly done at a single location and time. The sampling mismatch between the in situ near-surface salinity and the two-dimensional satellite SSS results in a sampling uncertainty that must be taken into account for the validation of the satellite salinities and their uncertainties. We use a small-scale resolution field (1/12° Mercator Global Ocean Physics Reanalysis) to estimate the expected uncertainty due to the sampling mismatch. Over the global ocean, most of the largest spatial variability of the satellite minus Argo salinity (taken as reference in situ data) is observed in regions with a large sampling mismatch. A quantitative validation of the satellite SSS and its associated uncertainties is performed by considering the statistical distribution of the satellite minus in situ salinity normalized by the sampling and retrieval uncertainties. This quantity should follow a Gaussian distribution with a standard deviation of 1 if all uncertainty contributions are properly accounted for. This methodology is applied to the merged Climate Change Initiative (CCI version 3) SSS products. We find that, at global scale, the sampling mismatch contributes to ~20% of the observed differences between Argo and satellite data; in highly variable regions (river plumes, fronts), the sampling mismatch is the dominant term explaining satellite minus Argo salinity differences.
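A minimal sketch of the normalization step described above is given below, assuming that independent sampling and retrieval uncertainty estimates are available for each match-up; the numbers are synthetic and only illustrate why a well-characterised product yields a standard deviation close to 1.

# Normalized satellite-minus-in-situ differences (should be ~N(0,1) if uncertainties are correct).
import numpy as np

def normalized_differences(sss_sat, sss_insitu, sigma_retrieval, sigma_sampling):
    # Assumes the retrieval and sampling uncertainty contributions are independent.
    sigma_total = np.sqrt(np.asarray(sigma_retrieval)**2 + np.asarray(sigma_sampling)**2)
    return (np.asarray(sss_sat) - np.asarray(sss_insitu)) / sigma_total

rng = np.random.default_rng(1)
z = normalized_differences(rng.normal(35.0, 0.25, 10000), 35.0, 0.2, 0.15)
print(z.std())  # close to 1 when the stated uncertainties explain the observed spread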
We deepen the analysis in a highly variable region by focusing on the Senegal-Mauritania upwelling region. This region benefits from a special focus at LOCEAN thanks to the cooperation between the French and Senegalese teams participating in the Eclairs2 International Joint Laboratory. The upwelling region is characterised by large small-pelagic fisheries due to its high biological productivity. A mooring (Melax buoy, located 34 km from the coast) is maintained by LOCEAN off Dakar. The high-resolution CROCO-PISCES model implemented in the region is used to interpret in situ measurements (Melax, Argo, commercial fleet) and to study the physical and biogeochemical processes involved. One of the major uncertainties of the model forcings is river runoff. The CCI version 3 SSS is in good agreement with the Melax SSS and consistently detects the large interannual variability of the low SSS observed in Fall 2015 and 2016. The interplay of ocean circulation and runoff variability is studied based on the CROCO simulations.
In view of the above studies and of ongoing discussions within the ESA CCI SSS user community, we will discuss the needs for satellite salinity products accuracy and spatio-temporal resolution.
Extreme weather events such as droughts and heat waves are becoming more frequent. Consequently, negative impacts on ecosystem functioning and services, such as carbon sequestration potential, are likewise increasing. Damage or mortality of trees can be observed globally in the wake of such events. For instance, the droughts and heat waves in Germany in 2018 and 2019 triggered severe diebacks of trees. However, the exact pathways of this compound event are still not well understood, and therefore the focus of this study lies on the impact of extreme weather events on German forests. Our innovation is the fusion of remote sensing and forest modeling. With the help of remote sensing imagery, we want to detect and quantify drought- and heat-induced changes in forest characteristics and stress. Additionally, with the process-based forest model FORMIND we will investigate the resilience of various species compositions to different extreme weather events. By combining these two methods we will achieve a deeper understanding of the consequences of climate change on forests today and in the future. However, the fusion of remote sensing (especially passive optical satellite data) and process-based forest modeling is challenging. One of our findings is that the success of this fusion strongly depends on the chosen radiative transfer model (RTM) and on the knowledge about the chosen forest. For this reason, we evaluate different fusion concepts with the RTM mScope and forest inventory data. In general, our results show that a fusion of remote sensing and forest modeling is possible and promising, providing a lot of information about the health of a forest. This fusion will be extended to other European forests. We aim at a general method that, with a correct parametrization, can also be used for other forests worldwide. With this, predictions of compositional changes of forests due to climate change might become possible and will allow us to better adapt our forests to future extreme weather events.
Observation-related activities are central to coupled assimilation and Earth system approach developments. Coupled assimilation aims at providing balanced initial conditions at the interfaces of coupled Earth system models. These models account for several components, including atmosphere, land, ocean, sea ice and waves, and they represent their complex interaction processes. Observations that lie at the Earth system interfaces are highly relevant for coupled assimilation as they depend on more than one component; they can therefore consistently and simultaneously inform on atmosphere and surface conditions, for example. These observations include conventional observations (e.g. snow depth, and screen-level temperature and humidity reports) and satellite observations such as scatterometer data or surface-sensitive radiances from passive infrared and microwave radiometers.
In this presentation we introduce coupled assimilation activities conducted in support of seamless Earth system approach developments for Numerical Weather Prediction and climate reanalysis. For operational applications, coupled assimilation requires reliable and timely access to observations in all the Earth system components, and it relies on consistent acquisition and monitoring approaches across the components. We discuss the challenges of assimilating surface-sensitive observations, and we show ongoing forward operator and coupling developments to enhance the exploitation of interface observations over land and ocean surfaces. We present plans to use new and future observation types from future observing systems such as the Copernicus Expansion missions.
The estimation of carbon and energy fluxes gives us insight into ecosystem-climate interactions and helps make prognoses for fluxes in a future climate. For this purpose, physically based models provide a robust method for upscaling measurements from the flux tower scale to a larger area. Currently, Earth Observation satellites deliver data to constrain retrieval algorithms for vegetation structure parameters, while climate models provide estimates of meteorological variables. However, vegetation functioning parameters, such as the maximum carboxylation capacity (Vcmax) and the parameters of the stomatal response, are still taken from look-up tables of plant functional types (PFT). This study aimed at quantifying the accuracy of simulations of gross primary productivity (GPP) and energy fluxes (net radiation, latent, sensible and ground heat flux) across Europe with the Soil Canopy Observation of Photosynthesis and Energy fluxes (SCOPE, Yang et al. 2021) model, constrained by weather, remote sensing of vegetation structure, and PFT-specific vegetation functioning parameters.
The vegetation structure parameter leaf area index (LAI) was retrieved from Sentinel-3 top-of-atmosphere radiance measured with the Ocean and Land Colour Instrument (OLCI). Meteorological data were taken from in situ observations and from the ERA5-Land dataset of the European Centre for Medium-Range Weather Forecasts (ECMWF) distributed along with the validation data: a dataset of flux tower measurements in the so-called Drought-2018 ecosystem eddy covariance flux product provided by ICOS (Drought 2018 Team). The values of Vcmax taken from Groenendijk et al. (2011), Kattge et al. (2009) and Norton et al. (2019) were evaluated against the default SCOPE value of 60 µmol m-2 s-1. In addition, a seasonally dynamic Vcmax (as a function of LAI) was used.
The results demonstrate high uncertainty of the flux simulations: from 16% to 36% of the mean annual values for GPP and from 36% to 46% for evapotranspiration. The default, seasonally static Vcmax outperformed all PFT-specific cases for GPP, with an RMSE of 283 g C m-2 yr-1 (R2 0.75); for ET the best performance (RMSE 139 mm yr-1, R2 0.46) was achieved with the Groenendijk et al. (2011) mean per-PFT values of Vcmax and BallBerrySlope. Ecosystems in the Mediterranean climatic zone, savannahs in Spain and evergreen needleleaf forest in Southern Italy, remain challenging for SCOPE due to the absence of an available-water constraint on the energy balance.
Overall, this work validates the carbon and energy balance parts of the SCOPE model across European eddy covariance sites. The results pave the way to the operational usage of SCOPE for ecosystem flux mapping, especially in view of the release of the faster SCOPE 2.0 version.
The project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 721995.
References
1. Groenendijk, M., A. J. Dolman, M. K. van der Molen, R. Leuning, A. Arneth, N. Delpierre, J. H.C. Gash, et al. 2011. “Assessing Parameter Variability in a Photosynthesis Model within and between Plant Functional Types Using Global Fluxnet Eddy Covariance Data.” Agricultural and Forest Meteorology 151 (1): 22–38. https://doi.org/10.1016/j.agrformet.2010.08.013.
2. Kattge, Jens, Wolfgang Knorr, Thomas Raddatz, and Christian Wirth. 2009. “Quantifying Photosynthetic Capacity and Its Relationship to Leaf Nitrogen Content for Global-Scale Terrestrial Biosphere Models.” Global Change Biology 15 (4): 976–91. https://doi.org/10.1111/j.1365-2486.2008.01744.x.
3. Norton, Alexander J, Peter J Rayner, Ernest N Koffi, Marko Scholze, Jeremy D Silver, and Ying-Ping Wang. 2019. “Estimating Global Gross Primary Productivity Using Chlorophyll Fluorescence and a Data Assimilation System with the BETHY-SCOPE Model.” BIOGEOSCIENCES 16 (15): 3069–93. https://doi.org/10.5194/bg-16-3069-2019.
4. Yang, Peiqi, Egor Prikaziuk, Wout Verhoef, and Christiaan van der Tol. 2021. “SCOPE 2.0: A Model to Simulate Vegetated Land Surface Fluxes and Satellite Signals.” Geoscientific Model Development 14 (7): 4697–4712. https://doi.org/10.5194/gmd-14-4697-2021.
5. Drought 2018 Team and ICOS Ecosystem Thematic Centre: Drought-2018 ecosystem eddy covariance flux product for 52 stations in FLUXNET-Archive format, https://doi.org/10.18160/YVR0-4898 , 2020
Understanding variations in the ocean heat content is tightly linked to understanding the geophysical interaction of the global energy cycle with the regional water cycle. Changes in the ocean water column can be estimated with complementary observations of sea surface height, ocean bottom pressure, temperature and salinity. However, temperature and salinity profiles from Argo suffer from spatio-temporal sampling problems, and some signals are not well captured, e.g. in the deeper ocean below 2000 m, around the boundary currents, in the Arctic, or in shelf/coastal regions that are not frequently visited by floats. Furthermore, using satellite gravimetry and altimetry observations, separate contributions to the global sea level can be estimated, but a regional solution is more challenging.
In order to improve the temporal and spatial coverage of oceanic temperature and salinity estimates as well as of regionally varying sea level contributions, we combine space geodetic data with Argo profiles in an inversion framework. Jointly processing radar altimetry, Argo and data from the Gravity Recovery and Climate Experiment (GRACE) benefits from the strengths of the individual datasets and produces consistent observation-based estimates of temperature, salinity and sea surface height changes. To solve the inverse problem for temperature and salinity, forward operators are formulated that link the satellite observations to temperature and salinity at depth. This is done by (1) parametrization of the temperature and salinity profiles over the full depth of the ocean with B-splines, to reduce dimensionality while keeping the complexity of the data intact, and (2) linearization of the integrated density from the parameterized T/S curves. We apply the forward operators in the East Indian Ocean to resolve sea surface height, ocean bottom pressure, temperature and salinity, and assess the regional importance of these factors. We explore the stability of a joint inversion using these forward operators in combination with along-track radar altimetry, GRACE, and temperature and salinity data by means of a closed-loop inversion.
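To illustrate the profile parametrization (step 1), the sketch below fits a cubic B-spline to a synthetic temperature profile with SciPy, so that a handful of spline coefficients replaces the full vertical sampling; the knot placement and the profile itself are illustrative only and do not reproduce the study's configuration.

# B-spline parametrization of a temperature profile (illustrative knots and synthetic data).
import numpy as np
from scipy.interpolate import make_lsq_spline

depth = np.linspace(0, 5000, 200)                 # depth levels (m)
temp = 2.0 + 18.0 * np.exp(-depth / 700.0)        # synthetic temperature profile (deg C)

k = 3                                             # cubic splines
interior = np.array([100, 300, 700, 1500, 3000])  # illustrative interior knots (m)
t = np.r_[[depth[0]] * (k + 1), interior, [depth[-1]] * (k + 1)]

spline = make_lsq_spline(depth, temp, t, k)       # least-squares B-spline fit
coeffs = spline.c                                 # low-dimensional parameter vector
print(len(coeffs), "coefficients replace", len(depth), "depth levels")
print(np.max(np.abs(spline(depth) - temp)))       # maximum fit residual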
Seasonal snow cover of the Northern Hemisphere (NH) is a major factor in the global climate system. The seasonal snow cover greatly influences surface albedo and, thus, the Earth’s energy balance, which makes snow cover an important variable in climate models. Additionally, snow cover significantly affects the hydrological cycle at high latitudes and in mountainous regions. Previously, substantial uncertainties have been reported in NH snow water equivalent (SWE) estimates. However, our knowledge of the NH SWE has recently improved considerably with new bias corrections which reduce the uncertainty of the SWE estimate integrated over NH significantly. With more accurate SWE estimates, the analysis of the climate models’ ability to describe the snow cover is far more meaningful and reliable.
In this study, we have evaluated NH SWE in CMIP6 (Coupled Model Intercomparison Project Phase 6) models with observation-based SWE reference data north of 40° N for the period 1982-2014 and analyzed with a regression approach whether model biases in temperature (T) and precipitation (P) could explain the model biases in SWE. We analyzed separately SWE in winter and SWE change rate in spring. For SWE reference data, we used bias-corrected SnowCCI data for non-mountainous regions and the mean of Brown, MERRA-2 and Crocus v7 datasets for the mountainous regions. The SnowCCI SWE data are based on satellite passive microwave radiometer data and in situ snow depth data. MERRA-2 is an atmospheric reanalysis and Brown and Crocus v7 are snow models driven by ERA-Interim reanalysis.
The analysis shows that CMIP6 models generally overestimate SWE, but large variability exists between models. Especially in winter, the SWE model biases are mainly positive, while in spring there are large differences in snowmelt rates between the models. In winter, P is the dominant factor causing SWE discrepancies, especially in the northern and coastal regions. T contributes to SWE biases mainly in regions where T is close to 0℃ in winter. In spring, the importance of T in explaining the snowmelt rate discrepancies increases. This is to be expected, because the increase in T is the main factor that causes snow to melt as spring progresses. Furthermore, it is obvious from the results that biases in T or P cannot explain all model biases, either in SWE in winter or in the snowmelt rate in spring. Other factors, such as deficiencies in model parameterizations and biases in the observational datasets, also contribute to the SWE discrepancies.
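A minimal sketch of the regression approach mentioned above, assuming gridded model-minus-reference biases have already been computed: the SWE bias of each grid cell is regressed on the corresponding T and P biases to estimate how much of the SWE bias variance they explain. The data below are synthetic placeholders.

# Regress SWE biases on temperature and precipitation biases (synthetic example data).
import numpy as np

rng = np.random.default_rng(2)
n = 2000                                   # grid cells
dT = rng.normal(0, 2.0, n)                 # model-minus-reference temperature bias (K)
dP = rng.normal(0, 0.3, n)                 # precipitation bias (mm/day)
dSWE = -8.0 * dT + 60.0 * dP + rng.normal(0, 5.0, n)   # synthetic SWE bias (mm)

X = np.column_stack([np.ones(n), dT, dP])
beta, *_ = np.linalg.lstsq(X, dSWE, rcond=None)
resid = dSWE - X @ beta
r2 = 1.0 - resid.var() / dSWE.var()
print("intercept, dT coefficient, dP coefficient:", beta)
print("fraction of SWE bias variance explained:", r2)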
For meaningful representations of large ice masses such as ice sheets and glaciers, the motion of ice-ocean interfaces is a key driver of ice dynamics. Recent advances in the fields of machine learning and big data allow us to extract the transition between ice and ocean from optical satellite imagery with high temporal resolution and accuracy. With this geometric information on frontal evolution, ice-sheet models of Greenland can be improved, because parametrizations of moving ice fronts become verifiable. Which components influence the position of the ice front, and how can we incorporate ice front positions into ice sheet models? The viscous ice flow towards the margins and the mass loss due to iceberg calving and frontal melt result in ice fronts that move over time. Ice sheet models include ice physics and provide ice flow velocities constrained by observed surface velocities through inverse modelling. The motion of the glacier is additionally highly influenced by the hydrological system beneath the glacier. Until now, no physically based calving law exists; we have to parametrize calving as well as frontal melt. However, changing ice front positions lead to stress changes and influence the dynamics of glaciers. This interaction between ice loss and ice flow is one of the challenges in combining ice front observations and modelling. We present two ways to include ice front positions in the Ice-sheet and Sea-level System Model (ISSM). In a first approach, we show that ice front positions can be defined by moving boundaries with a level-set method: solving a differential equation for the new unknown field of ice or no ice yields the motion of the frontal positions as observed in the satellite images, but without a physical basis in ice flow dynamics. In the second approach, we show parametrizations of calving and frontal melt in which we adjust quantities so that the resulting ice front positions fit the observed temporal evolution. The parametrization due to calving is one order of magnitude larger than the one due to ocean melt. In the end, we want to iterate between both approaches to arrive at an accurate parameterization of ice loss due to calving and melting that fits the observations and incorporates the physics of glacier flow.
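As a purely schematic illustration of the level-set idea (the ice front is moved by advecting a signed field whose zero contour marks the front), the 1D upwind sketch below advances a signed-distance-like field with a prescribed front speed; it is not ISSM code, and the grid, time step and front speed are arbitrary assumptions.

# Schematic 1D level-set advection: the zero crossing of phi marks the ice front (not ISSM code).
import numpy as np

nx, dx, dt = 400, 50.0, 3600.0           # grid cells, spacing (m), time step (s)
x = np.arange(nx) * dx
phi = x - 10_000.0                        # signed field: ice where phi < 0, ocean where phi > 0
w = 3e-4                                  # assumed net front speed (m/s): ice flow minus frontal ablation

for _ in range(24 * 30):                  # advance roughly one month
    dphi_dx = np.empty_like(phi)
    if w >= 0:                            # first-order upwind difference
        dphi_dx[1:] = (phi[1:] - phi[:-1]) / dx
        dphi_dx[0] = dphi_dx[1]
    else:
        dphi_dx[:-1] = (phi[1:] - phi[:-1]) / dx
        dphi_dx[-1] = dphi_dx[-2]
    phi = phi - dt * w * dphi_dx          # d(phi)/dt + w * d(phi)/dx = 0

front = x[np.argmin(np.abs(phi))]
print(f"ice front position after one month: {front:.0f} m")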
Sea ice induced seasonal modulation in tidal constituents has been the focus of many recent studies. To further study this seasonal modulation, hydrodynamic tidal models need to include the effects of sea ice on tides in the modelling. Some models couple a 3D tidal model to sea ice models and evaluate the seasonal modulation. But it is not always possible to establish a two-way coupling, especially for 2D hydrodynamic tidal models. Such models can then benefit from an efficient parameterization of the effect of sea ice on tides.
In our previous work, we proposed parameterizations to model the effect of sea ice on tides in our Regional Tidal model for the Canadian Region. These parameterizations include the addition of an ice-water drag and the modelling of the internal stress of drifting sea ice in the hydrodynamic model equations. It was assessed that the sea ice-water drag coefficient and a cumulative sea ice-water viscosity are two sensitive parameters which significantly affect the tidal results. Hence, these parameters need tuning to obtain accurate results.
Here, we aim to simultaneously obtain optimised values of the two parameters using the altimetry-derived seasonal modulation of the M2 tide. We will employ the DUD (Doesn't Use Derivatives) algorithm for this purpose. Finally, the performance of the tuned model is assessed by comparison with the seasonal modulation observed by tide gauges from the Canadian Hydrographic Service and from GESLA3.
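The calibration loop can be sketched generically as below, using SciPy's Nelder-Mead as a derivative-free stand-in for the DUD algorithm (which is not implemented here); the model-run function, the observed modulation values and the starting parameters are placeholders, not the study's actual model or data.

# Generic derivative-free tuning of the two parameters (Nelder-Mead as a stand-in for DUD).
import numpy as np
from scipy.optimize import minimize

obs_modulation = np.array([0.04, 0.06, 0.05])   # placeholder altimetry-derived seasonal M2 modulation

def run_tidal_model(drag_coeff, ice_viscosity):
    # Placeholder for a run of the regional tidal model returning modelled M2 modulation.
    return np.array([0.8 * drag_coeff + 0.0020 * ice_viscosity,
                     1.1 * drag_coeff + 0.0030 * ice_viscosity,
                     0.9 * drag_coeff + 0.0025 * ice_viscosity])

def misfit(params):
    drag_coeff, ice_viscosity = params
    return np.sum((run_tidal_model(drag_coeff, ice_viscosity) - obs_modulation) ** 2)

result = minimize(misfit, x0=[0.01, 10.0], method="Nelder-Mead")
print("tuned drag coefficient and ice-water viscosity:", result.x)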
The Year of Polar Prediction (YOPP) is a Flagship Activity of the WMO’s WWRP Polar Prediction Project, a decadal effort to promote cooperative international research enabling the development of improved weather and environmental prediction services for the polar regions and beyond, on time scales from hourly to seasonal. The YOPP Supersite-Model Intercomparison Project (YOPPsiteMIP) [1] is designed to facilitate process-based validation of numerical weather prediction (NWP) models during the YOPP Special Observing Periods (SOPs). One key component of YOPPsiteMIP is the Merged Observatory Data Files (MODFs) being created for several well-instrumented polar locations. The goal is to assemble data from all the sensors at a given location into a single netCDF file that is as similar as possible to the corresponding model output. MODFs are being designed in collaboration with interested scientists at NWP centers and will contain, to the extent possible, high-resolution observations of the same geophysical variables that will be provided in the model output data files produced by participating NWP centers for those same locations.
How do we ensure that multivariate observational files created by researchers at different institutions are as similar as possible to each other, in terms of nomenclature, metadata, and structure, while also being comparable to model-output files created by scientists at multiple NWP centers? The YOPP Verification Task Team produced a white paper in which appeared a series of tables listing model variables and the measurements against which they could be compared. These tables evolved into a published tool providing guidelines for creating MODFs with consistent variable names and metadata. Dubbed the H-K Table, it is a living document available in both human- and computer-readable formats. It relies on standards and conventions commonly used in the earth sciences, including netCDF encoding with CF Conventions. The prescribed metadata make data provenance clear and encourage proper attribution of the observations. The H-K Table enables observational groups to create modeller-ready MODFs using current requirements and their software of choice and gives NWP partners guidance in creating comparable model output files.
[1] https://www.polarprediction.net/key-yopp-activities/yoppsitemip/
A reliable retrieval of the actual rainfall over a given domain is not an easy task, due to its temporal and spatial variability, but it is of paramount importance for meteorology, hydrology and for the effects on human lives and the environment. Nowadays there are different solutions to measure rainfall, directly or indirectly, beyond the reference raingauge method. An additional problem at present is the change of rainfall regimes at almost every latitude, often with dramatic effects and with a complex connection to climate change that is still largely to be understood.
Among the emerging methods for rainfall estimation, a specific interest lies in the so-called ‘opportunistic’ measurements, because they provide a chance to augment information without adding new infrastructure, with clear cost advantages but generally with larger errors than purpose-designed rainfall measuring systems. Some smart processing effort is therefore needed to extract the maximum geophysical information they can provide. The use of microwave links is among these methods, since links carry information on rainfall rates along their path through the signal attenuation caused by raindrops. Broadcast telecommunication satellite signals can be used for this purpose, although rainfall retrieval poses non-trivial problems, for instance related to the definition of the intercepted precipitation volumes and the inhomogeneity of scatterers along the path. The advantages are their worldwide availability and the ease of data acquisition, which is natively centralised when two-way communication receivers are used. NEFOCAST, a research project funded by the regional administration of Tuscany (Italy), exploited this feature through two-way (transmit-receive) devices named SmartLNB (Smart Low-Noise Block converter), which provide average measurements along quasi-parallel non-nadir paths pointing to an Eutelsat telecommunication geostationary satellite. In order to retrieve ground precipitation, some ancillary information is needed on the structure of the intercepted rainfall system. An experimental network of SmartLNBs was deployed in Italy (namely in Florence, Pisa and Rome), including co-located raingauges and radar measurements, for cal/val purposes.
Regarding the developed rain intensity estimation technique, assuming an ideal homogeneous rain layer, the specific MW signal attenuation (k, expressed in dB/km) is related to the instantaneous rain rate (R, in mm/h) by a power law of the form k=aR^b, where a and b are coefficients that depend on the carrier frequency (typically in the range 10–40 GHz for broadcast satellites) and on the polarization. A given telecommunication satellite in geostationary orbit (about 36 000 km above the equator) is connected with a set of ground receiving terminals (GTs) via a slanted path which intercepts the precipitation. Each GT yields estimates of the received signal-to-noise ratio (SNR) at one sample per minute, or even at higher rates. The start of a rain event produces a sudden drop of the SNR value, which is detected by the algorithm. The rain-induced SNR loss is then evaluated by comparing the current "wet" SNR reading with a reference level relevant to "dry" conditions. An innovative (patented) algorithm exploits the link geometry and a novel tropospheric model to derive the specific rain attenuation and, eventually, the associated rain rate. When the observed "wet" SNR returns to the "dry" reference, the end of the precipitation is declared.
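A hedged sketch of the relations quoted above is given here: the rain-induced SNR loss is the drop of the "wet" SNR below the "dry" reference; dividing by the slant-path length crossing the rain layer gives the specific attenuation k in dB/km, and inverting k=aR^b gives the rain rate. The coefficients and the effective path length are illustrative values only, and the patented algorithm with its tropospheric model is not reproduced.

    import numpy as np

    a, b = 0.023, 1.15            # illustrative power-law coefficients for Ku band
    path_in_rain_km = 4.2         # illustrative slant-path length below the rain height

    def rain_rate_from_snr(snr_wet_db, snr_dry_db):
        attenuation_db = max(snr_dry_db - snr_wet_db, 0.0)   # rain-induced SNR loss
        k = attenuation_db / path_in_rain_km                  # specific attenuation, dB/km
        return (k / a) ** (1.0 / b) if k > 0 else 0.0         # invert k = a R^b

    # Example: a 3.5 dB drop of the SNR corresponds to a moderate rain rate in mm/h.
    print(rain_rate_from_snr(snr_wet_db=9.5, snr_dry_db=13.0))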
The high rate of measurements provided by the SmartLNBs suggested approaching the retrieval of two-dimensional spatialised rainfall maps from along-path averaged rain rates, similarly to a trajectory assessment in a phase space, using an ensemble Kalman filter (EnKF) methodology. These measurements are processed in a spatio-temporal data assimilation framework based on an EnKF, which integrates this peculiar type of observation with a simple storm advection model driven by Atmospheric Motion Vectors (AMV) from Meteosat Second Generation (MSG), while initial and boundary conditions are obtained from the MSG instantaneous rain rate products.
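As an illustration of the assimilation step (not the project's operational code), the following sketch shows a single stochastic EnKF analysis for a gridded rain field observed through path-averaged rain rates, where the observation operator H simply averages the grid cells crossed by each link. The advection model, the MSG-derived conditions and all array sizes are placeholder assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def enkf_update(ensemble, obs, obs_err_std, H):
        """One stochastic EnKF analysis step.
        ensemble : (n_state, n_members) flattened gridded rain-rate fields
        obs      : (n_obs,) path-averaged rain rates from the links
        H        : (n_obs, n_state) along-path averaging operator"""
        n_state, n_members = ensemble.shape
        Xp = ensemble - ensemble.mean(axis=1, keepdims=True)   # state perturbations
        Yp = H @ Xp                                            # observation-space perturbations
        P_xy = Xp @ Yp.T / (n_members - 1)
        P_yy = Yp @ Yp.T / (n_members - 1) + np.diag(np.full(len(obs), obs_err_std ** 2))
        K = P_xy @ np.linalg.inv(P_yy)                         # Kalman gain
        obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std, size=(len(obs), n_members))
        analysis = ensemble + K @ (obs_pert - H @ ensemble)    # perturbed-observation update
        return np.clip(analysis, 0.0, None)                    # rain rates cannot be negative

    # Toy usage: a 10x10 grid, 50 members, 3 links each averaging ten cells.
    ens = rng.gamma(2.0, 2.0, size=(100, 50))
    H = np.zeros((3, 100)); H[0, :10] = 0.1; H[1, 40:50] = 0.1; H[2, 90:] = 0.1
    ens_a = enkf_update(ens, obs=np.array([3.0, 1.0, 6.0]), obs_err_std=0.5, H=H)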
In this work, we present the measurement concept, the signal processing algorithm and the method to retrieve the rainfall fields, applied to some significant synthetic studies and to a real case. The real case consists of measurements from 8 SmartLNBs available in an area of about 1000 km^2 surrounding the city of Dortmund (North Rhine-Westphalia, upper basin of the Emscher river), which were used to obtain gridded rainfall fields at fine temporal (5 minutes) and spatial (1 km) resolution for the heavy rain event of July 13 and 14, 2021. This event was one of the most severe in northern Europe in recent years, causing fatalities and damage. Although the Dortmund area was less severely hit than other areas in North Rhine-Westphalia, it experienced intense rainfall especially on July 14. Only one rain gauge is present in the area, so the availability of measurements from other types of sensors can be very useful to detect spatial patterns and localized phenomena. The resulting maps were compared with the RADOLAN maps (radar-based quantitative precipitation products) provided by the German Meteorological Service. The comparison shows the potential benefit of using the MW-link measurements to complement a sparse raingauge network, enabling a more detailed spatio-temporal reconstruction of the rainfall fields.
We are now aware, however, that this benefit could be dramatically enhanced with some specific software and hardware upgrades of the measuring system. This is the focus of INSIDERAIN, an ongoing project following NEFOCAST, which targets some main upgrades to overcome the main limitations intrinsic to the system architecture that emerged during the first experiments. First of all, measurements of SNR, namely Es/N0, can be collected only when the satellite terminal transmits them to the satellite hub. This means that for satellite terminals used in a non-continuous way (i.e. not communicating over satellite with continuity) these measurements can be missing for long time periods. Another reason for the unavailability of measurement data is intrinsic to heavy rain, which causes service outages. Satellite terminals, in fact, are not able to transmit when they cannot demodulate the signal received from the satellite network. This means that no Es/N0 measurement will be available at the satellite hub when the link is most perturbed (which is precisely the condition we would like to identify). In order to overcome this remarkable limitation, a store-and-forward approach for data collection has been implemented: when the link between the satellite terminal and the hub is not available, signal measurements are collected by the satellite terminal and stored in a local cache, to be transmitted to the hub when the link becomes available again. This new feature considerably extends the range of measurable rain rates, without any other change in the overall network architecture. The most ground-breaking objective of INSIDERAIN, however, is the design and prototyping of a brand new receiver, capable of measuring the rain attenuation affecting several satellite signals simultaneously received from multiple geostationary platforms seen from different directions. This receiver makes use of standard broadcasting satellite transmissions as signal sources, so there is no need to implement a dedicated satellite service. Data are returned to the service centre using any available return link (e.g. LAN, Wi-Fi, 3G/4G/5G, etc.). The receiver performs sequential measurements of the signals transmitted by broadcasting satellites operating in Ku band. The use of a toroidal dual-reflector antenna, capable of hosting up to 16 different Low Noise Block (LNB) converters with a total angular separation of up to 40 degrees, makes this new receiver act as a multi-directional rain-rate probe. In order to receive from multiple satellites, the system uses an electronic switch to select each LNB; the received signal is then measured and delivered to the service centre, which estimates the attenuation and then the rain rate. The new measuring system is capable of detecting precipitation simultaneously from many different directions, thus increasing the measuring reliability and enabling the identification and correction of transient satellite-specific effects. In addition, it enables rain-rate measurements up to a threshold four times higher than with the previous NEFOCAST device.
The same EnKF algorithm is applied to generate rainfall maps from all these different sensors, also including any raingauge measurements deployed in the target domain. It is also capable of accounting for the information contained in the link outages, and the available tests show very promising performance for the overall measuring system, including its capability to address different spatial scales depending on sensor distribution and density.
Extreme weather disasters can deteriorate people’s lives. There is already much research on disaster detection by remote sensing, which benefits from its wide coverage and timeliness. However, electric power outages are still difficult to detect from remote sensing data, because traditional remote sensing cannot acquire radiance information from the Earth at night time. Nowadays, the Day and Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-Orbiting Partnership (Suomi-NPP) and National Oceanic and Atmospheric Administration-20 (NOAA-20) satellites provides a good opportunity to address this problem thanks to its low-light detection capabilities. High spatial and spectral resolution night time images are achievable with its highly sensitive high gain stage (HGS), radiometrically calibrated against its low gain stage (LGS) using the solar diffuser near the terminator. Power outages caused by a disaster can be estimated by using DNB night time data acquired before and after the event. A “bomb cyclone” equivalent to a category 4 hurricane struck Crescent City, California, on October 22, 2021, leading to massive area outages. Based on this catastrophic event, this study explores the use of the DNB in quantifying power outages. In order to obtain accurate results and the outage recovery trend, study-area DNB images were collected for a one-month average prior to the storm and for fifteen days after it, to calculate the regional radiance ratio change between the two periods. After lunar and atmospheric correction, the final result shows a reasonable agreement between the DNB radiance data and the power company recovery survey data.
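The per-pixel ratio described above can be illustrated with the following minimal sketch; the arrays stand in for already lunar- and atmospherically-corrected pre- and post-storm composites, and the 50% threshold is an assumed value, not the one used in the study.

    import numpy as np

    rng = np.random.default_rng(1)
    pre_storm = rng.random((200, 200)) * 50 + 5                 # placeholder mean radiance composite
    post_storm = pre_storm * rng.uniform(0.2, 1.1, size=pre_storm.shape)

    ratio = post_storm / np.clip(pre_storm, 1e-6, None)          # post/pre radiance ratio per pixel
    outage_fraction = np.mean(ratio < 0.5)                       # pixels losing > 50% of their light
    print(f"fraction of pixels with >50% radiance loss: {outage_fraction:.2%}")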
Atmospheric aerosols play a significant role in climate change (direct and indirect effects). Environmental rules, laws and policies have been established around the world to reduce their concentrations and their climate impact as much as possible.
The AERONET network provides computations of the Aerosol Radiative Impact and the Aerosol Radiative Efficiency. Both parameters are derived from measurements acquired during the almucantar procedures and using a radiative transfer code. From these instantaneous radiative impacts/efficiencies, we derived the Aerosol Daily Radiative Impact (ADRI) and Efficiency (ADRE) for a common reference (+45° North latitude, in March) to be able to intercompare the sites. While the amplitude of the ADRI is directly related to how much the aerosol impacts the climate, the ADRE, as a relative parameter, is more related to the intrinsic nature of the aerosol, i.e. its chemical composition, and ultimately to its capability to interact with the climate.
Thus, we decided to focus on the yearly trend of the Aerosol Daily Radiative Efficiency (ADRE) within the atmosphere (ATM) and at the bottom of the atmosphere (BOA). These efficiency trends give an idea of the change in our industrial habits (in terms of aerosol emissions).
To do this, we selected the decade before the COVID-19 pandemic, 2008-2019. From AERONET, we were able to determine the yearly trend for 259 sites over the world. These sites do not provide a complete picture at the global scale, but they give a good indication.
Overall, for our sites, a first study of the Aerosol Optical Thickness shows a decrease of about 1%/year (with a decrease of 2.5%/year of the absorbing part). Nevertheless, the Aerosol Single Scattering Albedo remains stable.
For ADRE_ATM and ADRE_BOA (Figure 1), we found a small decrease of about 0.5-1.0%/year and about 0-0.5%/year respectively, meaning no real change. Fortunately, an analysis by geographical and economic zones shows local trends of the ADREs which can reach a few percent per year (apparently where policies on aerosol emissions apply). Except for these places, the stable yearly trend of the ADREs indicates no significant change in the aerosol type, and therefore in anthropogenic emissions.
Global biogeochemical ocean models are invaluable tools to examine how physical, chemical, and biological processes interact in the ocean. Another indispensable resource is satellite-derived ocean-color properties, which provide observations of the surface ocean with unprecedented coverage and resolution. While the potential for using ocean color products to evaluate the skill of biogeochemical models is significant, there are challenges when comparing model output and analogous satellite products (e.g. chlorophyll-a). Most approaches are based on point-by-point comparisons in space and time, where spuriously large errors can occur from small spatial and temporal mismatches, whereas global statistics provide no information on how well a model resolves processes at regional scales. Here we suggest an alternative methodology for robust comparisons between models and satellite data. The focus is to compare probability density functions of different properties within and between different eco-regions to evaluate how the model resolves physical and biological processes. Differences in distributions of Chl concentration provide information on matches and mismatches between models and observations. In particular, mismatches help isolate regional sources of discrepancy, which can lead to improving both simulations and satellite algorithms. We leverage recent advances where ecosystem models simulate the spectral properties of phytoplankton and remote-sensing reflectances, bringing model outputs closer to what the satellites observe once atmospheric correction has been applied. This use of radiative transfer theory in the model helps mimic remotely-sensed products and facilitates model-data comparisons of optical properties of the ocean. Finally, we use Earth Mover's Distance (EMD) to quantify the differences in probability distributions and show that EMD can be an effective scalar measure for summarizing the full distributional difference between two data sources.
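As an illustration, the distributional comparison within a single eco-region can be sketched as below with the one-dimensional Wasserstein distance (the EMD for scalar properties). The chlorophyll samples are placeholders, and working in log10(Chl) is an assumed choice reflecting the roughly log-normal behaviour of chlorophyll, not necessarily the convention used in the study.

    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(2)
    # Placeholder chlorophyll-a samples (mg m-3) within one eco-region.
    chl_satellite = rng.lognormal(mean=-1.2, sigma=0.6, size=5000)
    chl_model = rng.lognormal(mean=-1.0, sigma=0.4, size=5000)

    # EMD between the two distributions, computed on log10-transformed values.
    emd = wasserstein_distance(np.log10(chl_satellite), np.log10(chl_model))
    print(f"Earth Mover's Distance (log10 Chl): {emd:.3f}")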
Weather forecasts support society in multiple ways, from resource management to minimizing hazards related to high impact weather. Accurate predictions can help to reduce impacts across various sectors especially during extreme events and, more importantly, they can save lives. The quality of weather forecasts has improved considerably in recent decades as models have become more complex to represent more processes and to assimilate more comprehensive Earth observation data. This complexity presents a challenge for pinpointing weaknesses in the forecast model’s process representations, which is needed to support a continuation of the trend towards improving forecast accuracy. In this study, we used a comprehensive set of observation-based ecological, hydrological and meteorological variables to study their potential for explaining the forecast errors in temperature at two meters from the ECMWF S2S forecast dataset. For this purpose, we computed Spearman correlations between each considered variable and the forecast error across the globe. The results suggest that circulation-related variables such as wind and pressure differences are most strongly related to forecast errors, indicating that a better representation of these processes and variables in the forecast model has most potential to improve temperature forecasts across the globe. At the same time, we found particular regions and seasons in which variables are more strongly related to forecast errors, for instance: i) during the growing season in Central Europe, central Africa and Northern South America, the vegetation state and soil moisture variables are relevant, and ii) meteorological variables such as solar radiation, precipitation and sea surface temperature are relevant in Asia and Eastern Europe during boreal summer and autumn. Additionally, we found that the variables' anomalies showed an increased potential for improving the predictions at longer forecast lead times, whereas the absolute values showed a decreased potential. This highlights that information on ecohydrology beyond the mean seasonal cycle can be informative for temperature forecasts. Our findings of variables related to forecast errors can inform the development of forecast models and respective data assimilation, particularly as most of the considered variables are available at near real-time.
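A minimal sketch of the per-grid-cell Spearman correlation described above is shown here; the time series of forecast errors and of the candidate explanatory variable are random placeholders, and the grid dimensions are assumptions for illustration.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(3)
    n_time, n_lat, n_lon = 520, 10, 10                  # e.g. weekly forecasts on a coarse grid
    forecast_error = rng.normal(size=(n_time, n_lat, n_lon))   # T2m forecast minus observation
    soil_moisture = rng.normal(size=(n_time, n_lat, n_lon))    # one candidate explanatory variable

    rho = np.full((n_lat, n_lon), np.nan)
    for i in range(n_lat):
        for j in range(n_lon):
            # Rank correlation between the variable and the forecast error at this grid cell.
            rho[i, j], _ = spearmanr(soil_moisture[:, i, j], forecast_error[:, i, j])
    print("median |rho| across grid cells:", np.nanmedian(np.abs(rho)))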
The Far-Infrared Outgoing Radiation and Monitoring (FORUM) mission will provide an unprecedented opportunity to verify and potentially improve the ability of global circulation models (GCM) in simulating the Outgoing Longwave Radiation (OLR). Accurate simulation of the OLR is in fact crucial to better constrain the different radiative feedbacks. Planned for launch in 2027, FORUM will measure spectrally resolved radiances of the Earth’s emission spectrum at the top of the atmosphere (TOA) from 100 to 1600 cm-1 filling the existing observational gap of the far-infrared region, from 100 to 667 cm-1. In addition, FORUM will fly in loose formation with IASI-NG, which will continue to cover the middle infrared range of IASI from 675 to 2760 cm-1.
In anticipation of FORUM measurements, we aim at comparing existing IASI observations to synthetic radiances extracted from the EC-Earth GCM (version 3.3.2), a recent European model based on ECMWF’s Integrated Forecasting System (IFS) for the atmosphere–land component and the ocean model NEMO, including sea ice (LIM2) and land surface components (1).
Tuning operations linked to the sub-grid physical parametrizations in climate models guarantee a very good global agreement between simulated and observed (CERES-EBAF 4.1) OLR broadband fluxes, but a study limited to energy fluxes integrated over the whole Earth emission spectrum makes the detection of biases and the identification of potential spectral compensation errors in climate models difficult. Conversely, comparing simulated to observed spectra allows pointing out potential model criticalities in given spectral bands which contain the signatures of specific climate variables.
In order to extract simulated spectra from the climate model, EC-Earth has been used along with the CFMIP (Cloud Feedback Model Intercomparison Project) Observation Simulator Package (COSP), a simulator package able to map the model state into synthetic observations from different satellite-borne active (CloudSat (radar) and CALIPSO (lidar)) and passive (ISCCP, MISR and MODIS) sensors (2). We have further developed the package by implementing inside COSP the radiative transfer model σ-FORUM, a monochromatic code able to reproduce synthetic radiances in the Far-Infrared and Mid-Infrared regions compatible with future FORUM and existing IASI observations (3).
Due to the high computational cost of the operation, the efficiency of the EC-Earth model equipped with the new COSP module has been improved by modifying the original σ-FORUM code structure and by reducing the original resolution of the model.
Therefore, on-line simulations provided by EC-Earth equipped with the new COSP + σ-FORUM module have been performed in clear-sky conditions with prescribed sea surface temperature and sea-ice cover every 6 hours, over a timeframe consistent with the availability of IASI data.
Meanwhile, the IASI clear-sky radiance climatology has been built starting from the METOP-A L1C data provided by EUMETSAT through the European Weather Cloud (EWC) infrastructure.
Systematic comparisons between observational data and model outputs have been performed over spectral bands of 10 cm-1 on a global and regional scale, distinguishing the surface types (land, sea) of the emitted radiances, in order to attribute the existing biases in different spectral bands to specific climate variables. The long-term analysis shows a warm bias of the climate model in the CO2 absorption band, which represents strong evidence of a model bias in the upper troposphere and stratosphere. Moreover, a warm bias in the roto-vibrational water vapour bands and a cold bias in the atmospheric window over land occur in the model, suggesting the existence of spectral compensation errors in the computation of the broadband flux.
The comparison between nadir radiances simulated by the EC-Earth climate model and the climatology built from ten years of IASI observations represents a very reliable test for the direct verification and improvement of the GCM.
The same approach could be extended to other climate models and, in the near future, it will involve FORUM measurements for a comprehensive analysis of the ability of climate models to reproduce the whole Earth emission spectrum.
References
1. Hazeleger, Wilco, et al. "EC-Earth: a seamless earth-system prediction approach in action." Bulletin of the American Meteorological Society 91.10 (2010): 1357-1364.
2. Bodas-Salcedo, Alejandro, et al. "COSP: Satellite simulation software for model assessment." Bulletin of the American Meteorological Society 92.8 (2011): 1023-1043.
3. Amato, Umberto, et al. "Corrigendum to: "The sigma-IASI code for the calculation of infrared atmospheric radiance and its derivatives" [Environmental Modelling & Software (2002) 17 (7) 651-667]." Environmental Modelling and Software 18.1 (2003): 97
Land-use and land-cover changes (LULCC) are an important human forcing on climate, especially at the regional to local scale. However, annual LULCC have not yet been accounted for in coordinated downscaling experiments with regional climate models (RCMs) for generating ensembles of high-resolution climate change simulations for historical and future time periods. Within the framework of the WCRP CORDEX Flagship Pilot Study Land Use and Climate Across Scales (FPS LUCAS), the LULCC dataset LUCAS LUC (Hoffmann et al. 2021) has been generated at 0.1° resolution for Europe by combining the land cover from the European Space Agency Climate Change Initiative (ESA CCI LC) with the reconstructed and projected annual land use changes of the Land Use Harmonized data set version 2 (LUH2). The LUCAS LUC dataset is tailored towards the needs of the RCM community and consists of annual plant functional type (PFT) maps from 1950 to 2015 (historical) and from 2016 to 2100 (multiple SSP/RCP scenarios). Within FPS LUCAS, it will be employed for the downscaling of CMIP6 projections with an ensemble of RCMs coupled to different land surface models (LSM).
Since each model has a different workflow of implementing land use and land cover information (e.g. various classifications or different treatment of multiple land cover fractions within model grid-cells) it is challenging to assure a consistent conversion of the LUCAS LUC time series into the individual model land surface schemes. Hence, we will present the workflow of implementing the LUCAS LUC dataset into a range of RCM-LSMs and provide recommendations for the use of LUCAS LUC in RCM experiments. In addition, we investigate the uncertainty due to the conversion into model specific input by analyzing a set of short-term test simulations for the EURO-CORDEX domain (EUR-11) with respect to land cover parameters and selected soil and near-surface atmospheric variables.
Hoffmann, P., Reinhart, V., Rechid, D., de Noblet-Ducoudré, N., Davin, E. L., Asmus, C., Bechtel, B., Böhner, J., Katragkou, E., and Luyssaert, S.: High-resolution land-use land-cover change data for regional climate modelling applications over Europe – Part 2: Historical and future changes, Earth Syst. Sci. Data Discuss. [preprint], https://doi.org/10.5194/essd-2021-252, in review, 2021.
Observations of clouds from satellites now span a period of more than 40 years. The Pathfinder Atmospheres–Extended (PATMOS-x) data set derived from the Advanced Very High Resolution Radiometer (AVHRR) aims to build a consistent data archive to detect climate trends in cloud property characteristics.
The Global Energy and Water Cycle Experiment (GEWEX) Cloud Climatology Assessment evaluated the sensitivities of multiple global satellite cloud records, allowing for more meaningful inter-comparisons and use with models (Stubenrauch et al. 2013). Using average values of cloud properties, such as cloud optical thickness (COD) or cloud top pressure (CTP), often in connection with static thresholds, does not give robust long-term trends due to the non-Gaussian behavior of these properties and other problems, such as saturation effects of thick clouds.
Weather states refer to typical weather regimes, such as low-level liquid clouds, mesoscale convection, or multi-layer situations.
Following the work by Jakob and Tselioudis (2003) on 2D COD-CTP histograms, we performed a k-means cluster analysis to find typical weather states based on 3D cloud observations. The 3D model (“cloud cube”) extends the 2D model by also including cloud particle size (CPS). This approach provides a more complete analysis of the cloud climatology and includes all relevant cloud parameters, including cloud water content.
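A minimal sketch of such a clustering of 3D joint histograms is given below. The histogram bin counts, the number of scenes and the choice of nine clusters are illustrative assumptions; in practice the histograms would come from the PATMOS-x record.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    # Placeholder data: each sample is a flattened 3D joint histogram of cloud
    # optical depth, cloud-top pressure and cloud particle size (7 x 7 x 3 bins),
    # normalised so that the bins sum to one for each scene.
    n_scenes, n_cod, n_ctp, n_cps = 20000, 7, 7, 3
    hists = rng.dirichlet(np.ones(n_cod * n_ctp * n_cps), size=n_scenes)

    kmeans = KMeans(n_clusters=9, n_init=10, random_state=0).fit(hists)
    weather_state = kmeans.labels_                                   # weather-state index per scene
    centroids = kmeans.cluster_centers_.reshape(-1, n_cod, n_ctp, n_cps)  # mean "cloud cube" per state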
We will present a global survey of 35 years of 3D weather state data with global distribution and climate trends. We will also discuss practical issues such as processing time or amount of data.
Seasonal lake ice is sensitive to temperature fluctuations and long-term temperature trends. It is therefore a good indicator of climate warming, which will likely have dramatic impacts on lake ice phenology in northern latitudes. Beside the climate change aspect, lake ice data are important regarding transport and safety issues as well as for numerical weather prediction. In addition, changes in ice cover affect the ecology of the lake and water quality.
A new method, ICEmod, for assessing lake ice extent (LIE) using optical satellite data was developed at the Finnish Environment Institute. The method is based on multidimensional Gaussian distributions calculated for training data using several reflectance and thermal bands and their related indices. Gaussian mixture modelling is an unsupervised learning technique, which requires no data-associated information on the classes in the training phase. However, there is an option for setting the number of resulting classes. In practice this means that the training dataset for lake ice detection must be quite comprehensive, including different types of ice (also snow-covered), water, and clouds, but it is not necessary to know at this stage which class a certain pixel represents. Importantly, ICEmod includes cloud detection as a part of classification, removing the need for separate cloud masking. It is also computationally effective, serving the purpose of providing daily global coverage. ICEmod classifies, on a pixel basis, inland/freshwater bodies as (1) Ice, (2) Open water, or (3) Cloud. For each pixel, the classifier makes the decision by estimating which class has the highest probability. This probability is also given for each class to describe the uncertainty of the classification.
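For illustration only, a Gaussian mixture classification of this kind can be sketched as follows; the feature set, the number of mixture components and the random training matrix are assumptions, and the subsequent assignment of components to the Ice, Open water and Cloud classes (done by inspection after training) is not shown.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    # Placeholder training matrix: one row per pixel, columns = selected SLSTR
    # reflectances, brightness temperatures, NDSI and NDWI (feature choice assumed).
    X_train = rng.normal(size=(50000, 8))

    # Unsupervised training: no class labels are needed, only the number of components.
    gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0).fit(X_train)

    # Classification of new pixels: the component with the highest posterior wins,
    # and the posterior probability itself serves as the per-pixel uncertainty.
    X_scene = rng.normal(size=(1000, 8))
    posteriors = gmm.predict_proba(X_scene)          # shape (n_pixels, n_components)
    labels = posteriors.argmax(axis=1)
    confidence = posteriors.max(axis=1)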
The satellite data utilized for the 0.005° lake ice classification method introduced here consist of pre-processed top-of-atmosphere (TOA) reflectance and thermal brightness temperature data from the Sentinel-3 A/B (S3) SLSTR instrument (Level-1b). In addition to optical/thermal bands, the Normalized Difference Snow Index (NDSI) and the Normalized Difference Water Index (NDWI) are exploited. Thermal spectral bands with a native grid spacing of 0.01° are resampled to 0.005° to have consistent data for the classification algorithm. The pre-processed S3 SLSTR data are filtered, rejecting data with sun elevation < 17° and sensor view angle > 45°. The former is considered necessary due to the potential disturbing effect of low sun elevation on the SLSTR data (including the lack of light during polar darkness) and the latter is used to exclude mixed land/water pixels at the image edges with larger sensor footprints.
The SLSTR data set as described above was split into two temporally and spatially independent data sets: one data set was used as training data in the method development. The second data set was used to generate the LIE retrievals used for validation. The SLSTR data are combined with a land/water mask and the calculation is performed only for lake pixels. A special land mask was generated for the exclusion of mixed land/water pixels. The land mask needs to be as accurate as possible and slightly buffered to avoid misinterpretations of shoreline pixels and of shallow waters with vegetation.
We will present the methodology and validation results for the LIE retrievals. LIE was validated against lake ice extent maps generated from high-resolution Sentinel-2 MultiSpectral Instrument (MSI) images covering lakes in different parts of the Northern Hemisphere, most of them representing both open water and ice pixels. The overall reference dataset consists of 118 predominantly cloud-free S2 MSI images; these were used for comparisons against LIE retrievals for ~ 100 lakes. The number of compared pixels is about half a million.
The overall classification accuracy obtained is 96%. The omission error for ice is 6%, and omission mainly occurs in lakes where the ice is very dark and fragmented. When a lake is completely ice-covered or ice-free, the overall accuracy is 100%.
This novel method is applied in the provision of the 0.005° Lake Ice Extent product (LIE-NH) for the Northern Hemisphere under the Copernicus Global Land Service, a component of the Copernicus Land Monitoring Service. Here, we also introduce the Lake Ice Extent products provided in the frame of a near real-time service since April 2021.
According to a tentative qualitative comparison with the European Space Agency Climate Change Initiative Lake Ice Cover (LIC) product, LIE-NH based on ICEmod may provide more accurate lake ice information, and it also provides lake ice data for more than 13 000 lakes, whereas LIC covers 250 lakes. This comparison was made for 50 lakes, with the aim of gaining a basic understanding of the differences between the two products.
The interest in bistatic SAR systems for soil moisture monitoring has grown over recent years, since theoretical studies suggest that the impact of surface roughness on the retrieval of soil moisture decreases due to the simultaneous use of mono- and bistatic radar measurements. In the research presented, we evaluate the potential of monostatic and bistatic radar data to estimate soil moisture over bare agricultural fields based on experimental SAR observations in L-band, whereby the soil moisture retrieval performance obtained from monostatic SAR data is compared with that obtained using monostatic and bistatic data simultaneously, the so-called multistatic case. For this purpose, two scattering models are evaluated, i.e. the Oh model (Oh et al., 1992) and the Advanced Integral Equation model (AIEM) (Chen et al., 2003; Wu & Chen, 2004). This work has been accomplished in the frame of the ESA-BELSPO funded project BELSAR-Science.
We present a semi-empirical method to retrieve soil moisture over bare agricultural fields, based on effective roughness modelling, and apply it to a series of L-band fully polarized SAR backscatter and bistatic scattering observations. The main advantage of using effective roughness parameters is that surface roughness no longer needs to be measured in the field, which is known to be the main source of error in soil moisture retrieval applications. SAR observations are first used to calibrate roughness parameters that can then be used in an inversion scheme to retrieve soil moisture with higher accuracy. By means of cross-validation, it is shown that the proposed method results in accurate soil moisture retrieval with errors well below 0.05 m3/m3.
Different experimental SAR monostatic and bistatic configurations are evaluated in this study for soil moisture retrieval over bare soils, making use of the Oh and AIEM scattering models. Results illustrate that including the cross-polarization scattering coefficient in the retrieval process is recommended in order to increase soil moisture retrieval performance. In addition, it was found that slightly better soil moisture retrieval results are obtained with the physically-based AIEM model compared to the semi-empirical Oh model.
Furthermore, the retrieval performance of a multistatic system has been evaluated and compared to that of a traditional monostatic system. The recent BELSAR campaign (in 2018) provides time-series of experimental airborne SAR measurements in two bistatic geometries, i.e. the across-track (XTI) and along-track (ATI) flight configuration. The bistatic angles between transmitting (SAR) and receiving (BISAR) antennas are very small (0.6° and 9° for the XTI and ATI configuration respectively). Furthermore, both sensors are left looking, so that bistatic scattering observations are only available in the backward region close to the incidence plane, and not in the theoretically more promising forward region. The results show that the simultaneous use of backscatter and bistatic scattering data does not result in a profound increase in retrieval performance for the bistatic configuration flown during BELSAR 2018. As theoretical studies demonstrate a strong improvement in retrieval performance when using backscatter and bistatic scattering coefficients in the orthogonal direction simultaneously, the introduction of additional bistatic airborne campaigns with more promising active-passive SAR configurations (i.e. bistatic scattering in the forward region with a special focus on the orthogonal and specular direction) is highly recommended.
We can conclude that the current method, using scattering models with modelled effective roughness parameters, already performs well, especially when combining radar observations from multiple polarizations. Further improvements are expected with bistatic observations from different configurations, but this needs to be verified.
Evapotranspiration (ET) is an essential parameter for the assessment of the terrestrial water balance and a better understanding of water related climate change dynamics. In this research study we focus on the estimation of ET over forested areas in central Germany (Thuringia) by SAR (Sentinel-1) remote sensing. ET can be modelled and determined at different scales, starting from small-scale in-situ measurements up to global estimation e.g., using remote sensing methods and products.
In this study, we compare independent time series estimates of ET from two different data sources and products with the C-band SAR backscattered signal of the Copernicus Sentinel-1 mission and investigate the capacity of SAR remote sensing to assess seasonal changes in ET. In particular, we investigate ET estimates based on (1) in-situ meteorological data and the FAO-based Penman-Monteith equation and (2) the well-established global terrestrial ET product from the Terra and Aqua MODIS sensors [1,2].
The analysis has been performed over an area of 2452 km² situated in central Germany, considering a five-year time series from summer 2016 to summer 2021. Meteorological data from four local meteorological stations were used for the estimation of in-situ ET, using the FAO-modified version of the Penman-Monteith equation in order to consider only the parameters available at the stations, i.e., global radiation, mean temperature, minimum temperature, wind speed and elevation [1]. The MODIS product represents an average ET over a 500 m × 500 m area for each data point over an observation period of eight days, calculated from land cover and vegetation parameters (LAI, FPAR) along with meteorological data [2].
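For illustration, a reduced-input FAO-56 Penman-Monteith calculation of this kind could look like the sketch below. It follows the standard FAO-56 formulation [1] but with two simplifications that are assumptions of this sketch, not necessarily those of the study: actual vapour pressure is approximated from the minimum temperature (the FAO missing-data option), and net radiation is approximated from global radiation with an albedo of 0.23, omitting the net longwave term.

    import numpy as np

    def fao56_reference_et(rs_mj, t_mean, t_min, u2, z):
        """Daily reference ET (mm/day) from global radiation [MJ m-2 day-1],
        mean and minimum air temperature [deg C], 2-m wind speed [m/s] and
        station elevation [m], following a reduced FAO-56 Penman-Monteith."""
        svp = lambda t: 0.6108 * np.exp(17.27 * t / (t + 237.3))   # saturation vapour pressure, kPa
        es = svp(t_mean)
        ea = svp(t_min)                      # humidity proxy when no RH data are available
        delta = 4098.0 * svp(t_mean) / (t_mean + 237.3) ** 2       # slope of the vapour pressure curve
        p = 101.3 * ((293.0 - 0.0065 * z) / 293.0) ** 5.26         # air pressure, kPa
        gamma = 0.000665 * p                                       # psychrometric constant
        rn = (1.0 - 0.23) * rs_mj            # simplified net radiation (longwave term omitted)
        g = 0.0                              # soil heat flux negligible at the daily scale
        num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
        return num / (delta + gamma * (1.0 + 0.34 * u2))

    # Example with arbitrary station values; result is a few mm/day.
    print(fao56_reference_et(rs_mj=20.0, t_mean=18.0, t_min=9.0, u2=2.0, z=350.0))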
Both derived ET time series were examined at the station locations and compared with Sentinel-1 time series over coniferous forests situated within a radius of 5 km around each station. The SAR time series within the coniferous forests are characterized by a seasonal pattern showing higher backscatter during summer and lower backscatter during winter (difference of about 1.5 dB). Although the resolutions of the different products are very different, we observed a strong temporal correlation (concurrent dynamics) between all ET estimates and the C-band backscattered signal: the ET of the coniferous forest is higher in summer than in winter (difference of about 8 mm/day, or 300 kg/m²/8day). The correlation between in-situ ET estimates at the ground stations and the Sentinel-1 signal amounts to R²>0.7 after temporal smoothing, and the correlation between MODIS ET estimates and the backscatter signal of the coniferous forest results in an R² of 0.8. These results suggest that the temporal SAR backscatter signal is sensitive to the yearly ET cycle in the investigated coniferous forest, considering that no seasonal structural changes happen in coniferous areas.
Further investigations are under way to better characterize the relationship between ET and the SAR signal, considering other forest types.
[1] FAO. (2019). FAO Penman-Monteith equation: Equation. Food and Agriculture Organization of the United Nations. https://www.fao.org/3/x0490e/x0490e06.htm#equation (last access 19.11.2021)
[2] Running, S. W., Mu, Q., & Zhao, M. (2017). MOD16A2 MODIS/Terra Net Evapotranspiration 8-Day L4 Global 500m SIN Grid V006. https://doi.org/10.5067/MODIS/MOD16A2.006
The rate at which land surface soils dry following rain events is an important feature of the climate system. Surface soil moisture (SSM) drydowns, i.e. the soil moisture temporal dynamics following a significant rainfall event, play a crucial role in determining the surface water budget. In particular, they influence the partitioning between runoff, drainage, and evaporation. They are also important when predicting the water availability for vegetation, and the occurrences of droughts and heatwaves. As such, improved understanding and characterization of the drivers of SSM drydowns will give fundamental and combined insight into the coupling between the carbon, water, and energy cycles. Furthermore, the associated variations of land surface temperature (LST) during drydowns can be used to understand evapotranspiration rates and calibrate the surface-soil thermal properties.
Within the Climate Change initiative (cci), efforts are ongoing by the soil moisture and LST communities to collate the growing amounts of Earth observation data into user-friendly global products of high temporal and spatial resolutions (SM_cci and LST_cci, respectively). These products provide a unique opportunity to evaluate and calibrate land surface models used to represent the terrestrial part of wider Earth system models. In this presentation, we demonstrate how LST_cci and SM_cci can be used in synergy to improve key model parameters through data assimilation techniques and hence improve the representation of SSM drydowns in the model.
Using the ORCHIDEE land surface model, we first show how the SM_cci and LST_cci products can be used to identify sensitive model parameters. We then test the complementarity of both data streams by assimilating them both individually and simultaneously, helping us identify the information content brought by each data stream. The optimizations are performed using ORCHIDAS, the Bayesian data assimilation framework set up around ORCHIDEE. We conclude by evaluating the optimized model’s ability to simulate drydowns by confronting it with independent data not used in the calibration.
In the context of global warming and rapid changes of land use driven by human economic activities, it is fundamental to be able to accurately estimate and understand trends of key variables such as the sensible (H) and latent (LE) heat fluxes. The sensible heat flux represents the amount of energy transferred by convection and/or conduction from the surface to the atmosphere. The amount of energy and water consumed by evaporation corresponds to the latent heat flux and the evapotranspiration process. By materializing the exchange of water and energy from the Earth surface to the atmosphere, the latent and sensible heat fluxes control the development of the planetary boundary layer and govern land-atmosphere interactions (Michel et al., 2016; Behrendt et al., 2019). They play a major role in the hydrological cycle (Oki et al., 2006), the carbon cycle (Sellers et al., 1997) and the surface energy balance (Trenberth et al., 2009). Various applications such as water resource management, agricultural planning, weather forecasting and drought/flood detection are thus possible thanks to their estimation (Fisher, 2017; Liou et al., 2014 and references therein). For instance, monitoring of H and LE allows the detection of desertification, monsoon circulation and climate change (e.g. Yang et al., 2009; Wang and Li 2011; Shan et al., 2015). Therefore, obtaining homogeneous long-term time series (at least a few decades) and catching both long-term and natural or human-induced short-scale trends of the turbulent heat fluxes is crucial. In this frame, the Satellite Application Facility on Climate Monitoring (CM SAF) of EUMETSAT develops archives of satellite-derived products to support the understanding of the climate. During the Third Continuous Development and Operations Phase (CDOP-3), the CM SAF is extending its product portfolio with a Thematic Climate Data Record (TCDR). The Regional Land Fluxes TCDR will provide, over a period of almost 40 years (1983-2020), various parameters depicting the surface states and radiation fluxes, including the Surface Radiation Balance (SRB), the Cloud Fractional Cover (CFC), the Land Surface Temperature (LST), the Evapotranspiration (ET), and the Latent (LE) and Sensible (H) Heat Fluxes. For this purpose, an adapted version of the methodology developed by the Land Surface Analysis (LSA) SAF (Ghilain et al., 2011, 2019) has been used. The latter has been adapted from the Tiled ECMWF (European Centre for Medium-Range Weather Forecasts) Scheme for Surface Exchanges over Land (TESSEL) model (Van den Hurk et al., 2000), allowing the use of satellite-based data and numerical weather prediction (NWP) model outputs (ECMWF reanalysis) as forcing. Observations from the Meteosat Visible and InfraRed Imager (MVIRI) and the Spinning Enhanced Visible and Infrared Imager (SEVIRI), onboard Meteosat First and Second Generation (MFG and MSG) respectively, are used as inputs: all radiation components - including the Surface Incoming Solar radiation (SIS), the Surface Albedo (SAL), the Land Surface Temperature (LST), and the Surface Downward Longwave radiation (SDL) - are jointly retrieved using the CM SAF software “GeoSatClim”. In addition, soil moisture and land cover CDRs from the European Space Agency (Dorigo et al., 2017 and Bontemps et al., 2012, respectively) and the Leaf Area Index (LAI) from the GLOBMAP dataset (Liu et al., 2012, 2017) are also used as inputs. The data are provided hourly at a spatial resolution of 0.05 degrees (i.e. about 5.5 km) and cover the Meteosat disk (60°N–60°S and 60°W–60°E). The development stage and preliminary validation results of this new climate data record will be presented.
Evaporation is one of the terms of the water budget equation and contributes to describing the water cycle. With the current trend of rising air and water temperatures all over the world, evaporation is also expected to increase and to contribute to the depletion of usable water. Controlling and monitoring evaporation rates from water surfaces therefore becomes of primary importance for evaluating the availability of this vital and increasingly scarce resource. One of the most efficient ways to monitor a target over time with recurrent and comparable observations is the use of Earth Observation (EO) products. That is why we propose an EO-based model for the retrieval of evaporation fluxes from lake surfaces (EO-LSEv). The model exploits Lake Surface Water Temperatures (LSWT) derived from satellite acquisitions and combines them with meteorological variables (i.e. air temperature, wind speed, relative humidity of air) following the bulk transfer theory (Dalton’s law). The EO-LSEv model outputs are instantaneous (i.e. at satellite overpass) and daily evaporation maps, which in turn can be used to calculate the water volume lost by water bodies due to evaporation.
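A minimal sketch of a Dalton-type bulk transfer estimate is shown below for illustration. The linear wind function and its coefficients are illustrative assumptions only; the EO-LSEv model uses its own transfer formulation and calibration.

    import numpy as np

    def lake_evaporation_mm_day(t_water, t_air, rh, u2, a=1.0, b=1.4):
        """Dalton-type sketch: E = f(u) * (e_s(Tw) - e_a), with f(u) = a + b*u2.
        t_water: satellite LSWT [deg C], t_air: air temperature [deg C],
        rh: relative humidity [%], u2: wind speed [m/s]; returns mm/day."""
        svp = lambda t: 0.6108 * np.exp(17.27 * t / (t + 237.3))   # kPa
        e_s = svp(t_water)              # saturation vapour pressure at the water surface
        e_a = rh / 100.0 * svp(t_air)   # actual vapour pressure of the overlying air
        return max((a + b * u2) * (e_s - e_a), 0.0)

    # Example: warm lake surface under relatively dry air gives a few mm/day.
    print(lake_evaporation_mm_day(t_water=22.0, t_air=25.0, rh=55.0, u2=3.0))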
The EO-LSEv model was first tested on a well-known study site (i.e. Lake Garda, Italy) where the ESA CCI Lakes dataset is available for a long period of time. Here we propose an application of the EO-LSEv model performed within the context of the HYdro-POwer-Suite (HYPOS) H2020 Project. HYPOS intends to support hydropower industries in their planning, monitoring and assessment tasks with easy and cost-efficient access to data and through a Decision Support Tool (DST). The implementation of the EO-LSEv model within the DST will allow the estimation of the volume of water evaporated from hydropower reservoirs. Such information can help dam managers to adjust water withdrawal according to both people's needs and water losses due to evaporation. In this way, water can be managed more efficiently, maximizing productivity and minimizing water waste. In addition, the evaporated water volume, combined with plant energy production data, can be used for the estimation of the “Blue Water Footprint” (BWF) of the system. The BWF gives an estimate of the efficiency of a hydropower plant. Since the BWF can be calculated at different temporal scales, these data can theoretically also be used by water managers to follow the monthly variations of the BWF of the plant and be more aware of water consumption. We present the estimation of the evaporation rates and the BWF for two study sites, the Banja reservoir in Albania and the Enguri reservoir in Georgia, for the period 2019-2020. The results allow assessing the temporal variability of evaporated water volumes over the two years, the differences between the evaporation volumes of the two reservoirs, and the differences in the BWF between the two hydropower plants.
Fresh water is an essential resource for both society and environment. The significant stress on this resource is constantly increasing. Monitoring inland water stocks is thus a political, environmental and economical challenge. The use of altimetry to measure water levels is a well-established technique that has evolved over the past three decades. With the launch of CryoSat-2 in 2010, carrying the first Synthetic Aperture Radar (SAR) altimeter, an along-track sampling of 300 m has allowed accurate measurement of smaller targets, and the dense ground track sampling has allowed substantially more lakes and rivers to be monitored in comparison to previous missions.
Outside the altimetry community, hydrologists benefit from along-track altimeter products, which are used to estimate river discharge and to support hydraulic modelling through data ingestion, parameter calibration and flow validation along the water course. Nevertheless, the current along-track products are not easy to use for non-expert users and require a detailed understanding of the altimetry system.
An activity called Cryo-TEMPO (CryoSat-2 ThEMatic PrOducts) was launched by ESA to better exploit the CryoSat-2 measurements. The overarching aim of Cryo-TEMPO is to develop agile, robust and state-of-the-art CryoSat-2 products, which are dedicated to specific Thematic Areas, and which are accessible to – and can easily be used by – a broad range of scientific and service users, beyond the traditional altimetry experts. To achieve this aim, the main technical objectives of the study are as follows:
To implement dedicated, state-of-the-art processing algorithms over each thematic domain.
To develop agile, adaptable processing workflows that are capable of rapid evolution and processing at high cadence.
To create products that are driven by, and aligned with, user needs; thereby opening up the data to new communities of non-altimetry experts.
To deliver transparent, traceable uncertainties associated with each thematic parameter.
Thus, as part of the ESA Cryo-TEMPO Project (CryoSat-2 ThEMatic PrOducts), the inland water Thematic Data Product (TDP) is designed to be a state-of-the-art geophysical-level (L2-type) product for users outside the altimetry community, and it will include water levels based on the most appropriate retracker. In this poster, we present the retrackers that will be analysed during the first phase of the project (MLE4, OCOG and TFMRA) over different water bodies, such as rivers and lakes of different sizes and environments, during a 10-year period and considering the three different acquisition modes of the CryoSat-2 radar instrument (LRM, SAR and SARIn). Results of the validation phase are also presented, referring to the comparison against ground-recorded water levels for some stations over rivers and lakes.
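For readers outside the altimetry community, the idea behind one of the candidate retrackers (OCOG) can be sketched as follows. This is a simplified, illustrative implementation with a toy waveform; the operational Cryo-TEMPO processing includes additional windowing, thresholding and quality control that are not reproduced here.

    import numpy as np

    def ocog_retracking_gate(waveform):
        """Simplified OCOG (Offset Centre Of Gravity) sketch: the retracking
        gate is the centre of gravity of the squared waveform power minus half
        the OCOG width."""
        p2 = np.asarray(waveform, dtype=float) ** 2
        gates = np.arange(len(p2))
        cog = np.sum(gates * p2) / np.sum(p2)          # centre of gravity
        width = np.sum(p2) ** 2 / np.sum(p2 ** 2)      # OCOG width
        return cog - width / 2.0                       # leading-edge (retracking) gate

    # Toy waveform with a leading edge around gate 40 followed by a decaying trail.
    wf = np.concatenate([np.zeros(35),
                         np.linspace(0.0, 1.0, 10),
                         np.exp(-np.arange(55) / 60.0)])
    print(ocog_retracking_gate(wf))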
The accurate monitoring of surface water storage, an important component of land surface models, requires a realistic representation of the river network and of channel characteristics such as width and depth. Since such observations have not been available for many river basins and are becoming less available even in many gauged rivers, crucial questions about the spatio-temporal dynamics of freshwater in the river network cannot be answered properly.
The global coverage and fine temporal resolution of the satellite images provide the opportunity to generate dynamic river masks on a global scale for almost all river basins. However, due to the complexity of extracting the river masks from the satellite images, no dataset specifically for dynamic river masks has been developed yet. In this regard, the efforts have been limited to the development of static river masks extracted from long-term averaged satellite images.
The Global Surface Water (GSW) dataset developed by the European Commission’s Joint Research Centre presents unique monthly records of surface water extent with global coverage from 1985–2020, using the entire archive of Landsat 5, 7, and 8 imagery. Although a sophisticated classification algorithm has been designed to generate the GSW monthly maps, various sources of contamination deteriorate the accuracy of the surface water extent extracted from the raw GSW maps. In the case of rivers, the complexity of river reach morphology may also be a barrier to obtaining high-quality dynamic river masks from the GSW dataset.
In this study, we propose a region-based image classification algorithm to extract and enhance the dynamic river masks obtained from the GSW dataset by incorporating temporal and spatial constraints in the image stack. We apply the proposed algorithm over the whole globe and validate the obtained dynamic river reach masks against in situ discharge and water level time series.
Evapotranspiration (ET), the combination of abiotic evaporation and biotic transpiration (T), is a key process for catchment hydrology, irrigation agronomy, and for monitoring ecosystem functional properties. However, ET is a complex phenomenon that results from the interaction of numerous variables related to the soil surface, atmosphere and vegetation, making it highly variable and difficult to estimate. Remote sensing approaches, especially thermal-based surface energy balance (SEB) models, are increasingly important to monitor ET at different spatial and temporal scales. However, SEB models face greater uncertainty over complex landscapes with multiple and mixed vegetation types, such as savannas or tree-grass ecosystems (TGEs). Their dual vegetation strata, a grass-dominated understory and a tree-dominated overstory, challenge models compared to more homogeneous and energy-limited landscapes. Along with this, the contribution of grasses and trees to total transpiration (T) is still largely unknown and unquantified in TGEs. Therefore, a three-source energy balance (3SEB) model, accommodating an additional vegetation source within the well-known two-source energy balance (TSEB) model, was developed and tested over various FLUXNET eddy-covariance (EC) sites. Additionally, as a proof-of-concept, 3SEB was implemented at the continental scale using Meteosat Second Generation (MSG) satellite imagery. 3SEB robustly simulated latent heat (LE) in all sites (site-level: LE RMSD ~ 60 W/m2; MSG-level: LE RMSD ~ 90 W/m2), improving over both TSEB and a seasonally changing TSEB (TSEB-2S) model. In addition, 3SEB inherently partitions water fluxes between the tree, grass and soil sources, and modelled T correlated highly with data-driven EC T estimates (r > 0.76). The T/ET ratio was found to be positively related to both rainfall and leaf area index, especially compared to the decomposed grass understory T/ET. Tree and grass transpiration had contrasting relations with monthly rainfall, demonstrating the importance of decomposing total ET into the different vegetation sources, as they have distinct climatic drivers and, hence, different relations to seasonal water availability. These promising results improve ET estimations over complex TGEs, which may contribute to enhancing drought monitoring and the understanding of their responses to climate change feedbacks.
Soil moisture plays an important role in the water and energy budget on all spatial and temporal scales. Agricultural yield, runoff generation and other hydrological processes depend on this essential climate variable. Measurement methods for soil moisture exist on a variety of spatial and temporal scales, from continuous point-scale measurements to satellite observations with distinctive overpass times. These measurement methods vary strongly in their capability to capture the specific dynamics of soil moisture content in response to impulses from precipitation or snowmelt events in between periods of slower moisture recession. Spatially distributed soil moisture data at high temporal resolution are sought after by the hydrological scientific community.
For most parts of the globe, precipitation is the main driver of rapid changes in soil moisture. On the global scale, the temporal resolution of satellite soil moisture measurements oftentimes does not suffice to capture such events. Measurements of rainfall, however, are available at sub-hourly intervals and at a spatial resolution of several kilometers from satellite-based measurement missions. Thus, the high-resolution records of precipitation contain useful information for a timely coverage of soil moisture increase.
This study presents a global hourly soil moisture product that exploits an advanced antecedent precipitation index (API) and hence utilizes the direct connection of precipitation to soil moisture development. NASA's renowned Global Precipitation Measurement (GPM) mission's Integrated Multi-satellitE Retrievals for GPM (IMERG) final product provides the moisture input to the algorithm. Besides precipitation data, soil texture information from the globally available SoilGrids project and ERA5 reanalysis temperature data from ECMWF are used to control the spatially diverse soil moisture dynamics. In the API calculation, the current soil moisture value for a pixel is based on the preceding time step's moisture value, which rises according to the precipitation amount and saturation state, and decreases depending on locally given soil conditions and the prevailing temperature. The algorithm utilizes empirical factors to further regulate the impact of the named variables. These factors have been optimized on point measurement data from the International Soil Moisture Network (ISMN) for several climate zones and respective soil conditions. The relationship between local sand content and the empirical factors at point scale is used to create a global representation thereof using the spatially distributed sand content from the SoilGrids project.
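A hedged illustration of such an API-style recursion is sketched below. The loss coefficient, the scaling factor and the saturation value are placeholder parameters; in the actual product the loss depends on temperature and soil texture, and the empirical factors are optimized against ISMN data.

    import numpy as np

    def api_step(sm_prev, precip_mm, k_loss, sm_max, beta=1.0):
        """One hourly API-style step: the previous soil moisture decays with a
        loss coefficient and rises with precipitation scaled by the remaining
        storage capacity. beta lumps the unit conversion and empirical impact
        factors mentioned in the text. sm values in Vol%, precipitation in mm/h."""
        gain = beta * precip_mm * (1.0 - sm_prev / sm_max)   # less uptake when nearly saturated
        return min(sm_prev * k_loss + gain, sm_max)

    # Toy run: a dry-down interrupted by a 6-hour rain event.
    sm, series = 20.0, []
    rain = [0.0] * 24 + [3, 5, 8, 6, 2, 1] + [0.0] * 48
    for p in rain:
        sm = api_step(sm, p, k_loss=0.995, sm_max=45.0)
        series.append(sm)
    print(f"min {min(series):.1f} Vol%, max {max(series):.1f} Vol%")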
The GPM API global hourly soil moisture data set is calculated for the period 2015-2020, at a spatial resolution of 0.1 degree latitude/longitude, the same as the GPM IMERG product. The study compares the API data set with the ESA CCI Soil Moisture (SM) product (v6.1, break adjusted). The spatially distributed API agrees more closely with ISMN station data than the satellite soil moisture product does (mean (ub)RMSD: (4.68) 5.80 Vol%, mean bias: 3.65 Vol%, mean Pearson’s R: 0.78). ESA CCI SM shows a positive bias against the measurement data at many stations (mean bias: 5.83 Vol%) and higher error values (mean (ub)RMSD: (5.03) 8.94 Vol%). Furthermore, different soil conditions are visually traceable in the API time series; local soil properties lead to matching moisture dynamics in the calculated soil moisture product. Summer dry-downs are well reproduced, as are soil moisture upsurges after precipitation events.
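The validation metrics quoted above (bias, RMSD, unbiased RMSD, Pearson's R) can be computed generically as in the sketch below; this is a standard formulation, not the study's own validation code, and the sample values are placeholders.

```python
import numpy as np

def skill_scores(sat, insitu):
    """Standard agreement metrics between a satellite/model series and in situ data."""
    sat, insitu = np.asarray(sat, float), np.asarray(insitu, float)
    diff = sat - insitu
    bias = diff.mean()
    rmsd = np.sqrt((diff ** 2).mean())
    ubrmsd = np.sqrt(rmsd ** 2 - bias ** 2)     # RMSD with the mean bias removed
    r = np.corrcoef(sat, insitu)[0, 1]
    return {"bias": bias, "RMSD": rmsd, "ubRMSD": ubrmsd, "R": r}

# Placeholder volumetric soil moisture series [m3/m3]
print(skill_scores([0.25, 0.30, 0.22, 0.35], [0.20, 0.28, 0.21, 0.30]))
```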
This approach extends previous work on precipitation-related soil moisture indices, but delivers a global, spatially distributed, hourly data set.
Soil moisture (SM) is a critical part of the terrestrial water cycle, drives land–atmosphere interactions, and can represent hydro-climatic extremes such as floods and droughts. Numerous SM products from remote sensing and modeling were developed within the last decades to investigate SM dynamics on a large scale. However, the variety of retrieval algorithms, resolutions and coverages across the horizontal, vertical, and temporal domains makes a fair intercomparison challenging. The focus of this study is the intercomparison of the temporal SM dynamics of 15 selected SM products over 25 field sites in Germany, using SM estimations from ground-based sensors of the Cosmic-Ray Soil Moisture Observation System (COSMOS) as a reference. A temporal coverage of 2015–2020 was selected, covering the European drought of 2018/19. COSMOS instruments cover a larger horizontal and deeper vertical representation at a single location than other ground-based SM sensors. Thus, SM estimations from COSMOS intrinsically average out the spatial heterogeneity of the surrounding environmental properties and cover the dynamics of both surface SM (SSM) and root-zone SM (RZSM). This makes them a valuable ground reference for the validation of coarse-resolution SM products from remote sensing and modeling on the horizontal domain. On the vertical domain, the deeper vertical representation of COSMOS estimations is a challenge for the validation of SM estimations from remote sensing, which capture SSM dynamics only. The newly released extensive COSMOS Europe data set contains hourly time series of in-situ SM at many locations. It allowed a comprehensive intercomparison and validation of the selected SM products over locations of different land cover types in Germany. We have selected SSM products from single remote sensors (AMSR2 L3, ASCAT L3 (H115/H116), Sentinel-1 L2, SMAP L3E, and SMOS L3), from dual sensors (Sentinel-1/ASCAT L3 and SMAP/Sentinel-1 L2), and from multiple sensors (ESA CCI and NOAA SMOPS). These pure SSM products have furthermore been vertically extrapolated using an exponential filter to additionally investigate their potential to resolve RZSM dynamics. In addition, we have selected products that already comprise both SSM and RZSM. These were obtained either through the assimilation of remote sensing SSM estimations into models (ASCAT L3 (H141/H142), SMAP L4, GLDAS-2 L4, and GLEAM), through exponential filtering of remote sensing SSM estimations (SMOS L4), or through reanalysis (ERA5-Land). The latest available version of each product was used. We found that all selected products show a similar seasonal variability, but represent the sub-seasonal variability differently. For this we have analyzed bias and uncertainty estimations as static (over the complete time series) and dynamic (using a moving window) measures, respectively. The match of the SM dynamics of the selected SSM products with the SM dynamics obtained from COSMOS increases after applying an exponential filter. The same is true for the comparison of SM dynamics from COSMOS with those within lower layers of RZSM products. Nevertheless, the RZSM dynamics cannot be completely resolved by the selected products, neither by exponential filtering of given SSM data nor by published RZSM data. This can especially be seen during the European drought of 2018/19. Our findings contribute to providing a systematic evaluation of state-of-the-art large-scale SM products and insights on how to improve SM estimation.
Future work is needed to extend our study to a European scale, covering a wider range of environmental properties at the ground reference field sites.
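The vertical extrapolation of SSM mentioned in the study above is commonly done with the recursive exponential filter (the Wagner/Albergel soil water index formulation); a minimal sketch is given below. The characteristic time length T and the input series are assumptions for illustration, not the study's settings.

```python
import numpy as np

def exp_filter(ssm, times, T=20.0):
    """Recursive exponential filter propagating surface soil moisture (SSM)
    towards a root-zone proxy (soil water index, SWI).

    ssm   : SSM observations
    times : observation times [days]
    T     : characteristic time length [days] (tunable assumption)
    """
    ssm = np.asarray(ssm, float)
    times = np.asarray(times, float)
    swi = np.empty_like(ssm)
    swi[0], k = ssm[0], 1.0
    for n in range(1, len(ssm)):
        dt = times[n] - times[n - 1]
        k = k / (k + np.exp(-dt / T))          # recursive gain update
        swi[n] = swi[n - 1] + k * (ssm[n] - swi[n - 1])
    return swi

# Example with an irregular 3-day revisit
print(exp_filter([0.30, 0.10, 0.25, 0.20], times=[0, 3, 6, 9], T=20.0))
```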
Evapotranspiration (ET) is the key variable linking the water, energy and carbon cycles of our planet. In the water cycle, it is the second-largest flux after precipitation. ET is a complex phenomenon that depends not only on the atmospheric and vegetation conditions (demand) but also on the availability of water in soil, water bodies and canopy (supply). The complexity of measuring ET directly makes it difficult and expensive to maintain a dense network of in situ gauging stations, which would be required to routinely measure and capture the spatial variation of ET. So far, the most successful network of ground ET measurements is the Eddy Covariance flux tower network (FluxNet), whose towers are mostly located in North America, Europe, and North Asia. Even in these regions, Earth Observation (EO) is valuable for monitoring ET over large and heterogeneous landscapes, since the spatial support of in situ instruments is small. In regions with scarcer in situ measurements, EO data are even more valuable. Advances in EO provide great opportunities to monitor, at a large scale and with increasing spatial and temporal resolution, variables that control ET, such as vegetation indices (NDVI, EVI, LAI), land surface temperature, and relative soil moisture content. As ET is not directly sensed from space, these controlling variables are the key inputs of models that estimate ET. Many models have been developed to estimate ET from optical and/or thermal remote sensing (RS) data, to name a few: SEBS, TSEB, SEBAL, METRIC, and ALEXI. Many of these models also require ancillary data (such as soil characteristics, land cover, etc.) and are sometimes combined with in situ data. Different algorithms, parameterizations, input EO data sources and processing levels generate a wide range of ET values. Moreover, the methods used to aggregate instantaneous RS observations also cause uncertainty in daily or longer-period estimates, which are needed for water management practices. While there have been many studies evaluating the different ET models and products, no single model has been found to perform best in all situations. Therefore, expertise in processing input RS data and in calibrating and evaluating model outputs remains crucial to improve estimation accuracy. Driven by community needs, some projects have provided platforms to increase access to various remotely sensed ET (RS-ET) data products, such as FAO’s WaPOR and OpenET. With these widely accessible RS-ET data, it is important to assess their uncertainties and provide this information to users. This study provides an overview of the sources of uncertainty in RS-ET and the current advances in their assessment. It is based on a systematic quantitative literature review of 768 original studies that assessed the uncertainty or accuracy of RS-ET models or data products. The review examined (i) ET estimation methods, (ii) sources of uncertainty, and (iii) techniques for uncertainty assessment. The research articles were published in 149 journals from diverse disciplines including hydrology, ecology, meteorology, remote sensing, and agronomy. RS-ET assessment has been geographically concentrated in North Asia, North America, and Europe. Most studies used the validation method, which quantifies the discrepancy between a pixel-based ET estimate and an in-situ estimate. Most validation studies employed Eddy Covariance (EC) flux towers for the reference estimate.
In regions where in-situ measurements are limited, many studies use the residual of the water balance as a reference. However, few studies considered uncertainty in the reference estimate or the mismatch of spatial and temporal scales. For crop water consumption monitoring, most RS-ET methods have been reported to achieve high accuracy. However, when upscaling to larger regions that include non-crop areas, additional assessments are required to better inform data users of the quality of RS-ET estimates, including cross-validation, sensitivity, and uncertainty analyses.
Drought, one of the most hazardous climate extremes with severe implications for food and water security, has increased in intensity and duration in many regions over the last decades. The propagation of drought-related precipitation deficits through the water cycle is insufficiently understood at the global scale, as the respective observation-based data have largely been lacking. Benefiting from the growing suite of satellite-based Earth observations, in-situ measurements and machine learning techniques, global gridded observation-based datasets of soil moisture, evapotranspiration and runoff have become available. Here we analyze and compare the responses of green-water evapotranspiration and blue-water runoff to soil moisture droughts occurring between 2001 and 2015. We find that runoff and evapotranspiration respond differently to drought. Runoff is reduced over widespread areas, with the strongest deficits in energy-controlled regions. Gradual reductions are found before and after the drought peak, revealing a strong coupling with the soil moisture evolution. By contrast, evapotranspiration is only reduced during soil moisture drought in water-controlled regions, and it increases in energy-controlled regions following the drought-related temperature and radiation surpluses. Evapotranspiration anomalies are mostly found before and during the soil moisture minimum rather than after the drought peak, as vegetation can directly benefit from precipitation after the peak. Furthermore, vegetation types modulate the drought propagation; evapotranspiration is more strongly elevated (or less strongly reduced) in tree-dominated regions compared with grass-dominated areas, which in turn induces stronger runoff deficits. In addition, land surface models simulate a similar decrease in runoff under drought, while they do not capture the evapotranspiration drought response. This is likely due to a misrepresentation of the soil moisture-vegetation interplay. Our study provides a better understanding of drought impacts on the global water cycle across climate regions and vegetation types, which can help to mitigate drought impacts on blue and green water in the future.
Water Level Monitoring Over Continental Areas From Fully Focused SAR Altimeter Processing
M. Vayre1, S. Amraoui1, T. Moreau1, N. Taburet1, F. Borde2, F. Boy3, S. Le Gac3, N. Picot3
1. Collecte Localisation Satellites, France
2. European Space Agency, The Netherlands
3. Centre National d’Etudes Spatiales, France
Contact: mvayre@groupcls.com
Session: A7.02 Hydrology and Water Cycle - EO advances in water and energy cycles
Presentation type: Oral
Access to fresh water is becoming increasingly difficult for many local populations. This natural and irreducible need of living beings carries economic and geopolitical stakes. Knowledge of inland water resources represents a major challenge for anticipating flood hazards, assessing sailing conditions on inland waterways and estimating freshwater stocks or river discharge. At the same time, publicly available in situ data are decreasing and heavy modifications of the water cycle are expected due to ongoing global warming and intensive deforestation. It is against this background that the use of altimetry data to monitor surface water levels over continental areas has been considered by altimetry experts since the beginning of spaceborne altimetry in the 1990s.
Inland water observation has never been the main science objective of altimetry missions. However, consistent water surface height measurements have been collected from these satellites, especially using SAR-mode altimeters. SAR-mode altimetry currently operates onboard operational missions such as CryoSat-2, Sentinel-3A/B and, more recently, Sentinel-6. It has brought significant improvements compared with conventional altimetry, including over inland water areas. Feedback from the scientific community favours a more systematic use of SAR-mode altimetry in the future (e.g. the upcoming Sentinel-3C/D of the Copernicus programme).
In current operational ground segments, SAR-mode processing is based on the so-called unfocused SAR (UF-SAR) processing. It performs the coherent summation of a limited number of successive pulses (64-pulse bursts of a few milliseconds in length). The coherent summation of pulses has recently been extended [Egido and Smith, 2017] to the whole illumination time of the surface (from a few milliseconds to more than 2 seconds). This increases the along-track resolution from 300 m (UF-SAR) to the theoretical limit of approximately 0.5 m (FF-SAR). By improving the effective number of looks, it also enhances the capability to obtain consistent measurements over small reflective surfaces. As part of ESA and CNES projects we have developed the open-source SMAP software (FFSAR Standalone Multi-mission Altimetry Processor).
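To illustrate why extending the coherent integration helps, the toy Monte-Carlo sketch below compares coherent summation of N phase-aligned echoes with incoherent (power) averaging: coherent summation suppresses the additive-noise contribution to the mean power by roughly a factor N, whereas incoherent averaging only reduces the variance of the power estimate. The pulse model, noise level and pulse count are arbitrary assumptions; this is not the SMAP FF-SAR processor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pulses, n_trials, snr_in = 64, 20000, 1.0   # single-pulse power SNR = 1 (assumption)

# Phase-aligned unit-amplitude echoes plus complex white noise
signal = np.ones((n_trials, n_pulses), dtype=complex)
noise = (rng.standard_normal((n_trials, n_pulses))
         + 1j * rng.standard_normal((n_trials, n_pulses))) / np.sqrt(2 * snr_in)

# Coherent (focused) summation: add complex pulses, then take power
coh_power = np.abs((signal + noise).sum(axis=1)) ** 2 / n_pulses ** 2
# Incoherent (multilook) averaging: take power per pulse, then average
inc_power = np.mean(np.abs(signal + noise) ** 2, axis=1)

print("coherent   mean power:", coh_power.mean())   # ~1 + 1/(N*snr_in)
print("incoherent mean power:", inc_power.mean())   # ~1 + 1/snr_in
print("incoherent power std :", inc_power.std())    # speckle reduced only as ~1/sqrt(N)
```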
The purpose of this talk is to present FF-SAR processing performed over hundreds of water bodies (French rivers, the Amazon, the Congo, the Niger …) over a 2-year period (April 2019 – April 2021) from Sentinel-3A and Sentinel-3B measurements acquired in Open-Loop mode. The benefits from a user perspective will be addressed in terms of measurement precision, time series stability and the ability to track small water bodies (sometimes less than 100 meters in width). The advantage of the improved along-track resolution for selecting consistent measurements will be shown. Metrics will be presented to understand the main differences with respect to UF-SAR. Comparisons with in situ data and with the ICESat-2 laser altimeter product will also be provided. The instrument’s impulse response “replica” issue and its implications for FF-SAR processing will also be discussed with regard to the relative performance of Sentinel-3 and Sentinel-6.
AI has been used for 30 years to process Earth Observations (EOs) and estimate geophysical variables. NNs are often used in this context as black boxes, in the sense that what is done inside the NN is not monitored. A new paradigm is proposed here in which the NN statistical inference ability is combined with physical expertise on the problem to design a hand-tailored deep architecture and learning scheme. This hybrid approach benefits from the efficient optimisation tool of NNs (the back-propagation algorithm) to estimate the parameters of a complex architecture of processing layers representing interconnected physical modules. This deep architecture includes calibration procedures, mixture models for data fusion, a stabiliser to ensure the physical nature of variables, a post-filtering closure module, and a physical closure constraint. The NN scheme is trained using basin-scale data to find the best integration compromise, applicable globally. Compared to traditional “optimal interpolation”, the AI integration can be done at the pixel level; calibration and mixing models are obtained simultaneously with the water budget closure; and environmental variables can be exploited by the NN to extrapolate the model to unmonitored regions. The NN integration scheme allows satellite estimates of the individual water components to be combined in a hydrologically coherent way, creating a “closed” water budget at the global scale. The resulting dataset is useful for analysing the limits of each original EO, evaluating existing global hydrological models, analysing hydrological changes in several hot spots around the world, and understanding the effects of climate change or land-use change.
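As a point of reference for the post-filtering closure module mentioned above, the sketch below shows the classical, non-NN way of enforcing the budget constraint: the residual P - E - Q - dS is redistributed over the components in proportion to their assumed error variances (a generic constrained least-squares adjustment, not the authors' NN-based closure; all numbers are placeholders).

```python
import numpy as np

def close_budget(p, e, q, ds, sigma):
    """Distribute the water-budget residual over [P, E, Q, dS] so that
    P - E - Q - dS = 0, weighting by assumed error variances."""
    y = np.array([p, e, q, ds], float)
    g = np.array([1.0, -1.0, -1.0, -1.0])        # closure constraint: g @ y = 0
    var = np.asarray(sigma, float) ** 2
    residual = g @ y
    corrected = y - var * g * residual / var.sum()
    return corrected, g @ corrected              # second value should be ~0

# Monthly basin-mean fluxes [mm/month] and assumed 1-sigma uncertainties
x, check = close_budget(p=100.0, e=60.0, q=25.0, ds=5.0,
                        sigma=[10.0, 8.0, 5.0, 6.0])
print(x, check)
```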
Inland surface waters expand with floods and contract with droughts, yet there is no single map of our streams. Several studies of precipitation levels have shown that there is an unaccounted-for volume of flowing surface water that has yet to be observed. This is expected, as current satellite approaches are limited to bimonthly observations that map only the widest streams (up to 90 m) [1]. Smaller tributaries that make up almost 50% [2] of the dendritic surface network remain unobserved. A map of those streams over time could give us early warnings of droughts and could provide a better understanding of the impermanence of our waters, showing where to expect water and where not to.
To that end, we feed the latest high-resolution sensor data to multiple deep learning models in order to map these flowing networks every day, stacking the time series maps over several years. In particular, we extended our previous work [3] and developed a multi-sensor neural net that fuses a multi-day window of 3 m PlanetScope imagery with 1 m LiDAR derivative products to produce higher resolution water probability maps (50 cm). For this, we trained the network on dry water polygons extracted from very high resolution (VHR) WorldView-3 images. We compared the multi-sensor network with native VHR networks and show that the former is able to reliably detect streams up to 5-7 m wide; additionally, we conducted an ablation study to show the contribution of each LiDAR derivative to the model performance.
We ran this multi-sensor model on a 24 km² area over a 2-year daily PlanetScope time series to produce per-pixel water probability maps, and we aggregated these maps over an elevation-derived synthetic valley network to produce a snapshot of flow at the stream level. The end result is a daily product of water probability per stream that could be used to derive flow frequency or to produce early warnings of droughts. Applying this process at a national scale could fundamentally improve how we manage our water resources around the world.
[1] Jean-François Pekel, Andrew Cottam, Noel Gorelick, and Alan S. Belward. High-resolution mapping of global surface water and its long-term changes. Nature, 540(7633):418–422, December 2016
[2] George H Allen and Tamlin M Pavelsky. Global extent of rivers and streams. Science, 361(6402):585–588, 2018
[3] Dolores Garcia, Gonzalo Mateo-Garcia, Hannes Bernhardt, Ron Hagensieker, Ignacio G. Lopez-Francos, Jonathan Stock, Guy Schumann, Kevin Dobbs and Alfredo Kalaitzis. Pix2Streams: Dynamic Hydrology Maps from Satellite-LiDAR Fusion. AI for Earth Sciences Workshop, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada
PCR-GLOBWB (Sutanudjaja et al., 2018, https://doi.org/10.5194/gmd-11-2429-2018) is a global hydrology and water resources model that has been developed over the past two decades at Utrecht University. The most recent version of the model has a spatial resolution of 5 arc minutes (approx. 10 km at the equator) and runs at a daily temporal resolution for simulations spanning several decades. PCR-GLOBWB includes a two-layered global groundwater model based on MODFLOW, which is used, for example, for assessing the effects of climate change and excessive groundwater pumping on groundwater resources.
Driven by the need for hyper-resolution global hydrological modelling (Bierkens et al., 2015, http://dx.doi.org/10.1002/hyp.10391), we refined the transient 5-arcmin global groundwater model (de Graaf et al., 2017, https://doi.org/10.1016/j.advwatres.2017.01.011) to 30 arcsec resolution (approx. 1 km). Since this results in a model that is roughly 100 times larger, it presents new challenges for tackling the increased computational and memory requirements as well as for managing the increased data volumes. In our approach, we use a distributed-memory parallel version of the computer code MODFLOW 6 that is being developed together with the USGS. Applying unstructured grids to discard many sea cells, and land cells for unconfined layers, results in a total of 278 million active cells to be solved. Decoupling and sorting the landmass cells to meet the near-sea Dirichlet boundary condition results in three continental-scale MODFLOW 6 models, i.e., Afro-Eurasia (167 million cells), America (77 million cells) and Australia (16 million cells), and one MODFLOW 6 model containing the islands (17 million cells). Parallel pre-processing is applied using 15-degree tiles to generate submodels, requiring processing of ~15 TB of monthly input data for 1958-2015. Straightforward METIS partitioning (http://glaros.dtc.umn.edu/gkhome/metis/metis/overview) is considered, as well as lumped METIS partitioning using catchments. The idea behind the latter partitioning is to give partition boundaries more physical meaning to users and to pre-sort for future coupling with surface water modules. As an illustration, we use rasterized HydroBASINS catchments (https://hydrosheds.org/page/hydrobasins) for different Pfafstetter levels and compare simulation run-times for the lumped METIS partitioning.
All our experiments are conducted on the new Snellius Dutch national supercomputer (https://servicedesk.surfsara.nl/wiki/display/WIKI/Snellius), a heterogeneous cluster with more than 500 nodes. We use the thin compute nodes, each consisting of two 64-core AMD Rome 7H12 CPUs and 256 GB RAM. Instead of using as many nodes as possible on Snellius, we choose a practical number of nodes from the user perspective. We target performing a single transient global-scale simulation for 1958-2015, including a spin-up of 20 years, in less than 16 hours, corresponding to standard non-business hours. This requires a computational speed of ~120 simulated years per day (SYPD), and a necessary speedup of roughly 60-80 compared to a serial run. We show that such a speedup can be achieved with our approach using 10-15 nodes, each running with 32 cores.
In this study we focus mostly on the technical realization of the 30 arcsec model, and pay limited attention to its parameterization, which is a significant challenge in itself. However, validation with head measurements from the USGS NWIS database shows significant improvement for the steady-state model when compared to the 5 arcmin version, with still limited improvement for the transient model simulations. Since our implementation requires a relatively small number of cores, we believe that further parameter improvement and future global-scale scenario analyses are within reach.
River discharge integrates all water-related processes over land and is crucial for hydrology. Unfortunately, in situ measurements are very sparse at the global scale. This paper presents an entirely new approach for the continuous mapping of river discharge based on indirect satellite observation and the water budget balance. The proposed method first corrects continuous satellite estimates of three water components (precipitation, evapotranspiration, and total water storage change) at the basin scale using river discharge from a few gauge measurements. Secondly, it balances the water budget at the pixel level using flow direction for horizontal water exchange. This new approach is therefore based solely on satellite products and in situ measurements, without the use of any model. The methodology is evaluated in interpolation mode between successive gauges (median KGE is 0.8) and in extrapolation mode over small confluent rivers (mean KGE is 0.6), to be compared to 0.6 and 0.4 for a surface model such as CaMa-Flood. In addition to offering a new source of information for a strategic hydrological variable, the accuracy of the obtained river discharge estimate opens new doors for improving hydrological models and facilitates the assimilation of satellite observations such as GRACE data. The spatially continuous river discharge shows a good agreement with altimetric water surface elevation (median correlation of anomalies is 0.6) and with satellite estimates of surface water extent (correlation of anomalies is up to 0.7 over rivers with floodplains). A comparison with the hydrological model CaMa-Flood shows the benefit of such a continuous product to detect model biases at high or low flow, as well as early flow-peak modeling. Furthermore, spatially continuous river discharge offers a unique opportunity to investigate spatial patterns of extreme events. It also allows for the estimation of important river parameters (e.g. river bed elevation) continuously along the river, which could largely benefit hydrological modeling.
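The Kling-Gupta Efficiency (KGE) scores quoted above can be computed as in the short sketch below (the standard 2009 formulation, not the authors' evaluation code; the example series are placeholders).

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]       # correlation term
    alpha = sim.std() / obs.std()         # variability ratio
    beta = sim.mean() / obs.mean()        # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Placeholder discharge series [m3/s]
print(kge(sim=[110, 95, 80, 120, 60], obs=[100, 90, 85, 130, 55]))
```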
References:
• Pellet, V., F. Aires, D. Yamazaki, Satellite monitoring of the water cycle over the Amazon using upstream/downstream dependency. Part 2: Mass-conserved reconstruction of total water storage change and river discharge, submitted to WRR, 2020.
• Pellet, V., F. Aires, D. Yamazaki, F. Papa, Satellite monitoring of the water cycle over the Amazon using upstream/downstream dependency. Part 1: Methodology and initial evaluation., submitted to WRR, 2020.
• Pellet, V., F. Aires, S. Munier, F. Papa, Long-term estimate of the water storage change in the large Himalayan river basins from water budget closure, HESS, DOI: 10.5194/hess-24-3033-2020, 2020.
• Pellet, V., F. Aires, S. Munier, Optimisation of satellite observations to study the water cycle over the Mediterranean region, HESS, DOI: 10.5194/hess-2018-319, 2019.
• Pellet, V., and F. Aires, Analyzing the Mediterranean water cycle via satellite data integration, Pure Appl. Geophys, https://doi.org/10.1007/s00024-018-1912-z, pp. 1-29, 2018.
• Munier, S., F. Aires, A new global method of satellite dataset merging and quality characterization constrained by the terrestrial water cycle budget, RSE, 2017.
• Munier, S., F. Aires, S. Schlaffer, C. Prigent, F. Papa, P. Maisongrande, and M. Pan, Combining datasets of satellite retrieved products. Part II: Evaluation on the Mississippi Basin and closure correction model, J. Geophys. Res., 10/2014, DOI: 10.1002/2014JD021953, 2015.
• Aires, F. Combining datasets of satellite retrieved products. Part I: Methodology and water budget closure, J. of Hydrometeor., 10.1175/JHM-D-13-0148.1, 2014.
There is an ever-increasing number of satellite platforms collecting information for soil moisture estimation. These platforms have varying temporal, spatial, and vertical resolutions. The resulting observational and modeled products require in situ resources for ground-based calibration and validation. But these resources are not fully deployed across a variety of landscapes, and there is a lack of high-resolution networks to validate the higher resolution products. Current modeled products are being produced at resolutions as fine as 30 m on a daily time scale. Other satellite-only products are approaching 1 km in scale. These scales are approaching management scales for forestry and agriculture, but few study sites are available for providing ground truth for these products. A review of current resources will be presented, along with lessons learned from prior cal/val programs, including the Advanced Microwave Scanning Radiometer (AMSR-E), the Soil Moisture Ocean Salinity (SMOS) mission, and the Soil Moisture Active Passive (SMAP) mission. Mission accuracy metrics for these products ranged from 4-6% depending on the product type. Initial efforts for calibration and validation included footprint-scale soil moisture networks designed to capture a single satellite footprint-scale estimate, maintained for several years to provide a continuous point of comparison. This strategy was complemented by intensive field campaigns over more highly monitored areas, with intensive observation periods related to aircraft overflights. These evolved into multi-location and multi-period campaigns to maximize the diversity of datasets and resources. In addition, new network resources have been developed, including long-term monitoring sites, some of which are optimized for calibration and validation. There are still challenges in monitoring, including how to properly account for agricultural domains with active land management. Future missions that will be informed by these experiences are the NASA-ISRO Synthetic Aperture Radar mission (NISAR) and ESA's Copernicus Imaging Microwave Radiometer (CIMR).
The current generation of land surface models has a blind spot when it comes to management impacts on agricultural lands. Farmers control many of the physical conditions that affect crop growth at scales beyond the resolving power of individual space-borne imagers. Think of field-scale irrigation or drainage compared to ranges of 1 to 15 km for microwave-based soil moisture estimates. Nutrients, crop type, and drought resistance are some other key aspects that affect the actual role of cropland in regional hydrological outcomes. Bottom-up modelling of hydrological processes therefore needs a high-resolution complement of satellite observations to provide top-down verification of model states. Fortunately, the availability of sufficiently high-resolution imagery in the visible to infrared spectral regions to capture cropland functional responses is slated to improve dramatically in the coming years, thanks to the continued Landsat and Sentinel programs and new missions like NASA’s Surface Biology and Geology (SBG) and ESA’s Copernicus Land Surface Temperature Monitoring (LSTM).
This presentation will focus on research to support the adoption of thermal infrared imagery for improved monitoring of water use and drought resilience, and its integration within hydrological model frameworks. We will present a novel analysis of the spatial resolving power of current thermal imagers and its implications for evaporation retrieval. Bridges were found to provide sufficient thermal contrast with the water surface to quantify the 1D line-spread function of thermal imaging systems. The full width at half maximum of a Gaussian beam model fitted to this transect quantifies the on-orbit spatial resolution of different imagers. This method is used to measure the spatial resolution of the Landsat 7, 8, and 9 thermal bands as well as that of the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS). The scanning ECOSTRESS imager is used to verify that the modelled pixel size increases with increasing scan angle.
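The Gaussian-beam fit described above can be sketched as follows: a 1-D Gaussian line-spread model is fitted to a transect across a bridge and the full width at half maximum (FWHM = 2*sqrt(2*ln 2)*sigma) is taken as the on-orbit resolution estimate. The synthetic transect and its parameters are assumptions for illustration, not Landsat or ECOSTRESS data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_beam(x, amp, x0, sigma, offset):
    """1-D Gaussian line-spread model fitted across a bridge transect."""
    return offset + amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

# Synthetic brightness-temperature transect across a bridge over water [K]
x = np.arange(-300.0, 301.0, 30.0)            # along-transect distance [m]
truth = gaussian_beam(x, amp=8.0, x0=0.0, sigma=45.0, offset=290.0)
obs = truth + np.random.default_rng(1).normal(0.0, 0.2, x.size)

popt, _ = curve_fit(gaussian_beam, x, obs, p0=[5.0, 0.0, 60.0, 289.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM of the fitted beam
print(f"estimated on-orbit resolution (FWHM): {fwhm:.1f} m")
```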
The goal of this research is to facilitate an improved fusion of current and future satellite observations into harmonized products with superior temporal and spatial characteristics. We will outline a path towards analysis-ready land surface temperature products, facilitating their use to improve the hydrological states in numerical models. This improved pipeline of temperature data will allow end-user-facing programs like OpenET to scale up their delivery of thermal-based evaporation products to water managers beyond the Western US.
High-resolution evapotranspiration (ET) maps are crucial for planning and managing vegetation in cities to mitigate the urban heat island (UHI) and droughts. Although anthropogenic sources contribute to latent heat fluxes, the primary sources of terrestrial ET are plant transpiration and soil evaporation. Climatological conditions are the main drivers of ET, and capturing its dynamics requires high-temporal-resolution data. On the other hand, impervious surfaces mostly constrain the ET released into the atmosphere. Therefore, a model that predicts urban ET accurately requires both high temporal and high spatial resolution. Processing all time-space interactions is challenging and demanding, but impervious areas are mainly static over a year. Based on these assumptions, we propose a two-stage modelling approach to deal with spatiotemporal variability without making the model overly complex. The strategy is to derive hourly predictions focused on soil evaporation and plant transpiration using the SCOPE model (Soil Canopy Observation, Photochemistry and Energy fluxes) in the first stage. In the second stage, we apply a land-cover correction for the model's assumption of homogeneous vegetation, using high-spatial-resolution impervious and vegetation fractions. This approach allows ET predictions at different spatial and temporal resolutions. The SCOPE predictions are based on high-temporal-resolution climatological inputs and moderate-temporal-resolution soil and vegetation properties, such as soil moisture, leaf area index (LAI) and vegetation height, derived from remote sensing products. We focus on modelling with open data, which are available for most medium and large cities in Europe. Compared with hydrology models designed for urban areas, the two-stage approach requires less detailed inputs, increasing model transferability. The two-stage approach was validated using eddy flux towers at two different locations in Berlin, Germany. The prediction accuracy is comparable with the state of the art in urban hydrological models (R² of 0.82 and 0.54). The novelty of this study is to provide a solution that combines the temporal dynamics of ET in a vegetated environment with the spatially fragmented land cover of an urban environment, which is inexpensive and delivers a plausible high-resolution ET map that could be applicable to many cities in the world.
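In its simplest form, the second-stage land-cover correction described above scales the homogeneous-vegetation ET by the vegetated (pervious) fraction of each high-resolution pixel; the sketch below illustrates this idea with placeholder values, and is not the study's calibrated correction.

```python
import numpy as np

# Stage 1: hourly ET from a 1-D soil-vegetation model for a fully vegetated pixel
et_scope_hourly = 0.32                      # [mm/h], placeholder value

# Stage 2: high-resolution vegetation fraction of each urban pixel (illustrative)
veg_fraction = np.array([[0.9, 0.4],
                         [0.1, 0.7]])

et_urban = et_scope_hourly * veg_fraction   # land-cover-corrected ET map [mm/h]
print(et_urban)
```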
The recently released eWaterCycle platform provides an environment where hydrologists can work with each other’s models and datasets in a way that is ‘FAIR by Design’. The FAIR principles of data science dictate that all data, and the software that generated it, should be stored in such a manner that science is Findable, Accessible, Interoperable and Reproducible (FAIR). We achieve this in eWaterCycle by running versioned hydrological models inside software containers. These can then be accessed through a programming-language-agnostic interface (GRPC4BMI) from a Jupyter Python environment. This allows communication with hydrological models independently of the programming language they are written in.
The platform uses ESMValTool as a universal pre-processor for meteorological forcing data and earth observations. ESMValTool tracks provenance and metadata of all processing steps. The supported library consists of both Earth Observations and Earth System models (including CMIP). Integration of ESMValTool in eWaterCycle removes the need for custom pre-processing for each combination of forcing dataset, earth observation dataset, and hydrological model.
The open interface between the eWaterCycle platform and hydrological models, together with the use of ESMValTool as pre-processor, facilitates changing either the forcing or the model with relative ease and thus makes multi-model, multi-forcing comparisons easier than before. This also extends to the evaluation of hydrological model states and fluxes using Earth Observation datasets, among other applications.
We will present the preliminary results of a multi-model, multi-forcing comparison project as a showcase of the research that can be done ‘FAIR by design’ in the eWaterCycle project. In this study we compared simulated streamflow output of a diverse set of hydrological models that are available in the eWaterCycle project. The models are forced using the commonly used meteorological reanalysis datasets ERA-Interim and its successor ERA5. By comparing these two datasets we can analyze how the advances in reanalysis datasets have an effect on the simulated streamflow of hydrological models.
This study has been made possible by a community driven effort to conduct a large comparison study that is reproducible.
Synthetic Aperture Radar (SAR) images have a critical role to play in monitoring surface water at a global scale owing to their all-weather capability, which allows real-time monitoring of water bodies. SAR sensors such as the Sentinel-1, TerraSAR-X, or RADARSAT constellations have been widely used for that purpose, along with optical sensors. With the breakthrough of wide-swath altimetry expected with the SWOT KaRIn instrument, based on the interferometric processing of pairs of SAR images, SAR data can provide two-dimensional water elevation images with a horizontal resolution of about 50 to 100 m, which enables new applications in hydrology.
However, the exploitation of these data is a considerable challenge, especially for narrow structures such as rivers or lakes with irregular shapes. SAR images are affected by strong multiplicative speckle noise, which can make the signal-to-noise ratio very low, especially for SWOT images. In addition, specific structures or artifacts (roads, layover…) can be hard to distinguish from water. For these reasons, extracting water surfaces in SAR images using only the information contained in the image can be very challenging.
In contrast, combining this information with data from other sources can significantly improve detection. To this end, images from other dates or even other sensors can be used, as well as in situ measurements or external databases, such as Allen & Pavelsky’s Global River Widths from Landsat (GRWL), which can be used to guide river detection.
We will present several methods that combine three strategies for improving the detection of water in SAR images:
• The use of prior guiding information in a way that is robust to discrepancies between the database information and the actual image (projection errors, temporal changes...).
• The combination of multi-temporal and multi-sensor data, taking into account the fact that the water extent may change over time.
• The denoising of the SAR images used as a preliminary step, in a way that preserves the small details and does not create artifacts.
While our methods have been primarily developed as risk mitigation options for the water detection step in the SWOT mission, they have also proven effective on Sentinel-1 GRD images, which are widely used for water monitoring, and should be applicable for any SAR sensor.
The first method we present exploits information from the SWORD river database used by SWOT (derived from GRWL) to detect narrow rivers in the image in a way that is robust to noise in the image, potential errors in the database, and temporal changes. This method, presented for the first time in a 2021 IEEE JSTARS article, relies on a new linear structure detector, a least-cost path algorithm, and a Conditional Random Field segmentation method that we developed to combine data attachment and regularization terms for the river segmentation problem.
As the detection of lakes with an irregular shape can be an issue as well, we also propose a new method derived from GrabCut approaches that uses an a priori polygon containing a lake to detect it in a SAR image or a time series of SAR images. This a priori polygon can be taken from a vector database or even from raster data (e.g., the Pekel mask). Within this framework, we also studied the use of a multi-temporal and multi-sensor combination of Sentinel-1 SAR and Sentinel-2 optical images to improve the detection of small structures while preserving the temporal changes in the water extent.
Finally, we present an adaptation of our water detection approaches using a preliminary denoising step performed with a deep learning method, based on the SAR2SAR despeckling algorithm of Dalsasso et al. and trained on Sentinel-1 GRD images. This denoising method, presented at the 2021 IGARSS conference, is very efficient at removing speckle noise while preserving small details, and does not produce the artifacts that are often associated with the spatial correlation of speckle and can impair the detection of water.
In conclusion, we demonstrate, through a few examples of methods that we proposed recently, that the use of external data (multi-temporal, multi-sensor, exogenous databases...) can largely improve the detection of water in SAR images in situations where traditional methods, based only on the content of the image, reach their limits.
Soil moisture is an important variable for research on the water cycle. It controls the exchange of both water and energy between the land surface and the atmosphere (Vereecken et al., 2014). At high spatial and temporal resolutions, soil moisture observations can therefore be useful for applications such as precision agriculture (Vereecken et al., 2014), or the monitoring and prediction of hydro-meteorological disasters (Peng et al., 2021; van Hateren et al., 2021). Although space-borne Synthetic Aperture Radar (SAR) sensors allow for soil moisture estimation at high spatiotemporal resolution, their radiometry is limited by speckle noise (Lee, 1986). This is particularly problematic when soil moisture is estimated at high spatial resolutions, since variations in the backscatter intensity do not necessarily relate to variations in soil moisture. Speckle noise therefore needs to be reduced, either by spatial aggregation or by dedicated speckle filters. Central to these methods is the assumption that the statistics (i.e. median, mean and/or variance) of the surrounding pixels are equal to those of the central pixel of interest (Mansourpour et al., 2006). This assumption does not hold when spatial variability in soil moisture or other soil parameters plays a significant role (Teuling and Troch, 2005). Moreover, due to the non-linearity of the backscatter vs moisture relationship, averaging the backscatter is not equivalent to averaging the moisture after retrieval. Applying speckle noise filters, especially at high resolutions, can therefore impact soil moisture retrieval accuracy.
For that reason, using unfiltered backscatter data in a soil moisture inversion model might be valuable for high resolution soil moisture applications. Ma et al. (2020) briefly compared the use of unfiltered backscatter data to compute soil moisture (calculate-then-average) with the use of backscatter data that was filtered over the area of interest before soil moisture was estimated (average-then-calculate). They found that the calculate-then-average strategy gave significantly better results than the average-then-calculate strategy, at a spatial resolution of about 125,000 m2. Additionally, Satalino et al. (2004) used a synthetic dataset to show that the increased accuracy of the calculate-then-average strategy occurs especially when model errors are expected to be larger than 1.5 dB. These results strengthen our hypothesis that using unfiltered backscatter data (i.e. calculate-then-average) might increase soil moisture retrieval accuracy. It remains to be seen whether this hypothesis holds at high resolutions (i.e. 400 – 10,000 m2) and in a real-world experiment.
In this study, we expand on existing research by demonstrating that the use of high resolution backscatter data can indeed lead to better estimates of soil moisture. A real-world experiment shows that differences between the calculate-then-average and average-then-calculate strategies exist and how they arise. The evaluation was based on an analysis at multiple resolutions, where satellite soil moisture estimates were compared to in situ soil moisture data from a high resolution field campaign set up in south-eastern Luxembourg. The field was sampled at a 20x20 m2 resolution, under different field and vegetation conditions, in 37 field visits spanning parts of 2020 and 2021. Satellite soil moisture was retrieved from Sentinel-1 backscatter data using the MULESME algorithm, set up by Pulvirenti et al. (2018). MULESME is a change-detection algorithm based on the Oh forward model (Oh, 2004) and a multitemporal approach, which assumes that soil moisture content changes considerably faster than surface roughness. Any change in backscatter intensity can therefore be attributed to changes in soil moisture content and/or vegetation water content. SMC maps were retrieved from backscatter intensity data at 6 different resolutions (20x20, 40x40, 60x60, 80x80, 100x100, 120x120 m2) following two different strategies. For the average-then-calculate strategy, backscatter images were pre-processed, then resampled to the 5 lower resolutions, and finally the SMC was computed for each of these resolutions. Alternatively, for the calculate-then-average strategy, the SMC was computed at the highest resolution, after which the data were resampled to each of the 5 lower resolutions.
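The effect of the non-linear backscatter-moisture relationship on the two aggregation strategies can be illustrated with a toy experiment like the one below. The forward/inverse model, the speckle statistics and the pixel values are arbitrary assumptions, not the Oh model or the MULESME retrieval; the point is only that averaging before or after the non-linear inversion yields different cell-mean moisture estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy monotonic, non-linear backscatter model (linear in dB => exponential in power);
# coefficients are illustrative only.
a, b = np.log(0.01), 4.0
forward  = lambda mc: np.exp(a + b * mc)        # soil moisture -> linear power
retrieve = lambda s0: (np.log(s0) - a) / b      # linear power -> soil moisture

# Heterogeneous 20 m pixels inside one 120 m cell, plus multiplicative speckle
mc_true = rng.uniform(0.10, 0.40, size=36)
speckle = rng.gamma(shape=4.0, scale=0.25, size=36)   # ~4-look intensity speckle, mean 1
s0_obs = forward(mc_true) * speckle

avg_then_calc = retrieve(s0_obs.mean())          # filter/average backscatter first
calc_then_avg = retrieve(s0_obs).mean()          # retrieve per pixel, then average

print(f"true mean mc      : {mc_true.mean():.3f}")
print(f"average-then-calc : {avg_then_calc:.3f}")
print(f"calculate-then-avg: {calc_then_avg:.3f}")
```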
Differences between satellite and in situ data were quantified using skill scores such as the Pearson correlation and the RMSE. These were computed for the entire period over which field data were gathered. Correlations for the calculate-then-average strategy were found to be 0.36, 0.51, 0.54, 0.56, 0.57 and 0.48 for the 20, 40, 60, 80, 100, and 120 meter resolution images, respectively. Similarly, the ubRMSEs were 0.13, 0.11, 0.11, 0.10, 0.10, and 0.11. The two strategies do differ from each other: the calculate-then-average strategy performs better than the average-then-calculate strategy, and starting from 40 meter resolution backscatter data provides the best results for both higher and lower resolution soil moisture data. The RMSE values seem high, but this is mainly due to a large bias in the results.
It can be concluded that high resolution satellite data can be more valuable than low resolution satellite data when looking for patterns in soil moisture variability, despite the presence of speckle noise in high resolution data. Patterns can be described more accurately at high resolutions, and choosing high resolution data with slightly lower correlations can be useful in studies where spatial variation is important. Moreover, it was shown that the higher the backscatter data resolution, the higher the accuracy of the resulting soil moisture data, at both high and lower final product resolutions. This could have large implications in the field of remote sensing of soil moisture, since the average-then-calculate strategy is most often applied. To show how these two different strategies relate to the speckle noise limitation of SAR data, a synthetic experiment will be carried out with the MULESME algorithm.
References
van Hateren et al., ‘Ambiguous Agricultural Drought: Characterising Soil Moisture and Vegetation Droughts in Europe from Earth Observation’, Remote Sensing 13, no. 10 (2021): 1990.
Lee, ‘Speckle Suppression And Analysis For Synthetic Aperture Radar Images’, Optical Engineering 25, no. 5 (1986): 636–43.
Ma, Li, and McCabe, ‘Retrieval of High-Resolution Soil Moisture through Combination of Sentinel-1 and Sentinel-2 Data’, Remote Sensing 12, no. 14 (2020): 2303.
Mansourpour, Rajabi, and Blais, ‘Effects and Performance of Speckle Noise Reduction Filters on Active Radar and SAR Images’, in Proceedings of the ISPRS, vol. 36, 1 (ISPRS Conference, Ankara, Turkey, 2006), W41.
Oh, ‘Quantitative Retrieval of Soil Moisture Content and Surface Roughness from Multipolarized Radar Observations of Bare Soil Surfaces’, IEEE Transactions on Geoscience and Remote Sensing 42, no. 3 (2004): 596–601.
Peng et al., ‘A Roadmap for High-Resolution Satellite Soil Moisture Applications – Confronting Product Characteristics with User Requirements’, Remote Sensing of Environment 252 (2021): 112162.
Pulvirenti et al., ‘A Surface Soil Moisture Mapping Service at National (Italian) Scale Based on Sentinel-1 Data’, Environmental Modelling & Software 102 (2018): 13–28.
Satalino et al., ‘On the Accuracy of Soil Moisture Content Retrieved at Pixel, Segment or Field Scale, from Advanced-SAR Data: A Simulation Study’, in IGARSS 2004. 2004 IEEE International Geoscience and Remote Sensing Symposium, vol. 5, 2004, 3532–35 vol.5.
Teuling and Troch, ‘Improved Understanding of Soil Moisture Variability Dynamics’, Geophysical Research Letters 32, no. 5 (2005): L05404.
Vereecken et al., ‘On the Spatio-Temporal Dynamics of Soil Moisture at the Field Scale’, Journal of Hydrology, Determination of soil moisture: Measurements and theoretical approaches, 516 (2014): 76–96.
Satellite-measured surface soil moisture (SSM) has proved useful for improving understanding of the global water and energy cycles and strengthening land applications such as large scale hydrological modelling, numerical weather prediction (NWP), flood forecasting and drought monitoring and prediction. Despite the usefulness of existing SSM products, significant interest remains in improving the spatial resolution of SSM products to extend and facilitate applications such as mapping the impact of irrigation on local water budgets, assessing the impact of local SSM variability on atmospheric instability and improving NWP and hydrological modelling at regional scales (Peng et al., 2021).
Spaceborne Synthetic Aperture Radar (SAR) sensors are currently the most suitable systems to retrieve SSM at high spatial resolution over scales ranging from local to regional and continental. In particular, the European Radar Observatory Sentinel-1 (S-1), developed in the framework of the Copernicus programme, systematically provides C-band SAR imagery from two identical spacecraft (S-1 A & S-1 B) at high spatial and moderate temporal (6-day exact repeat cycle) resolution, with a sustained observation strategy for the coming decades which foresees first the S-1 C & S-1 D satellites from 2022 onwards and then the S-1 Next Generation satellites from 2028 onwards (Torres et al., 2020).
This paper presents a pre-operational SSM product, derived from VV & VH S-1 observations at 1 km resolution, and its validation status (Balenzano et al., 2021). The VH S-1 channel is used for the dynamic masking of vegetation, while the SSM retrieval is based on the VV S-1 observations. Only static information about land cover and soil texture is needed for SSM retrieval in addition to the S-1 backscatter. The SSM retrieval technique exploits the frequent revisit of S-1 to realize a time-series-based Short Term Change Detection (STCD) algorithm applicable to bare and vegetated areas dominated by soil-attenuated scattering. The product consists of an estimate of surface soil volumetric water content [m^3/m^3] and its uncertainty [m^3/m^3], both at 1 km.
An extensive validation study of the product has been conducted. The performance of the S-1 SSM product is estimated through a direct comparison of 1068 S-1 SSM images against in situ SSM measurements acquired by 167 ground stations located in Europe, America and Australia, over up to 4 years between January 2015 and December 2020, depending on the site. In the validation, an emphasis has been placed on addressing the spatial representativeness error (SRE) for S-1 SSM retrievals at 1 km. SRE arises from the spatial mismatch between the point-scale (~0.1 m) in situ measurements and the satellite estimates retrieved at resolutions of hundreds of meters (e.g., S-1 SSM). The impact of SRE on standard validation metrics, i.e., root mean square error (RMSE), Pearson correlation (R) and linear regression, is quantified and experimentally assessed using S-1 and ground SSM data collected over a dense hydrologic network (4-5 stations/km^2) located in Southern Italy.
The paper reports on the results of the validation activity, presents examples of the developed S-1 SSM product over various sites, and discusses possible integration with L-band SAR time series in view of the availability of multi-mission and multi-frequency SAR data such as those provided by S-1 and the forthcoming EU L-band Radar Observation System for Europe (ROSE-L) (Davidson et al., 2019).
Acknowledgment
This research has been supported by the Scientific Exploitation of Operational Missions (SEOM) program of the European Space Agency, through the project “Exploitation of S-1 for Surface Soil Moisture Retrieval at High Resolution (Exploit-S-1)” (contract ESA/AO/1-8306/15/I-NB) and the Italian Space Agency through the project “Use of multi-frequency SAR data for AGRIculture” (SARAGRI), contract ASI N. 2021-6-U.O.
References
Peng, J., et al. “A roadmap for high-resolution satellite soil moisture applications – confronting product characteristics with user requirements”, Remote Sens. Environ. 252, 112162, 2021.
Torres, R., Davidson, M.W.J., Geudtner, D., “Copernicus Sentinel Mission at C- and L-band: Current Status and Future Perspectives”, 2020 IEEE Int. Geosci. Remote Sens. Symp. 4055–4058, 2020.
Balenzano A., et al. “Sentinel-1 soil moisture at 1 km resolution: a validation study”, Remote Sens. Environ., 263, 112554, 2021.
Davidson, M.W.J., et al. “Copernicus L-band SAR Mission Requirements Document”, 2019.
The water cycle dictates the water availability in space and time through variability in its component parts: precipitation, runoff, evapotranspiration and storage change. Any change in one or more of these components may result in extremes of freshwater availability over land. Therefore, monitoring the water cycle components and ensuring that our observations are consistent with models and the integrated budget is essential for managing and understanding water resource issues. Closing the terrestrial water budget is, consequently, an important scientific goal and a Grand Challenge in hydrology.
The water budget equation is simply the balance of inputs and outputs: precipitation, runoff, evapotranspiration, and storage change, following the principle of conservation of mass. Our in-situ hydro-meteorological observation capability is limited, while satellite- and model-based products generally suffer from poor spatiotemporal resolution and high uncertainties, respectively. Therefore, the global water budget is yet to be closed comprehensively. Most water budget studies have been performed for a small region and/or with a small set of estimates for each component.
Here, we use several precipitation (CPC, CRU, GPCC, GPCP, GPM, MSWEP, TRMM, ERA5 Land, MERRA2), evapotranspiration (land surface models CLSM, Noah, VIC from GLDAS 2.0, 2.1, and 2.2; GLEAM, MOD16, SSEBop, FLUXCOM, ERA5 Land, MERRA2), and runoff (land surface models CLSM, Noah, VIC from GLDAS 2.0, 2.1, and 2.2; GRUN, ERA5 Land, MERRA2) datasets to obtain storage changes at the catchment scale and compare them against GRACE mass trends (Mascons and Spherical harmonic products) to assess the water budget for more than 180 river basins. With these datasets we were able to assess 1694 possible combinations, providing a comprehensive analysis of our ability to close the water budget with available hydro-meteorological datasets. Overall, we were able to close the water budget, with at least one combination, for more than 90% of the region under study. We also identify which datasets must be used to close the water budget in different regions and catchments based on climatic characteristics and other criteria.
Some of the results will likely be due to the cancellation of errors between different datasets in a given combination. To address this issue, we also assess the performance of every dataset across the globe with respect to other datasets of the same variable. We find that no particular combination performs consistently well over all basins, but GPCP precipitation, GLDAS CLSM evapotranspiration and GRUN runoff provided, overall, good water budget closure for many catchments. ERA5 Land and MERRA2 evapotranspiration products perform better in the polar regions, while evapotranspiration from GLDAS 2.2 yielded unrealistic values in most cold (snow-covered) regions.
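The budget assessment described above amounts to computing the storage change implied by P - ET - R for each dataset combination and comparing it with the GRACE-derived storage change; a minimal sketch follows, with placeholder basin-mean numbers that are not study data.

```python
import numpy as np

# Monthly basin means [mm/month]; placeholder values only.
precip = np.array([80.0, 120.0, 60.0, 40.0])
et     = np.array([50.0,  70.0, 65.0, 55.0])
runoff = np.array([20.0,  35.0, 15.0, 10.0])

ds_budget = precip - et - runoff                 # storage change implied by P - ET - R
ds_grace  = np.array([8.0, 12.0, -18.0, -22.0])  # GRACE-derived storage change (placeholder)

residual = ds_budget - ds_grace                  # non-closure of the water budget
print(residual, np.abs(residual).mean())
```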
Surface Soil Moisture (SM) plays a key role in the Earth's water cycle and many hydrological processes (Koster et al. 2004); it is essential for accurate weather forecasting (Drusch et al. 2007, De Rosnay et al. 2013, Rodriguez-Fernandez et al. 2019) and agricultural management (Guerif et al. 2000). SM was also identified as one of the 50 “Essential Climate Variables” (ECVs) by the Global Climate Observing System (GCOS) in the context of the United Nations Framework Convention on Climate Change (UNFCCC) (GCOS 2015). Long time series of ECVs are crucial for monitoring the Earth’s climate evolution, and this is the goal of initiatives such as the European Space Agency’s Climate Change Initiative (ESA CCI, https://climate.esa.int/en/).
The ESA CCI SM dataset (Gruber et al. 2019) provides time series for the 1979-2021 period on a 25 km resolution grid using scatterometers and passive microwave sensors. Based on extensive feedback from the user communities of SM products, a strong need for higher spatial resolution SM data was identified (Dorigo et al. 2018, Peng et al. 2020). This includes climate applications such as the assessment of climate change impacts at the regional level.
Soil moisture can also be estimated at high spatial resolution using Synthetic Aperture Radars such as Sentinel-1 (S1). Even if Sentinel high resolution time series are still short for climate applications (Sentinel-1A was launched in 2014 and Sentinel-1B in 2016), it is worthwhile to evaluate the interest of such a dataset in the context of the ESA CCI, both as an additional high resolution SM dataset and for comparison with high resolution SM datasets that could be obtained by downscaling coarser resolution sensors.
In this contribution, SM maps at 1 km resolution produced using the S²MP (Sentinel-1/2 Soil Moisture Product) algorithm (El Hajj et al. 2017) are presented. The maps cover six 100 x 100 km2 regions over the South West and South East of France, Tunisia, North America, Spain and Australia. The S²MP algorithm is based on a neural network approach and exploits the synergistic use of S1 and Sentinel-2 (S2). Backscattering coefficients and incidence angles from S1, as well as NDVI from S2, are used as input data. In the framework of this study, the algorithm was also adapted to use NDVI from Sentinel-3 (S3) instead of S2.
Both the S1+S2 and S1+S3 1 km SM maps are compared to other high resolution SM datasets such as the SM and Soil Water Index (SWI) computed from S1 for the Copernicus Global Land Service, and the SMAP + S1 product. The S1+S2 and S1+S3 SM maps are in very good agreement in terms of correlation (R > 0.9), bias (< 0.05 m3.m-3) and standard deviation of the difference (STDD < 0.025 m3.m-3) over the 6 regions of study. They are also well correlated (R ~ 0.6-0.7) with the Copernicus products over cropland and herbaceous vegetation land cover classes. However, the results are more mixed over Tunisia and when the maps are compared to those of SMAP + S1. The correlation decreases significantly for mixed land cover pixels.
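For reference, the agreement metrics quoted above (correlation R, bias and standard deviation of the difference, STDD) can be computed from two collocated soil moisture series as in the short sketch below; the definitions are the common ones and are assumed rather than taken from the study.

# Minimal sketch, assuming two collocated soil moisture series in m3/m3.
import numpy as np

def agreement_metrics(sm_a, sm_b):
    sm_a, sm_b = np.asarray(sm_a, float), np.asarray(sm_b, float)
    diff = sm_a - sm_b
    r = np.corrcoef(sm_a, sm_b)[0, 1]   # Pearson correlation R
    bias = diff.mean()                  # mean difference
    stdd = diff.std(ddof=1)             # standard deviation of the difference
    return r, bias, stdd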
All the high-resolution products were also evaluated against in situ measurements along with coarse scale SM datasets (SMAP, SMOS, ESA CCI). The coarse resolution SM products show better correlation than the high resolution products, except for the Copernicus SWI. However, the high resolution datasets, in particular the S1+S2 SM product, show a lower STDD and bias than the coarse resolution datasets.
References:
- Dorigo, W., Wagner, W., Albergel, C., Albrecht, F., Balsamo, G., Brocca, L., Chung, D., Ertl, M., Forkel, M., Gruber, A., et al. ESA CCI Soil Moisture for improved Earth system understanding: State-of-the-art and future directions (2017). Remote Sensing of Environment, 203, 185–215.
- Drusch, M. Initializing numerical weather prediction models with satellite-derived surface soil moisture: Data assimilation experiments with ECMWF’s Integrated Forecast System and the TMI soil moisture data set (2007). Journal of Geophysical Research: Atmospheres, 112, D3.
- De Rosnay, P., Drusch, M., Vasiljevic, D., Balsamo, G., Albergel, C., Isaksen, L. A simplified Extended Kalman Filter for the global operational soil moisture analysis at ECMWF (2013). Quarterly Journal of the Royal Meteorological Society, 139, 1199–1213.
- El Hajj, M., Baghdadi, N., Zribi, M., Bazzi, H. Synergic use of Sentinel-1 and Sentinel-2 images for operational soil moisture mapping at high spatial resolution over agricultural areas (2017). Remote Sensing, 9, 1292, doi:10.3390/rs9121292.
- GCOS. Status of the Global Observing System for Climate, Global Climate Observing System, World Meteorological Organization, Report 195, Tech. Rep., 2015.
- Gruber, A., Scanlon, T., Van der Schalie, R., Wagner, W., Dorigo, W. Evolution of the ESA CCI Soil Moisture climate data records and their underlying merging methodology (2019). Earth System Science Data, 11, 717–739.
- Guerif and Duke. Adjustment procedures of a crop model to the site specific characteristics of soil and crop using remote sensing data assimilation (2000). Agriculture, Ecosystems & Environment, 81, 57–69, doi:10.1016/S0167-8809(00)00168-7.
- Kerr, Y. H., Waldteufel, P., Wigneron, J. P., Martinuzzi, J., Font, J., Berger, M. Soil moisture retrieval from space: the Soil Moisture and Ocean Salinity (SMOS) mission (2001). IEEE Transactions on Geoscience and Remote Sensing, 39, 1729–1735, doi:10.1109/36.942551.
- Koster, R. D., Dirmeyer, P. A., Guo, Z., et al. Regions of strong coupling between soil moisture and precipitation (2004). Science, 305, 1138–1140.
- Peng, J., Albergel, C., Balenzano, A., Brocca, L., Cartus, O., Cosh, M. H., ... & Loew, A. A roadmap for high-resolution satellite soil moisture applications – confronting product characteristics with user requirements (2021). Remote Sensing of Environment, 252, 112162.
- Rodríguez-Fernández, N., de Rosnay, P., Albergel, C., Richaume, P., Aires, F., Prigent, C., & Kerr, Y. SMOS neural network soil moisture data assimilation in a land surface model and atmospheric impact (2019). Remote Sensing, 11(11), 1334.
Two-source energy-balance modelling is a valid solution for heterogeneous fruit-tree agricultural areas. In particular, when working with distributed data such as those from EO, being able to recognize intra-pixel heterogeneity independently of the data spatial resolution is a crucial step in describing the energy fluxes involved in the water cycle. While most evapotranspiration models require remotely-sensed Land Surface Temperature (LST) as an input, FEST-EWB computes it as an internal variable, using LST only for calibration purposes. In this work, we have developed a two-source version of FEST-EWB, called FEST-2-EWB, characterized by three main advantages: the possibility of performing continuous simulations without depending on EO data availability; the capacity to “disaggregate”, energetically speaking, heterogeneous pixel-wise data, producing separate crop and soil latent heat fluxes; and the ability to estimate soil moisture continuously in time. We tested this new model over several agricultural test cases (both homogeneous and heterogeneous), demonstrating its higher potential for the representation of energy fluxes over heterogeneous agricultural areas. High resolution (< 10 m) proximal-sensing data of LST and vegetation indices are used.
This is a further demonstration of the utility of high-resolution EO data for improving ET estimates and, in turn, irrigation management.
Irrigation is an anthropogenic process recognized to be the principal reason for water withdrawals, globally accounting for 70% of total withdrawals (Gleick et al. 2009, Postel et al. 1996). Monitoring and understanding irrigation practices are fundamental in order to control and mitigate water demand, which is expected to increase particularly in semi-arid areas such as the Mediterranean region, where irrigation accounts for between 50 and 90% of total water demand and is projected to increase by between 4 and 18% by the end of the century (MedECC, 2020). While different studies have already addressed the detection of irrigated areas, there is still no research on classifying different types of irrigation (e.g. flood irrigation, sprinklers or drip irrigation) based on remotely sensed data. This information has a critical scientific value, since detailed information on irrigation greatly improves the modelling and understanding of the human impact on the water cycle, but it is also useful at the administrative level in order to monitor changes and optimize irrigation practices. In this context, we propose a novel approach for classifying different types of irrigation techniques from high-resolution remotely sensed data, through the use of a supervised AI model for time-series classification. Ground-truth data were collected during a field campaign in November 2020 in an intensely cultivated region in Catalunya, Spain. Around 300 fields inside a total area of around 80 km x 85 km were classified using 4 labels for the different irrigation techniques: sprinkler, flood, drip/subsurface and non-irrigated. Multiple AI models were tested using time series of fundamental hydrological variables obtained from remote sensing missions. The classification was performed using time series from three different years in order to train the models with a more general and robust dataset, independent of the specific meteorological conditions of a single year. The main finding of the research was that two hydrological variables modelled from satellite data, actual evapotranspiration (ET) and surface soil moisture (SSM), showed the best accuracy when used for classification: ET yielded an accuracy of 83.77 +/- 2.27%, SSM 81.50 +/- 2.38%, and the two combined 85.29 +/- 2.47%. A second finding was that accuracy greatly improved when using modelled data at very high spatial resolution (20 m) with respect to raw satellite data from Sentinel-2, Sentinel-3 and SMAP. Finally, all the AI models employed for the classification were able to distinguish different types of irrigation, regardless of the different types of crops or trees present in each field. As a final result, this method allows creating yearly irrigation-type maps at field level and for large areas, delivering detailed information on the status and evolution of irrigation practices.
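The abstract does not specify which AI models were tested; purely as an illustration of supervised time-series classification from per-field ET and SSM series, a generic sketch with a placeholder random-forest classifier and synthetic data could look like the following.

# Illustrative sketch only: placeholder data and classifier, not the study's models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_fields, n_steps = 300, 36                     # placeholder sizes only
et_series = rng.random((n_fields, n_steps))     # placeholder per-field ET time series
ssm_series = rng.random((n_fields, n_steps))    # placeholder per-field SSM time series
labels = rng.integers(0, 4, n_fields)           # 4 classes: sprinkler, flood, drip, non-irrigated

X = np.hstack([et_series, ssm_series])          # concatenate ET and SSM features per field
clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)  # mean accuracy with spread, cf. values above
print(f"accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")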
References:
Gleick, P. H., Cooley, H. & Morikawa, M. The World’s Water 2008–2009: The Biennial Report on Freshwater Resources (eds Gleick, P. H. et al.) 202–210 (Island Press, 2009).
Postel, S. L., Daily, G. C. & Ehrlich, P. R. Human appropriation of renewable fresh water. Science 271, 785–788 (1996).
MedECC (2020) Climate and Environmental Change in the Mediterranean Basin – Current Situation and Risks for the Future. First Mediterranean Assessment Report [Cramer, W., Guiot, J., Marini, K. (eds.)] Union for the Mediterranean, Plan Bleu, UNEP/MAP, Marseille, France, 632pp.
ABSTRACT
The measurement of rainfall is an important input for several operational and scientific applications, including weather forecasting, hazard prevention and agriculture. Weather radars, such as NEXRAD, observe the reflectivity of an air volume and derive the precipitation intensity at high resolution. However, their capabilities are limited over the ocean. C-band SAR imagery, being sensitive to the ocean surface roughness, is known to be sensitive to the effect of rain. In this study, we enhance existing NEXRAD/Sentinel-1 colocalizations and train a U-Net deep learning model to estimate the reflectivity of the NEXRAD radars from the Sentinel-1 observations. Rainfall predictions are returned as segmentations with thresholds at 1, 3 and 10 mm/h. Results indicate high performance over a wide range of wind speeds and can therefore provide accurate rain estimation in the absence of weather radars.
Index Terms— SAR, Sentinel-1, NEXRAD, ocean
1. INTRODUCTION
Remote sensing of precipitation is of interest for a wide range of applications, from weather forecasting to agriculture. Precipitation can be measured either by spaceborne satellites (GPM, TRMM, ...) or by ground weather radars. Weather radars are of particular interest as their fixed positions simplify the colocalization with other instruments, but they have a short range and are affected by the topography. Synthetic Aperture Radars (SAR) are spaceborne imaging systems able to measure the sea surface roughness at high resolution. Among these instruments, Sentinel-1 A and B, operated by the European Space Agency, have been routinely acquiring data since 2014 and 2016 respectively. [1] recently proved that it is possible to apply Koch’s filters [2], usually used to detect heterogeneous areas in SAR observations, to detect the presence of rain, by relying on a colocalized Sentinel-1/NEXRAD dataset. In this study, we further enhance the dataset proposed in [1], adapt the Koch’s filters to a multiclass segmentation, and train a deep learning model. The resulting model is able to accurately detect the different rainfall regimes, outperforming the Koch’s filters, though it remains sensitive to the wind speed.
2. DATA
The dataset is composed of both SAR observations and weather radar measurements, acting respectively as the inputs and the output ground truths. SAR observations were acquired in the context of the Sentinel-1 mission, composed of two satellites, Sentinel-1A and Sentinel-1B, whose instruments routinely acquire C-band observations (5.4 GHz). More precisely, we use Interferometric Wide Swath (IW) Ground Range Detected High Resolution (GRDHR) products. These observations, with a spatial resolution of 20x22 m, extend over approximately 250 km in range and a few hundred kilometres in azimuth. The weather radar measurements are obtained from NEXRAD, a network of Doppler weather radars operating between 2.7 and 3 GHz. The resolution is 1 km in range and 1° in azimuth. The initial colocalizations were performed by [1]. Colocalized IW are divided into patches of 256x256 pixels and downscaled to 100 m/px. Patches are removed if the colocalized NEXRAD measurement does not indicate precipitation of more than 1 mm/h in any part of the area. This step ensures that occlusions by the topography or overestimation of NEXRAD’s range do not introduce undetected rain patches. Once extracted, the patches are inspected to manually ensure the overlap between the rain signature on the SAR observation and NEXRAD’s measurement.
The manual correction of the colocalizations highlights that the positional error increases with the distance to the radar (R² = 40.4%). On the contrary, neither the direction to NEXRAD’s ground stations, the wind speed nor its direction (obtained from ECMWF’s ERA5 model) correlates with the positional error vector. After cleaning the patches, the data are divided into training (79.5%), validation (9.6%) and testing (10.9%) subsets. The dataset is divided at the IW level to remove all data leaks and balanced at the pixel level to have the same distribution of both rainfall and wind speed in each subset. The number of IW for the training, validation and test subsets is 39, 7 and 7, respectively. The total number of patches in the dataset is 1570. Finally, the output ground truths are thresholded to provide rainfall segmentations for the intervals [1, +∞], [3, +∞] and [10, +∞] mm/h. A secondary dataset containing Sentinel-1 observations colocalized with the Geostationary Lightning Mapper (GLM) is also built. Though it does not contain information on rainfall, lightning is known to be closely related to precipitation [3, 4]. Also, as GLM covers the whole Western hemisphere with continuous observations, it is possible to obtain a large number of colocalizations (189 IW, whereas only 7 were present in the testing subset). These colocalizations are used to evaluate the impact of both the wind speed and the incidence angle.
3. METHODS
Koch’s filters [2] are a standard filtering method used in SAR imagery. They consist of four different high-pass subfilters that detect heterogeneous areas and indicate phenomena unrelated to the wind. [1] proved that it is possible to specifically detect rain by optimizing the thresholds of the Koch’s filters on a NEXRAD/Sentinel-1 dataset. To be able to compare the Koch’s filters in the context of multiclass segmentation, we fine-tune their parameters on the enhanced NEXRAD/Sentinel-1 colocalizations, training new filters for each rainfall regime. The deep learning model uses the U-Net architecture [5]. It has been applied with success for semantic segmentation [6] and sea ice concentration estimation [7]. Our model contains three convolution blocks (each containing three convolutional layers activated by ReLU) of respectively 32, 64 and 128 3x3 kernels, followed by 2x2 max pooling layers. The central part is similarly composed of convolutional layers with 256 3x3 kernels. The upsampling part of the network is the symmetric of the downsampling part, as is common with U-Net architectures. The output layer is a convolutional layer with three 1x1 kernels, activated by a sigmoid function. To compare with the Koch filters, originally designed for binary segmentation, each rain rate threshold is considered as the threshold of a binary segmentation to compute an F1-score. The F1-score is also given in the multiclass framework to compare the fine-tuned Koch’s filters with the deep learning model. In both cases, the F1-score is defined as the harmonic mean of the precision and the recall. The precision (resp. recall) is the mean diagonal value of the column-normalized (resp. row-normalized) confusion matrix.
4. RESULTS
The results are compiled in Table 1. They are given with the standard deviation over five trainings. The table indicates that the U-Net architecture outperforms both the binary Koch filters and the fine-tuned ones. Best results were obtained when working with inputs at 400 m/px. Figure 1 shows the result of the segmentation on a whole IW.
To obtain this segmentation, the IW is divided into overlapping tiles of 20x20 km, which are segmented by the model and fused to retrieve the whole IW. This process takes approximately 15 seconds per IW on a GTX 1050 Ti. The figure shows agreement between the NEXRAD measurement and the prediction. The colocalizations with GLM allow evaluating the F1-score of the deep learning method against the binary lightning map, acting as a proxy for the precipitation. Figure 2 indicates that the model performs better at higher incidence angles. The influence of the incidence angle appears to be more important for stronger precipitation, especially for rain rates higher than 10 mm/h. The threshold at 1 mm/h, on the contrary, is not affected by the incidence angle. Higher wind speeds also decrease the performance of the model. This is caused by the lack of data in high wind intervals (81.4% of the pixels in the training subset are at less than 8 m/s, and 98.4% are at less than 12 m/s). The decrease in performance is also explained by the direct effect of the wind: as both the wind and the rain increase the sea surface roughness, they negatively impact the SAR signature of the co-occurring phenomenon. It can be noted that the deep learning model obtains especially high scores at low wind speed. On the contrary, the binary Koch’s filters underperform at wind speeds lower than 5 m/s.
5. CONCLUSIONS AND PERSPECTIVES
The deep learning model trained on the enhanced version of the Sentinel-1/NEXRAD dataset is able to segment whole Interferometric Wide Swaths and retrieve the rainfall among four different regimes: [0, 1[, [1, 3[, [3, 10[ and [10, +∞] mm/h. It outperforms existing methods even when the state of the art is adapted to this multiclass segmentation problem and fine-tuned on this dataset. The qualitative assessment of IW segmentations highlights its value in areas where the weather radar is absent or too distant to provide accurate measurements. In particular, the deep learning model obtains its best performance at low wind speed, contrary to the Koch’s filters. The influence of the wind speed, and the weaker detection of rain events higher than 10 mm/h at low incidence angles, indicate that future work should concentrate on integrating these parameters as priors in the network. Additional colocalizations, especially at higher wind speeds, could further improve the segmentation.
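As an illustration of the architecture described in the Methods section, the sketch below builds a comparable U-Net in Keras. Only the block sizes (three ReLU convolution blocks of 32/64/128 3x3 kernels with 2x2 max pooling, a 256-kernel central part, a symmetric decoder and a three-channel sigmoid output) follow the text; the input shape and single input channel are assumptions.

# Sketch under stated assumptions; not the authors' exact implementation.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    for _ in range(3):                      # three convolutional layers per block, ReLU activated
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1)):  # assumed patch size and single sigma0 channel
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    for filters in (32, 64, 128):           # encoder blocks with 2x2 max pooling
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 256)                  # central part
    for filters, skip in zip((128, 64, 32), reversed(skips)):  # symmetric decoder
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, filters)
    outputs = layers.Conv2D(3, 1, activation="sigmoid")(x)     # one channel per rain threshold
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")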
InSAR (Interferometric Synthetic Aperture Radar), a widely used method for monitoring ground deformation, measures phases of reflected radar signals. Some scattering phenomena produce phase signatures with no accompanying surface deformation. These can be viewed as nuisance terms, but more importantly they form signals that can be exploited to understand changes in water content and distribution, whether the water is in the form of soil moisture or stored in the living material of vegetation canopies. Here we present a model of complex radar backscatter, plus derived interferograms and radar triplet phase closure measurements, that relates the signal and product phases to interference of echoes from a surface and subsurface. These non-deformational phases result from changes in the dielectric constant of the medium lying between the surface scattering centers and those at depth.
Our models yield the layer dielectric constant as a function of moisture. We can select moisture profiles as a function of time and then compute interferogram and triplet closure phases predicted for InSAR observations. The phase of the expectation of the interferogram in these cases is not related to any surface deformation; rather, it results from interference of the surface and subsurface echoes and is often interpreted as a nuisance phase to be eliminated when mapping of deformation is the goal. The interference is terrain dependent and diagnostic of the dielectric constant of a lossy medium and the depth of any subsurface scatterers.
We further compare our predictions to observations of phase closure in Sentinel-1 radar data. We illustrate two physical cases where nonzero phase closure arises as a result of scattering with a lower layer: first a subsurface layer under moist soil, and second interference between signals scattered from a vegetation canopy and the ground surface beneath. While we have not yet been able to produce a reliable inverse model, forward mapping shows that plausible values of moisture content reproduce the predictions from our method. We select moisture profiles as a function of time, compute the interferogram and triplet closure phases predicted for InSAR observations, and compare them to measurements. We conclude that we may expect phase closure signals over both moisture-variable soils and vegetation canopies to be readily observed in InSAR data.
Water managers around the world are constantly challenged to make decisions about water resources distribution and mitigation of floods and droughts in a highly dynamic and rapidly changing environment. Near-real time monitoring of the main hydrological fluxes in the water system is key for understanding its current state and dynamics and can help to support, optimize and adjust critical decisions in operational water management. Most crucial fluxes, such as precipitation, are routinely measured at the national scale and readily available for the spatial domain of the water manager. In contrast, information about actual evaporation is less frequently available, although it can greatly contribute to optimizing water resources management. The spatial resolution of actual evaporation data is often too low for practical use and many water managers rely on crude methods like potential evaporation in combination with crop factors. This is far from ideal, especially under the critical circumstances that occur during climatological extremes.
Martens et al. (2018) described the application of the Global Land Evaporation Amsterdam Model (GLEAM) to estimate high resolution (HR) evaporation over The Netherlands. In 2020, an operational version of GLEAM-HR was implemented, which provides daily actual evaporation at 100m resolution to support the Dutch water boards in their daily decision-making. The system includes a two-day forecast based on meteorological outlook from numerical weather predictions. With a three-month lag, a full reanalysis calculation is made to support policy making on past data. The system uses high resolution vegetation optical depth (VOD) to refine the vegetation stress component and data assimilation of L-band passive microwave satellite soil moisture to update the soil water status. Furthermore, a sophisticated open water evaporation module was added that includes assimilation of lake IJssel temperatures to estimate evaporation fluxes from the largest freshwater reservoir in The Netherlands.
Results show a high correlation between modelled and observed actual evaporation from in-situ eddy covariance stations in The Netherlands (Pearson’s R ranging from 0.75 for forest sites to 0.90 for grasslands) and for lake IJssel evaporation at Stavoren (Pearson's R = 0.85). Further validation was performed using data from a network of in-situ soil moisture sensors, yielding correlations of > 0.8 for most sensors. For all variables, the data assimilation of satellite soil moisture improved the model performance significantly, most notably during dry spells.
Further improvements to the system are planned, including sequential data assimilation of satellite soil moisture data from multiple sources, an improved representation of the rainfall interception process and the implementation of a groundwater component to better estimate evaporation from wetlands and other shallow-groundwater sites.
Martens, B.; De Jeu, R.A.M.; Verhoest, N.E.C.; Schuurmans, H.; Kleijer, J.; Miralles, D.G. Towards Estimating Land Evaporation at Field Scales Using GLEAM. Remote Sens. 2018, 10, 1720. https://doi.org/10.3390/rs10111720
Water reservoirs play an important role in relation to water security, flood risk, agricultural production, hydropower and hydropower potential, and environmental flows. However, long-term daily information on reservoir volume, inflow and outflow dynamics is currently not publicly available in (near) real-time. Using cloud computing infrastructure and the high resolution distributed hydrological model wflow_sbm, we present a modelling approach to simulate historical daily reservoir variations for 3236 headwater reservoirs across the globe for the period 1970-2020. The results derived with the wflow_sbm model, forced with various forcing sources based on observations and reanalyses (EOBS, CHIRPS, NLDAS, BOM, ERA5), are compared with: 1) measured discharge observations, 2) in situ reservoir elevation and volume measurements, and 3) volume estimates derived using satellite observations. Overall, good agreement between the hydrological model and the different measurement sources is observed, although considerable variations do occur. The analysis also enables assessment of long-term changes in global reservoir dynamics.
Water accounting is an important methodology that can provide users and decision makers with valuable information on water flows and water availability in a given study area. The Water Accounting Plus (WA+) framework was developed through a partnership between IHE Delft, the Food and Agriculture Organization (FAO), and the International Water Management Institute (IWMI). It typically relies on remote sensing data that are openly accessible, in order to make water accounting analysis feasible in data-scarce regions. The Water Productivity Open-access Portal (WaPOR) is particularly used in water accounting. This remote sensing dataset offers spatially-distributed information on water fluxes, including precipitation, evaporation, interception, transpiration, reference evapotranspiration, and actual evapotranspiration and interception, as well as crop information such as biomass water productivity. WaPOR data, currently available for Africa and the Middle East, cover the period from 2009 to date at daily, decadal, monthly, and yearly timesteps and at three levels of spatial resolution for sub-national, national, and continental scales, at 30 m, 100 m, and 250 m respectively. Research focused on Africa concluded that WaPOR was among the most accurate remote sensing products for long-term ET estimation. In this study, the WA+ framework was applied to the Mara transboundary catchment and the Kikuletwa catchment by utilizing outputs from a hydrological model built on SWAT+. This approach to water accounting using a hydrological model is interesting since it offers the possibility to simulate future water availability under projected climate change or land-use change scenarios. In this analysis, remote sensing datasets such as WaPOR, CHIRPS, EWEMBI, as well as data from the FAO, ESA and NASA (SRTM), were used as inputs to the SWAT+ model. The Resource Base and Evapotranspiration water accounting sheets were generated for the Mara river basin (Kenya and Tanzania) and the Kikuletwa basin (Tanzania). These results provide information on water availability and water use in the Mara and Kikuletwa catchments.
Increased industrial development in the Arctic has led to a rapid expansion of infrastructure in the region. Past research shows that infrastructure in the form of roads, pipelines and various building types impacts the surrounding landscape directly and indirectly by changing vegetation patterns, locally increasing ground temperatures, changing local hydrology, introducing road dust into the natural environment, and affecting the distribution and timing of seasonal snow cover. Localized impacts of infrastructure on snow distribution and snowmelt timing and duration feed back into the coupled Arctic system, causing a series of cascading effects that remain poorly understood. In this study, we quantify spatial and temporal patterns of snow-free timing in the Prudhoe Bay Oilfield (PBO), North Slope, Alaska, using multispectral remote sensing data from the Sentinel-2 satellite constellation. Using the Normalized Difference Snow Index (NDSI) we quantify the presence and absence of snow on a pixel-by-pixel basis for the years 2019 and 2020 and derive the last day per year that snow was present for any given pixel in the study area. Additional indices, like the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI), were derived to understand linkages of patterns in vegetation productivity and surface hydrology, respectively, to patterns in snow-off timing. Regarding surface hydrology, we specifically focused on early season surface water distribution. In addition to the linkages of snow melt, vegetation and hydrology, the relationship to infrastructure was of special interest. Recently published infrastructure datasets derived from Sentinel-1 and 2 data were used to quantify differences in snowmelt patterns in relation to distance to roads and other types of infrastructure. Results from our regional remote sensing analysis show a relationship between snow-off date and distance to different types of infrastructure that varies with their use and traffic load during the snowmelt period, as well as with their orientation relative to the prevailing wind direction. Post-snowmelt surface water area showed a strong correlation with distance to the nearest infrastructure. Results from field data observations indicate an impact of infrastructure on winter ground surface temperature and snow depth. The availability of high-resolution satellite imagery as well as high resolution infrastructure data proved to be essential in quantifying spatial and temporal patterns of snow melt timing in relation to infrastructure elements. This study highlights the impact of infrastructure on a large area extending past the direct human footprint, as well as the interconnectedness of the different studied variables.
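For context, the per-pixel snow test behind such snow-off timing maps is typically an NDSI threshold on Sentinel-2 green and SWIR reflectances; the sketch below uses the common 0.4 threshold, which is an assumption rather than the threshold used in this study.

# Minimal sketch, assuming Sentinel-2 band 3 (green) and band 11 (SWIR) reflectances.
import numpy as np

def ndsi(green, swir):
    green, swir = np.asarray(green, float), np.asarray(swir, float)
    return (green - swir) / (green + swir)

def snow_mask(green, swir, threshold=0.4):
    # Pixels above the NDSI threshold are flagged as snow-covered.
    return ndsi(green, swir) > threshold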
Global surface soil moisture (SSM) datasets derived from satellite observations in the microwave domain are nowadays available at a variety of resolutions and cover a variety of periods. Most of these records only incorporate measurements from a single satellite (e.g. from SMOS) and are therefore limited by its lifespan. However, there are ways to generate long-term data records by harmonising the observations of multiple (active and passive) satellite sensors (e.g. ESA CCI SM). The Copernicus Climate Change Service (C3S) has adapted these methods and currently provides a more than 40-year long SSM record with a delay of only 10-20 days.
However, satellite-based soil moisture observations are limited to the topmost layer of the soil (0-5 cm) and do not provide information on water content in the root-zone (0-100 cm). Root-zone soil moisture (RZSM) controls evapotranspiration and is a key parameter for closing the water cycle, studying hydrological processes as well as drought monitoring and forecasting. While at present no satellite mission is capable of measuring RZSM directly, it can be approximated from SSM using land surface models.
Alternatively, conditions in the root-zone can also be propagated from SSM using the Soil Water Index (SWI), a simplified two-layer infiltration model, by smoothing and delaying the SSM temporal dynamics with an exponential filter. The method’s single parameter T (the temporal length ruling the infiltration) is assumed to represent all environmental and climatic factors controlling the soil moisture temporal dynamics. While it has been demonstrated in a selection of studies that T generally increases with soil depth, the influence of other processes remains contested.
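For reference, the exponential filter is commonly implemented in its recursive form, as in the sketch below (T in days is the characteristic time length discussed above); this is the standard formulation, not necessarily the exact implementation used here.

# Minimal sketch of the recursive exponential filter for the SWI.
import numpy as np

def swi_exponential_filter(times, ssm, T):
    # times: observation times in days; ssm: surface soil moisture series; T: time length in days.
    ssm = np.asarray(ssm, float)
    swi = np.empty_like(ssm)
    swi[0], gain = ssm[0], 1.0
    for i in range(1, len(ssm)):
        gain = gain / (gain + np.exp(-(times[i] - times[i - 1]) / T))
        swi[i] = swi[i - 1] + gain * (ssm[i] - swi[i - 1])
    return swi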
Here we introduce the advances in our work on an operationally capable, depth-translated and error-characterized 0-100 cm soil moisture dataset based on the C3S Soil Moisture v202012 COMBINED product. The uniqueness of this global dataset lies in the fact that the SWI characteristic T-values were translated into particular soil depths based on correlation metrics with in situ measurements. This makes the data much more intuitive, easily applicable and user-friendly compared to SWI products for various T-values. Available are volumetric soil moisture values for the top 1 m of the soil profile at 10 cm intervals, with uncertainty estimates obtained with an exponential-filter-adapted error-propagation technique.
Additionally, we present the results of an extensive global validation against in situ measurements from the International Soil Moisture Network (ISMN) and modelled SM from ERA5-Land.
Seasonal changes of temperature and precipitation cause inland open surface water and ice cover extents to vary dramatically through the year on local to global scales. These dynamics of land, water, and ice have a significant impact on climate and often are critical to natural ecosystem functioning. However, global seasonal dynamics of both water and ice extent have not been well quantified. Here, we present the quantification of monthly surface water and ice areas for 2019 with associated uncertainties. Time-series reference data were created for a stratified probability sample of 10 m grid cells by interpreting the entire 2019 time-series of 10 m Sentinel-2 data and a subset of 3 m PlanetScope data in selected places with a mix of land and water. From the probability sample reference data, we estimate that 4.86 ±0.16 million km2 had inland water presence at some point during the year. Globally, only 23% of the total area with water was permanent water that remained open year-round (1.13 ± 0.19 million km2). Permanent water with seasonal ice cover extended 1.97 ± 0.21 million km2, comprising 41% of the total area with water. Seasonal water-land transitions (both with and without ice/snow cover) covered the remaining 36% of the total area with water (1.76 ± 0.19 million km2). February had the maximum extent of ice over areas of inland permanent and seasonal water, totaling 2.49 ± 0.25 million km2, and January – March had a larger global extent of ice cover than of open water. Additionally, previous methods of accounting for year-round water that ignore ice presence result in estimates 2.5 times larger than the area of year-round open water. To investigate the spatiotemporal distribution of ice cover and the suitability of Landsat, prototype maps of surface water ice cover phenology were created by integrating the ice/snow and no data labels from the quality assurance layer of the GLAD ARD of Potapov et al. (2020) with the 2019 surface water layers of Pickens et al. (2020), both of which are Landsat-based. While limited by data availability, these maps reveal high spatiotemporal variability of ice phenology, with multiple-month disparities in ice-off for neighboring lakes of different sizes. As the timing and duration of ice cover have substantial implications for emissions and other climate feedbacks, improved maps will be able to improve climate predictions by enabling better models of the relationships between ice phenology, water body size and type, and climate variables. With near-daily observations near the poles and 10 m resolution bands, Sentinel-2 provides unprecedented potential to examine surface water and ice dynamics for 2016 forward and to investigate the drivers and impacts of this variability.
Abstract
Drought monitoring and prediction are key steps in drought management, requiring appropriate indicators to be defined by which different types of drought can be identified. Meteorological, agricultural and hydrological drought indicators are available to characterize different types of droughts. The goal of this research is to find the relationship between meteorological drought (based on rainfall deficit and the length of the dry period) and hydrological drought (the impact of rainfall deficits on groundwater table decline). The Standardized Precipitation Index (SPI) is one of the most widely used indicators of meteorological drought. The Groundwater Resource Index (GRI), a reliable tool in a multi-analysis approach for monitoring and forecasting drought conditions, is used here as the hydrological drought indicator. Groundwater drought results from the decrease in groundwater resources such as recharge, storage and discharge. In this study, drought status and its impact on groundwater resources were investigated in the Brojerd-Drood Plain, located in Lorestan Province in west-central Iran, using the SPI at 1-, 3-, 6-, 9-, 12-, 18-, 24-, 36- and 48-month time scales and the GRI over the 2001-2018 period. For the SPI, the rainfall records of both cities were used, and for the GRI, 30 observation piezometers. After cleaning the data, the SPI and GRI were calculated in a GIS. Trend analysis and de-trending were applied in the data analysis to remove the effect of seasonality. Statistical analysis showed that the 24-month SPI has the highest correlation coefficient (about 0.70) with both the GRI and the groundwater level (GWL). During drought conditions the SPI and GRI values could reach −2, while in wet conditions they could reach almost 3. The regression model showed that up to 70 percent of the variation in the GRI is explained by the SPI; the remainder is linked to other factors, such as uncontrolled exploitation of groundwater resources, which affect groundwater loss and hence the groundwater index. The maps also showed a decline in groundwater levels across the plain of 20 meters on average, with rises in groundwater levels of around 4 meters in places.
Key words: Drought, Standardized Precipitation Index (SPI), Groundwater Resource Index (GRI)
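As a simplified illustration of relating a multi-scale precipitation index to the GRI, the sketch below standardizes precipitation accumulated over different windows (a plain z-score is used in place of the gamma-based SPI fitting) and selects the time scale with the highest correlation; the inputs are assumed to be monthly pandas series.

# Simplified sketch only; the true SPI uses a fitted gamma distribution.
import pandas as pd

def standardized_index(monthly_series, window_months):
    # Accumulate over the window, then standardize (z-score approximation of the SPI).
    acc = monthly_series.rolling(window_months).sum()
    return (acc - acc.mean()) / acc.std()

def best_spi_scale(precip, gri, scales=(1, 3, 6, 9, 12, 18, 24, 36, 48)):
    # precip, gri: monthly pandas Series on the same date index.
    corrs = {n: standardized_index(precip, n).corr(gri) for n in scales}
    return max(corrs, key=lambda n: corrs[n]), corrs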
The LSA SAF evapotranspiration (ET) product has been operational since 2009, generating instantaneous (every 30 minutes) and daily evapotranspiration estimates over the MSG field of view, including Europe, Africa and a part of South America (http://lsa-saf.eumetsat.int). Based on the continuous research carried out by the development team over the years, various uncertain parameters have been improved (mainly dealing with the impact of vegetation characteristics and soil moisture on energy and water vapour fluxes), giving rise to new versions of the algorithm. Each new version of the algorithm has been validated under a wide range of environmental and climatic conditions, to ensure that the new version provides equivalent or better results than the previous one. More recently, the sensible (H) and latent (LE) surface heat fluxes (which have been generated as a by-product for many years) have been examined and included in the LSA SAF portfolio of products. These fluxes are very important since they contribute to ensuring the closure of the energy balance at the surface and since they now belong, together with ET, to the essential climate variables (ECVs).
In order to generate a homogeneous and continuous dataset that covers the MSG period, the latest version of the algorithm was used to generate an ET and surface fluxes (SF) climate data record (CDR) covering the period 2004-2020. The ET and SF variables may be of great interest for multidisciplinary studies in the context of an evolving climate. The quality of the CDR products was verified by validating the simulations with measurements from eddy covariance stations located in a variety of ecosystems and by comparing the CDR values with other products.
In this contribution, we will provide details on the algorithm used for the reprocessing of ET and SF, the forcing data used and the output generated. The results of the product validation will also be presented and some examples of potential applications will be shown.
Water bodies are considered of importance in the context of global change, being also sensitive to climatological and meteorological conditions. Earth observation plays an important role in assessing and monitoring water characterization parameters such as height, extent, temperature and radiance. In this context we developed ExtractEO, a software package implementing automated end-to-end chains on satellite data, with a focus on Sentinel-1 to Sentinel-3; several chains are implemented, from water and fire extraction to cloud detection, in a time series context with large spatial and temporal windows. Water surfaces are detected using a multilayer perceptron (neural network) algorithm, with the GSW database integrated for sampling. Validation is done both by exploiting existing high resolution databases, with the known limitations of their representativeness, and by using VHR optical imagery such as Pléiades. Examples over a single large and complex lake and over hundreds of small reservoirs and ponds are presented and discussed.
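Purely as an illustration of the sampling-and-classification idea (a multilayer perceptron trained with labels drawn from the GSW occurrence layer), a minimal sketch with placeholder features and labels could look like the following; the feature set and network size are assumptions, not the ExtractEO configuration.

# Illustrative sketch only: placeholder features and labels, not the operational chain.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
features = rng.random((5000, 6))               # placeholder spectral bands/indices per pixel
labels = (rng.random(5000) > 0.5).astype(int)  # placeholder water/non-water labels sampled from GSW

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
mlp.fit(features, labels)
water_mask = mlp.predict(features)             # per-pixel water classification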
Lake Fitri, located in the Sahelian band (Chad), is a flat-bottomed lake with high intra- and inter-annual variability linked to the variability of the West African monsoon. We focused on the recent period, exploiting Sentinel-2 time series from January 2017 to February 2021, and then covering the whole year 2021. From the more than 300 Sentinel-2 scenes selected and downloaded from the CREODIAS platform, a water mask and a cloud mask have been derived. In addition, to understand the inter- and intra-annual dynamics of the lake, occurrence products have also been derived. The variations in the lake's surface area are impressive, with a minimum surface area of 194 km² and a maximum of 1249 km². Furthermore, the water surface dynamics are similar for the years 2017, 2018 and 2019. In 2020, the rise in water level was significant, since the maximum area observed that year was twice the maximum area observed in the three previous years. In 2021, the lake dynamics are similar to those of the first years of analysis. These results as a whole offer very rich information for the study and management of this lake. A combination with altimetry data is being carried out to derive hypsometric curves and extract volume variations. Such regular monitoring of the flooding of Lake Fitri would provide an excellent indicator of climate trends in this poorly studied region, but also of resource stresses due to population growth.
The second test was done over 280 lakes larger than 10 ha within the 57,000 km² of the Grand Est region. This experiment is rich in terms of feedback on data access: as the DIAS failed, it was necessary to exploit the GEE database rather than a European platform. Covering the period 2017-21, about 2800 Sentinel-2 scenes covering 16 tiles were downloaded exploiting FORGE. From a processing point of view, the analysis also highlights the heterogeneity in the temporal and spatial granularity of observation: in some years more than 850 images were exploited and in others only 680, and some areas were observed more than 80 times during a year whereas other lakes were investigated only 30 times over the same period. From a thematic point of view, this provides a unique monitoring of water bodies, highlighting reservoir by reservoir and lake by lake their own dynamics, including for some of them the characterization of very long dry (assec) periods inducing major changes in landscape evolution and, consequently, in biodiversity stakes.
These two cases, carried out over different landscapes and water body types, highlight the ability of ExtractEO to automatically derive water and cloud masks of very good quality from Sentinel-2 time series, to compute the surface areas needed to build a tracking curve and, finally, to compile occurrence products.
Climate variability exerts profound influences on the water cycle, and therefore on society. In order to adapt to new risks and resources in a changing climate, it is necessary to develop tools able to characterize the natural climate variability and pinpoint the mechanisms triggering changes observed in the climate system. In this study, the fingerprints of eight climate modes are detected in the global water cycle observed with the GRACE and GRACE-FO missions. To assess the robustness of the relationship between climate modes and the water mass changes observed from space, we used a Least Absolute Shrinkage and Selection Operator (LASSO) regularization, performing an efficient selection of the relevant predictors of the climate variability among the candidates considered. The El Niño Southern Oscillation (ENSO), Southern Annular Mode (SAM) and Arctic Oscillation contribute significantly to the interannual ocean mass variations in extratropical basins (up to 25 mm) and in shallow seas (up to 70 mm). Over the continents, a large part of the interannual variability of the terrestrial water storage (up to 100 mm) can be attributed to ENSO, the Atlantic Multidecadal Oscillation, the Pacific Decadal Oscillation (PDO) and the North Pacific Gyre Oscillation. One important result of this study lies in our ability to track interannual water mass displacements across different reservoirs. For example, we can link the transport of water from intertropical regions to the Southern Ocean, where it contributes to the interannual variability of ice mass changes in West Antarctica in connection with ENSO, SAM and PDO. However, significant residuals in the satellite gravity measurements remain unexplained at interannual time scales after removing the hydrological signal using empirical models for the climate modes as well as global land surface models. More complex models solving the water mass balance should be employed to better predict the variability of water mass distributions. The climate mode predictions based on LASSO inversions could still be used to reduce the interannual variability in satellite gravity measurements and detect processes unrelated to climate modes but with similar spatio-temporal signatures.
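A minimal sketch of the LASSO selection step, with placeholder index and storage series, is given below: the water storage anomaly series is regressed onto candidate climate-mode indices and the coefficients of irrelevant predictors are shrunk to zero. Array shapes and the cross-validated implementation are assumptions.

# Illustrative sketch only; placeholder data, not the study's inversion scheme.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
modes = rng.standard_normal((216, 8))   # monthly indices of 8 candidate climate modes (placeholder)
tws = rng.standard_normal(216)          # water storage anomaly series for one cell/basin (placeholder)

X = StandardScaler().fit_transform(modes)   # standardize predictors before shrinkage
model = LassoCV(cv=5).fit(X, tws)
selected = np.flatnonzero(model.coef_)      # indices of the climate modes retained by LASSO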
We perform an Observing System Simulation Experiment that simulates the satellite sampling and the mapping procedure on the sea surface of the high-resolution model CROCO-MED60v40, to investigate the reliability and the accuracy of eddy detection. The main result of this study is a strong cyclone-anticyclone asymmetry of the eddy detection on the AVISO/CMEMS altimetry products in the Mediterranean Sea. Large-scale cyclones having a characteristic radius larger than the local deformation radius are much less reliably detected than large-scale anticyclones. We estimate that less than 60% of these cyclones detected on the gridded altimetry product are reliable, while more than 85% of mesoscale anticyclones are reliable. This asymmetry comes from the difference in stability between cyclonic and anticyclonic eddies. Large mesoscale cyclones often split into smaller sub-mesoscale structures with a rapid dynamical evolution. The numerical model CROCO-MED60v40 shows that these complex dynamics are too fast and too small in scale to be accurately captured by the gridded altimetry products. We also confirm that the AVISO/CMEMS products induce a bias on the eddy intensity: the azimuthal geostrophic velocities are always underestimated for large mesoscale anticyclones. This study shows the biases that can be induced by the use of gridded altimetry products, which are often considered reliable observational data sets for large mesoscale structures. However, the fast and small-scale structures that cannot be resolved by standard altimetry have a clear signature in the high-resolution Sea Surface Temperature (SST). To reduce the bias and errors induced by the optimal interpolation scheme, we combine the standard eddy detection on AVISO/CMEMS products with an advanced AI analysis of SST images. The latter is based on a deep learning methodology and the use of a high-resolution numerical model. This new approach greatly improves the reliability of the automatic detection of mesoscale eddies, and most of the true eddies having a characteristic radius larger than 20 km were correctly detected.
The ocean surface circulation involves a superposition of processes acting at widely different spatial and temporal scales, from large-scale, slowly varying geostrophic flow to mesoscale turbulent eddies and, at an even smaller scale, mixing generated by the internal wave field. As part of the Copernicus Services, the Thematic Assembly Centres (SL-TAC & MOB-TAC) deliver near-real time and delayed time sea level and surface currents gridded products that are used by the ocean science community to study and understand the evolution of the ocean system. These products are based on an interpolation method optimized for mapping mesoscale variability and have resolution limits of about 200 km x 20 days.
To better serve the Copernicus users and decision-makers, and to answer the growing need for higher resolution products, the development of new experimental products has been undertaken, with the support of the French Space Agency (CNES), aiming at improving the resolution of the current sea level products and preparing operational systems for the SWOT era.
Here, we present a new gridded sea surface height and surface current dataset produced by combining observations from nadir altimeters and drifting buoys. This product is based on a multiscale/multivariate mapping approach, which aims to improve the resolution of operational products provided by Copernicus services and offers the possibility to study mesoscale circulations and equatorial wave dynamics. The dataset covers the entire global ocean and spans from 2008-01-01 to 2019-12-31. The multi-scale approach decomposes the observed signal into different physical contributions. In the present study, we simultaneously estimate the mesoscale ocean circulations as well as part of the equatorial wave dynamics (e.g., tropical instability and Poincaré waves). The multivariate approach is able to exploit the geostrophic signature resulting from the synergy of altimetry and drifter observations. Drifter observations can potentially improve surface circulation in areas not or poorly sampled by altimeters. In addition, altimeter observations in Arctic leads are also used in the merging to improve the sea surface height in this poorly mapped region.
A quality assessment of this new product is performed against the product distributed in the Copernicus Marine Service. We show that the multi-scale mapping approach offers promising perspectives for surface ocean circulation reconstruction. Despite the limited number of observations in the polar regions, the multiscale approach provides gap-free maps to users. The mesoscale circulation is better mapped in the new product. The mapping errors are significantly reduced in regions of high variability and in the equatorial band, associated with improved high-frequency sea surface height mapping (Poincaré waves). The effective resolution of this new product is hence between 5% and 10% finer than that of the Copernicus product. The drifter observations contribute moderately to the improvement of the sea surface height mapping (1% on average), but their contribution to the geostrophic velocities is more important.
Measuring the ocean surface currents at high spatio-temporal resolution is crucial for scientific and socio-economic applications. Since the early 1990s, the synoptic and global-scale monitoring of the ocean surface currents has been provided by constellations of radar Altimeters.
By construction, Altimeter constellations provide only the geostrophic component of the marine surface currents. In addition, given the effective spatial-temporal resolution of the Altimeter-derived products (O (100 km) and O (10 days), respectively), only the largest ocean mesoscale features can be resolved. In order to enhance the Altimeter system capabilities, we propose a synergistic use of high resolution sea surface Chlorophyll observations (Chl) and altimeter-derived current estimates. The backbone of our approach is the ocean currents reconstruction method developed by Piterbarg (2009) and extensively used in past studies with Sea Surface Temperature satellite data.
The study is focused on the Mediterranean Sea, where the most energetic signals are found at spatio-temporal scales up to 10 km and a few days. The proposed method allows for inferring the marine surface currents from the evolution of the Chl field, relying on altimeter-derived currents as a first-guess estimate. The feasibility of the approach is tested through an Observing System Simulation Experiment, based on physical and biogeochemical model outputs distributed by the European Copernicus Marine Service. Statistical analyses based on one year (2017) of daily data showed that our approach can improve the Altimeter-derived currents accuracy up to 50%, also enhancing their effective spatial resolution up to 30 km. Moreover, the retrieved currents exhibit larger temporal variability than the altimeter estimates over annual to weekly timescales. Our method is mainly limited to areas/time periods where/when Chl gradients are large and modulated by the marine currents’ advection. Its application is thus more efficient when the surface Chl evolution is not dominated by the biological activity, mostly occurring in the mid-February to mid-March time window in the Mediterranean Sea. Preliminary tests on the method applicability to satellite-derived data are also presented and discussed.
Ocean eddies play an important role in the transport of heat, salt, nutrients or pollutants. During a finite-time advection, the gradients of these tracers can increase or decrease, depending on a growth rate and the angle between flow gradients and initial tracer gradients. The growth rate is directly related to finite-time Lyapunov exponents, and to the time-integrated Lagrangian mesochronic velocity diagnostic proposed by Mezic et al. (2010).
Numerous studies on mixing and/or tracer downscaling methods rely on satellite altimeter-derived ocean velocities. Mixing analyses identify the oceanic Lagrangian coherent structures (i.e. the water subdomains which stay isolated from each other without mixing). Downscaling algorithms increase the spatial resolution of coarse-scale tracer satellite images through numerical advection by those altimeter-derived oceanic currents (e.g. Sutton et al., 1994, for the atmosphere or Desprès et al., 2011a,b for the ocean).
Filtering most oceanic small-scale eddies, the resulting smooth Eulerian velocities are often stationary during the characteristic time of tracer gradient growth (about one or two weeks).
While smooth, these velocity fields are still locally misaligned with, and thus uncorrelated with, many coarse-scale tracer observations amenable to downscaling (e.g. SST, SSS).
Using finite-time advections, the averaged squared norm of tracer gradients can then only increase, with local growth rate independent of the initial coarse-scale tracer distribution. The key mixing processes are then only governed by locally uniform shears and foldings around stationary convective cells.
To predict the tracer deformations and the evolution of their 2nd-order statistics, an efficient proxy is proposed. Applied to a single velocity snapshot, this proxy extends the Okubo-Weiss criterion.
For the Lagrangian-advection-based downscaling methods, it successfully predicts the evolution of tracer spectral energy density after a finite time, and the optimal time to stop the downscaling operation.
A practical estimation can then be proposed to define an effective parameterization of the horizontal eddy diffusivity.
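For reference, the classical Okubo-Weiss criterion that this proxy extends can be computed from a single gridded velocity snapshot as W = s_n² + s_s² − ω², with s_n the normal strain, s_s the shear strain and ω the relative vorticity; a minimal sketch follows.

# Minimal sketch, assuming u, v on a regular grid with spacings dx, dy (array axes: y, x).
import numpy as np

def okubo_weiss(u, v, dx, dy):
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    normal_strain = dudx - dvdy
    shear_strain = dvdx + dudy
    vorticity = dvdx - dudy
    # W > 0: strain-dominated regions; W < 0: vorticity-dominated (eddy cores).
    return normal_strain**2 + shear_strain**2 - vorticity**2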
New satellites and sensors have arisen during the past decade, enabling the observation of a wide range of oceanic physical variables at various scales. For example, the Sentinel 1-2-3-6 programme covers various sensors such as SAR, ocean colour, brightness temperature or altimetry, each with a long individual revisit time but a fairly rapid revisit from a constellation point of view. We also benefit from geostationary sensors such as SEVIRI to retrieve infrared SST every hour and thus greatly improve the coverage in cloudy areas. All these variables contain important information regarding the upper ocean dynamics, even if indirectly. Thus there is a need to find new avenues to exploit the wealth of available remote sensing data by considering them as structured information rather than pointwise data. Moreover, handling this huge heterogeneous dataset has never been easier using the Ocean Virtual Laboratory online portal (https://ovl.oceandatalab.com) or the standalone version (https://seascope.oceandatalab.com), which enable visualising and analysing heterogeneous satellite and in-situ observations collocated in time and space.
In a first part, we will show how to detect dynamical structures in tracer images and use high resolution remote sensing observation to perform quantitative surface current assessment.
Velocity derived from models and from remote sensing observation will be compared and assessed using frontal structures derived from SST images.
Then we will demonstrate few examples of data synergy using remote sensing observations and in-situ data on the Ocean Virtual Laboratory. We will show examples of the displacement of frontal structures using Sentinel 1 SAR roughness, OLCI chlorophyll, SLSTR and SEVIRI SST.
Finally, we will consider all the remote sensing and in situ observations and show how to easily build synoptic chart of upper ocean dynamical structures using the Ocean Virtual Laboratory. We will illustrate the impact of the use of the wide variety of remote sensing observation on ship routing decision and show how shipping companies can benefit from these analysis.
Eddies can originate nearly everywhere in the ocean and play an important role for heat transport and the distribution of biogeochemical properties. At the surface, mesoscale eddies are identified from satellite altimetry data, where a depression (elevation) in the sea-level anomaly field reveals a cyclonic (anticyclonic) structure. Their signature can also be retrieved from multiple sensors, e.g. in SAR data through surface roughness, and in imaging spectrometers and radiometers through patterns in chlorophyll and sea surface temperature distribution. However, the representation of ocean mesoscale dynamics is strongly limited by the spatial coverage and temporal sampling of a given satellite mission. Merging multiple satellite altimetry missions makes it possible to obtain maps of mesoscale variability, but does not allow resolving spatial scales of variability in the ocean smaller than 30-50 km. A multi-sensor approach can improve the detection of eddy-induced variability and the characterization of the fine-scale structure of the eddies. In this study we focus on the mesoscale features occurring in the Lofoten Basin, considered a hot-spot for mesoscale eddies in the northern high-latitude seas. In particular, we build on the synergy between different satellite missions (e.g. Sentinel-1, Sentinel-3, MODIS) and in-situ measurements (i.e. Argo profiling floats) to study the co-variabilities of the eddy-induced anomalies modulating the coupling between physical and biogeochemical processes in the ocean. Here a workflow for data acquisition and co-location is proposed using the Geo-Scientific Platform as a Service (GeoSPaaS) data management system developed at the Nansen Environmental and Remote Sensing Center (NERSC). The GeoSPaaS tool allows for (i) harvesting metadata from a number of scientific data repositories, (ii) defining specific spatial and temporal domains of interest and (iii) co-locating complex data-sets. This workflow represents an efficient way of building a unique multi-sensor data-set to investigate the role of eddies in the upper ocean and to train algorithms to describe their 3-dimensional structure.
Over the past years, altimetry satellite observations have provided a good understanding of the global surface circulation, resolving structures with a wavelength scale of 100 km (a radius scale of ~25 km). However, vertical velocities are difficult to measure directly and their prediction is challenging. The identification of the 3D pathways that connect the surface ocean to the interior, and of their magnitude, is pivotal for predicting the distribution of pollutants, oxygen, heat, carbon and other biogeochemical tracers. Vertical transport is localized in space as well as episodic and of short duration (from hours to days). Recent studies have shown that this transport is influenced by flows at many scales and demonstrate that vertical motions associated with horizontal scales smaller than 10 km play a significant role in vertical exchanges. Climate models do not adequately reproduce this transport, as they do not have the resolution needed to resolve submesoscale processes over extended domains and long time periods.
Here we use drifter observations to calculate divergence and vertical velocities in the upper 15 m of the water column. The drifters were deployed at the edge of a mesoscale eddy in the Alboran Sea identified from altimetry and sea surface chlorophyll concentration satellite data. Intense surface convergence of order O(f), where f is the Coriolis frequency, was measured by the drifters over a subduction region identified from underway CTD vertical sections across the front. Vertical velocities calculated along the front reveal a high temporal variability, ranging from 50 m day-1 to -100 m day-1 in less than 4 hours. Finally, the scale dependence of the horizontal divergence and vertical velocity is analyzed. During the intense subduction, vertical velocities associated with horizontal scales below 10 km exceeded those between 10 and 15 km by a factor of 2.
This study indicates that the sinking of surface waters to the inner layers occurs in small, localized regions with a high temporal variability. This analysis, based on in-situ observations, re-emphasizes the role played by the smaller scales (below 10 km), scales which are not resolved by the coarser grids of climate models.
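As an illustrative sketch of one common way to estimate such quantities (not necessarily the exact procedure used in this study), the divergence of a drifter cluster can be obtained from a least-squares fit of the velocities to linear functions of position, and a vertical velocity inferred from continuity assuming w = 0 at the surface and a depth-uniform divergence:

import numpy as np

def cluster_divergence(x, y, u, v):
    """Horizontal divergence (s^-1) from a drifter cluster via a least-squares fit
    of u, v to linear functions of position (x, y in metres, u, v in m/s)."""
    A = np.column_stack([np.ones_like(x), x - x.mean(), y - y.mean()])
    (u0, dudx, dudy), *_ = np.linalg.lstsq(A, u, rcond=None)
    (v0, dvdx, dvdy), *_ = np.linalg.lstsq(A, v, rcond=None)
    return dudx + dvdy

# Toy example with a synthetic 4-drifter cluster in a purely divergent flow
x = np.array([0., 1000., 0., 1000.]); y = np.array([0., 0., 1000., 1000.])
u = 1e-5 * (x - x.mean()); v = 1e-5 * (y - y.mean())
div = cluster_divergence(x, y, u, v)            # ~2e-5 s^-1
w_15m_per_day = -div * 15.0 * 86400.0           # m/day at 15 m depth, w = 0 at surface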
Mesoscale eddies are ubiquitous features in the ocean, ranging in radius from 200 km in the subtropics to about 10 km in the Mediterranean Sea. They strongly influence ocean heat storage and biological production, and the physical property anomalies they induce can extend down to 1000 m. Eddy detection algorithms usually provide a diagnosis of dynamical parameters such as radius, rotational speed and contour, but it is then of interest to know what information on the vertical structure can be remotely predicted. In particular we focus on the mixed layer depth (MLD), the uppermost ocean layer, which is sensitive to atmospheric fluxes but also key for the onset of biological production.
Over the past decade, progress in in situ measurements, and in particular the co-location of altimetric eddy detections with Argo float profiles, has provided insight into the eddy vertical structure through regional studies. However, this was usually done by regional averaging or through composites, yet recent studies revealed a broad eddy-induced variation of the mixed layer depth. These variations follow the MLD seasonal cycle, but compared to an outside-eddy background the MLD is consistently found to be deeper inside anticyclones and shallower inside cyclones (Sun et al., 2017, 2018).
These eddy-induced MLD variations were also found to depend on time and space, but so far no clear law has been inferred from observations. Gaube et al. (2019) proposed a first guess of an MLD anomaly linear in the eddy sea surface height amplitude, but this work relied on global eddy composites, blurring the view of individual eddies.
The Mediterranean Sea is a region of particular interest for eddies as it also contains a wide variety of dynamical structures, both surface- and subsurface-intensified. Frequent oceanographic surveys launching Argo floats make it possible to follow the MLD evolution inside some mesoscale structures very accurately, and to compare it with an accurate background filtered from interannual variability. This reveals extreme amplification, with the winter MLD reaching about 100 m outside eddies but sometimes reaching 300 m in some anticyclones, questioning the consistency of a simple linear and symmetric theory. Following the approach of individual eddy tracking, we attempt here to give a predictive law of eddy-induced MLD anomalies using remote sensing information.
Mesoscale eddies are ubiquitous structures in the ocean. As mesoscale eddies can trap and transport water within their cores over long distances, they have been investigated globally since the availability of altimetry maps. Mesoscale eddies are first detected at the sea surface and then tracked in time and space. Several methods of detection and tracking have been developed; most of them describe the evolution of mesoscale eddies through an association into individual trajectories, with a beginning and an end. Few methods are able to take into account the interactions between trajectories, and when merging and splitting events are recorded, it is necessary to change the semantics and the metrics used to describe the behaviour of mesoscale eddies. Here we present a new mesoscale eddy dataset in which the structures are gathered into networks. Eddies are detected on daily absolute dynamic topography maps with the Py-Eddy-Tracker algorithm (PET, Mason et al., 2014, https://github.com/AntSimi/py-eddy-tracker). Following Pegliasco et al. (2015), successive eddies with overlapping contours are associated in the same network if the overlap ratio, defined as the intersection of their areas divided by the union of their areas, is more than 5% (see the sketch below). Within networks, segments represent the temporal evolution of individual eddies and nodes between segments correspond to merging or splitting events. Segments are what was previously called trajectories, but individuality (no interactions) is not assumed. During merging and splitting events, more than two eddies present an overlap. The highest overlap ratio is used to determine which segment stops in a merging event and which segment starts in a splitting event. We developed simple functions to manipulate and visualize this new type of dataset.
To assess the networks’ coherence, we use a Lagrangian perspective. A coherence level is obtained by advecting for 14 days, both backward and forward in time, particles injected within the eddy’s contour of maximum averaged speed, using the surface currents derived from absolute dynamic topography. At the end of the advection, the number of particles still within the eddy’s contours is normalized by the initial number of particles. A coherence level is associated with each segment and each interaction and can be used for selection by the users.
The META-Networks (Mesoscale Eddy Trajectories Atlas – Networks) can be used for any interdisciplinary research topic, for example by coupling the mesoscale eddies’ contours with in situ data, or to describe the displacement of tracers along eddies’ paths, at a regional or global scale.
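The overlap-ratio criterion can be illustrated with the following sketch (eddy contours are treated as simple polygons; the use of the shapely library and the toy contours below are assumptions for illustration, not part of the PET implementation):

from shapely.geometry import Polygon

def overlap_ratio(contour_a, contour_b):
    """Overlap ratio between two eddy contours given as (lon, lat) vertex lists:
    intersection area divided by union area."""
    pa, pb = Polygon(contour_a), Polygon(contour_b)
    if not pa.intersects(pb):
        return 0.0
    return pa.intersection(pb).area / pa.union(pb).area

# Toy example: two overlapping square contours on successive days
c_day1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
c_day2 = [(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)]
same_network = overlap_ratio(c_day1, c_day2) > 0.05   # True (ratio = 1/3)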
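The final counting step of the coherence level can be sketched as follows (the 14-day advection itself, performed with altimetry-derived currents, is not shown; array shapes and names are assumptions for illustration):

import numpy as np
from matplotlib.path import Path

def coherence_level(final_positions, eddy_contour):
    """Fraction of advected particles ending up inside the eddy contour.
    final_positions : (N, 2) array of particle (lon, lat) after advection
    eddy_contour    : (M, 2) array of contour vertices (lon, lat)."""
    inside = Path(eddy_contour).contains_points(final_positions)
    return inside.mean()   # 1.0 = fully coherent, 0.0 = fully dispersed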
The reconstruction of sea surface currents is a key challenge in spatial oceanography. We recently proposed the so-called 4DVarNet algorithm, a generic end-to-end deep learning scheme for inverse problems using a variational formulation. Based on Observing System Simulation Experiments (OSSEs) involving high-resolution numerical simulations in the Gulf Stream region, the preliminary applications of the 4DVarNet algorithm using an LSTM-based parametrization of the solver have shown promising results. We propose here to present the recent evolutions of the 4DVarNet framework applied to spatio-temporal interpolations of satellite-derived datasets. First, 4DVarNet embeds a variational formalism which is a natural framework for exploiting multi-tracer synergies (e.g. SSH and SST) in the reconstruction of altimetric fields benefiting from high-resolution satellite products. Using a similar OSSE configuration, we demonstrate how the use of SST may help in the identification of ocean fronts, resulting in a better reconstruction of the SSH. Second, the variational formulation also enables the design of optimal monitoring and sampling strategies to retrieve the best reconstruction of the submesoscale processes. A stochastic formulation may also be embedded to generate ensembles and lead to uncertainty quantification. Third, from an operational perspective, a new version of the code able to deal with datasets scaling up to an ocean basin has recently been distributed. Training the model involves a new strategy based on iterating over the entire dataset in small batches. This code is open-source (https://github.com/CIA-Oceanix/4dvarnet-core), enabling its future use for both the design of and participation in ocean data challenges. Finally, we discuss the potential application of 4DVarNet to real altimetric datasets. We compare 4DVarNet performance to state-of-the-art interpolation schemes following an Observation System Experiment (OSE) framework involving SSH observations coming from 7 along-track nadir altimeters over the whole year 2017.
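As a schematic illustration of the kind of variational cost underlying such schemes (this is not the 4DVarNet code; the prior operator, weighting and layer sizes below are placeholder choices, and the actual scheme minimizes the cost with a trained LSTM-based solver):

import torch
import torch.nn as nn

class PriorPhi(nn.Module):
    """Toy learnable prior operator Phi(x); purely illustrative."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, ch, 3, padding=1))
    def forward(self, x):
        return self.net(x)

def variational_cost(x, y_obs, obs_mask, phi, lam=0.1):
    """J(x) = ||mask*(x - y)||^2 + lam*||x - Phi(x)||^2: a data term on observed
    pixels plus a prior term; a 4DVar-type scheme minimizes such a cost in x."""
    data_term = ((obs_mask * (x - y_obs)) ** 2).sum()
    prior_term = ((x - phi(x)) ** 2).sum()
    return data_term + lam * prior_term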
Understanding oceanic surface dispersion has important applications in ocean pollution scenarios. Among the different ocean pollution events, those with some of the greatest impacts on the marine environment, on society and on the economy are marine plastics, oil spills and large algal blooms (for example of Sargassum).
Understanding the ocean dynamics that affect their trajectories is vital to simulate their pathways and thus identify their sources and sinks. This can then be used to implement clean-up strategies and to better manage marine protected areas. It can also help reduce the impact of ocean pollution on the marine environment and on major economic sectors like tourism.
High-frequency motions have an important impact on the surface dynamics, but high temporal resolution data are necessary to take their effects into account. New datasets (such as high-resolution ocean general circulation models) and methodologies have made it possible to obtain better representations of high-frequency motions. Here, we specifically focus on the high-frequency motions due to tides (for example internal waves) and waves, as well as inertial oscillations.
Using the OceanParcels framework, we simulate surface trajectories of three different types of particles: plastic, oil and Sargassum. We focus on three regions in the Atlantic Ocean: the Açores Islands, the North Atlantic and the Tropical Atlantic, respectively. For the plastic simulations we look at the effect of tides by using velocity outputs from twin simulations of a high-resolution ocean general circulation model run with and without tidal forcing. For the oil spill and Sargassum simulations we use a new surface current product generated by combining velocity data from drifters, high-frequency winds and altimetry to reconstruct high-frequency surface currents.
We find that considering high-frequency motions is key to correctly simulating the surface trajectories, but that further work is necessary to understand the ocean dynamics at the fine scales that can drive the variability in the Lagrangian trajectories.
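A typical OceanParcels workflow for such surface-trajectory simulations looks roughly like the following sketch (the file name, variable/dimension mappings, seeding positions and run length are placeholders, not the actual configuration used in this study):

from datetime import timedelta
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4

# Placeholder surface-current file and variable mappings
fieldset = FieldSet.from_netcdf(
    "surface_currents.nc",
    variables={"U": "uo", "V": "vo"},
    dimensions={"lon": "longitude", "lat": "latitude", "time": "time"},
)

# Seed a few particles (e.g. near the Açores) and advect them for 30 days
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle,
                   lon=[-28.0, -30.0], lat=[38.0, 38.5])
pset.execute(AdvectionRK4, runtime=timedelta(days=30), dt=timedelta(minutes=10))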
Even though the potential of deep learning for Earth system science has so far been only partially explored, many algorithms developed in the framework of Artificial Intelligence can be almost directly exploited for geo-scientific analysis. One such example is given by a specific class of Computer Vision techniques, known as super-resolution (SR), designed to recover high-resolution details from low-resolution images [e.g. Wang et al., 2020]. In single-image super-resolution algorithms based on deep convolutional neural networks, model architectures are optimized to identify the features of an image and learn how to recover the original features from their degraded versions. Super-resolution algorithms were conceived for application to blurred/low-resolution photographs, but attempts to employ them for Earth observation problems include, for example, the downscaling of Earth system model simulations and the downscaling of Sea Surface Temperature (SST) and wind field data.
Our objective is slightly different from simple model output or single variable downscaling, as we want to improve the retrieval of sea surface currents by combining satellite altimetry and thermal observations. Presently, global monitoring of ocean currents is in fact achieved by interpolating along-track topography data and assuming zero-order dynamical balances, but combinations of altimetric data (Absolute Dynamic Topography, ADT) and tracer field evolution (e.g. SST) have also been carried out directly exploiting statistical or simplified dynamical relations.
Here, we set up an Observing System Simulation Experiment (OSSE), using the output of an ocean general circulation numerical model to simulate both predictor and target variables. We first test some baseline super-resolution deep convolutional models including the SST, the temporal derivative of the SST, the low-resolution altimeter-like ADT and the formal interpolation error of the altimeter-like ADT data as predictor variables. Our choice of the predictor variables is clearly driven by physical considerations, building upon the role of surface water mass advection in the local evolution of the SST, but also aims to take advantage of the repetitiveness of satellite observing system geometries. As such, different spatial scales and more complex relations between the learned features can be expected with respect to single-variable problems, requiring specific choices in the design of the network architecture. This led us to develop a novel multi-scale adaptive residual network combining some of the most promising evolutions provided by the individual baseline models. Finally, our model proved able to compensate for sampling/interpolation limitations by learning from the primitive equation simulations. Interestingly, the algorithm can be adapted to learn directly from future high-resolution satellite observations of surface topography and surface currents. Until then, after training with OSSE data, it can already be applied to real ADT and SST observations in the test/prediction phase. In practice, learning first from primitive equation simulations and a known observing system geometry, and successively testing over true observations, can also be interpreted as a means of assimilating model physics into our data-driven reconstruction.
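As a minimal illustration of a baseline residual super-resolution network fed with the four predictor channels listed above (the layer widths and depth are placeholder choices, not the multi-scale adaptive architecture developed in this work):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class SRNet(nn.Module):
    """Toy baseline: 4 predictor channels (SST, dSST/dt, low-resolution ADT,
    ADT interpolation error) -> one high-resolution ADT channel."""
    def __init__(self, in_ch=4, channels=64, n_blocks=8):
        super().__init__()
        self.head = nn.Conv2d(in_ch, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)
    def forward(self, x):          # x: (batch, 4, H, W) of co-located predictor fields
        return self.tail(self.blocks(self.head(x)))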
The enhanced ocean wind forcing product (ERA*) has been developed to address the growing demand for accurate forcing from the ocean modelling community. ERA* is a corrected reanalysis product that optimally combines the capacities of scatterometer observations and model reanalysis stress-equivalent wind (U10S) output, by means of a geolocated scatterometer-based correction (SC) applied to the ERA-Interim reanalysis (ERAi) U10S.
ERA* is able to introduce true smaller-scale signal into ERAi that corresponds to the physical processes absent or misrepresented in the latter, e.g., strong current effects (such as WBCs, which are highly stationary), wind effects associated with the ocean mesoscales (SST), coastal effects (land-sea breezes, katabatic winds), parameterization errors, and large-scale circulation effects, e.g., at the ITCZ. This is verified by comparing both products against independent HY-2A scatterometer observations (HSCAT). In particular, ERA* reduces the vector root-mean-square difference by 10% with respect to that of ERAi.
Additionally, ERA* and ERAi wind stresses were prescribed as surface boundary conditions in two sets of initialized predictions for 2017 with the EC-EARTH global coupled climate model, and compared to a control prediction in which no wind is prescribed, to study the impact of the 2017 North Tropical Atlantic (NTA) warming on equatorial SST variability. Both wind-prescribed predictions considerably improve the simulation of the eastern NTA and equatorial warming. Yet the novel ERA*, with respect to ERAi, better represents the offshore warm SSTs in the NTA and along the eastern equatorial Atlantic and the southern African coast.
Considering ERA*’s potential for improving ocean forcing, and in the frame of the European Space Agency (ESA) World Ocean Circulation project (WOC), an improved ocean forcing product has been developed. With the availability of the fifth ECMWF reanalysis (ERA5), which contains error characteristics similar to (although of smaller amplitude than) those found in ERAi, a new ERA*, generated from the ERA5 reanalysis, has been produced for the period 2010-2020. An optimized configuration of ERA* in terms of the varying scatterometer constellation over the period of interest is sought, and a comprehensive validation of ERA* against independent scatterometer and buoy winds is carried out. The new ERA* product, which significantly outperforms ERA5, will be freely available from the WOC webpage (https://www.worldoceancirculation.org/) at the time of the conference.
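The principle of a geolocated scatterometer-based correction can be sketched as follows (the gridding, temporal window and variable names below are simplifications and placeholders, not the actual ERA* processing):

import numpy as np

def scatterometer_correction(u10s_model_colloc, u10s_scat, cell_index, n_cells):
    """Geolocated correction for one wind component: per-grid-cell mean of the
    (scatterometer - collocated model) U10S differences over the averaging window."""
    corr = np.zeros(n_cells)
    count = np.zeros(n_cells)
    np.add.at(corr, cell_index, u10s_scat - u10s_model_colloc)
    np.add.at(count, cell_index, 1)
    return np.where(count > 0, corr / np.maximum(count, 1), 0.0)

# The corrected field is then the reanalysis U10S plus the geolocated correction,
# applied per component: u10s_corrected = u10s_model_grid + sc_u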
Complementary to the ocean state estimates provided by modelling/assimilation systems, a multi-observation-based approach is available through the Multi OBservations (MOB) Thematic Assembly Center (TAC) of the Copernicus Marine Service (CMEMS).
The CMEMS MOB TAC proposes qualified global ocean products based on satellite and in situ observations and data fusion techniques. Three products are dedicated to surface and upper-ocean currents. Satellite observations (gravity from GOCE, altimetry, SST, and SSS from SMOS), in situ observations (Argo floats and surface drifters), and ECMWF wind stress are used to generate 3D geostrophic currents and 2D geostrophic + Ekman currents at the surface and at 15 m. One product, based on a quasi-geostrophic diagnostic model, also provides a 3D reconstruction of oceanic currents down to 1500 m, including ageostrophic components and in particular the vertical velocities. Products are available in Near-Real-Time or as Multi-Year Products.
Methods and products are presented. The performance of the 3D and 2D currents is assessed through comparison with independent data such as SVP drifters, Argo drifts at 1000 m (YoMaHa database) and ADCP sections at the equator.
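The geostrophic plus Ekman construction can be sketched as follows (the Ekman coefficients below are placeholder values of the order reported in the literature; in the actual products they are fitted empirically to drifter data):

import numpy as np

G = 9.81           # gravity (m/s^2)
OMEGA = 7.2921e-5  # Earth rotation rate (rad/s)

def surface_currents(ssh, lat, dx, dy, taux, tauy,
                     beta=0.3, theta=np.deg2rad(-55.0)):
    """Geostrophic currents from SSH gradients plus a simple empirical Ekman term
    u_e = beta * exp(i*theta) * tau. ssh in m on a (lat, lon) grid, dx/dy grid
    spacing in m, tau in N/m^2; beta (m s^-1 / (N m^-2)) and theta are placeholders."""
    f = 2 * OMEGA * np.sin(np.deg2rad(lat))[:, None]
    detadx = np.gradient(ssh, dx, axis=1)
    detady = np.gradient(ssh, dy, axis=0)
    ug = -(G / f) * detady
    vg = (G / f) * detadx
    ue = beta * np.exp(1j * theta) * (taux + 1j * tauy)   # complex Ekman current
    return ug + ue.real, vg + ue.imag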
The global mean dynamic topography (MDT) is a central component in numerous investigations concerning the Earth system. A precise knowledge of the MDT is crucial for deriving accurate assessments of, e.g., oceanographic and climatological processes, which themselves affect various other Earth system sciences. Especially in the longer wavelengths, the geodetic method of deriving the MDT, by subtracting a geoid from a mean sea surface, has proved to yield precise results. Due to the advancements in combined high-resolution global gravity field modelling and the availability of enhanced global mean sea surfaces, geodetic MDTs are becoming more and more accurate. Particularly through the processing strategies applied in the latest XGM gravity field models (such as XGM2019e), improved MDTs can be derived which maximize the achievable resolution and minimize the induced noise. This is achieved (1) by the direct incorporation of a mean sea surface dataset into the high-resolution ground gravity field dataset, and (2) by a statistically optimal combination with an independent satellite-only gravity field model. Since, over the oceans, the final XGM model exhibits the same resolution (up to ~4 km) as the incorporated mean sea surface, the obtained geodetic MDT (from XGM and the corresponding mean sea surface) does not require any further treatment (such as filtering) and can be used directly. For XGM2019e, GOCO06s has been chosen as the independent satellite-only gravity field model, which, in the important medium to shorter wavelengths, is mainly based on data from ESA’s GOCE mission. The incorporated mean sea surface corresponds to the DTU13 MSS, which is based on data collected from several satellite altimetry missions over two decades (TOPEX, Jason, ERS, Envisat, ICESat, Geosat, GFO, CryoSat). The combination of the high-resolution ground gravity dataset (which includes the DTU13 MSS) with the GOCO06s model is performed based on fully occupied normal equation systems, established up to a spheroidal harmonic d/o of 719 (~15 km resolution). Above d/o 719 and up to d/o 5400 (~4 km resolution), XGM2019e consists of a spheroidal harmonic block-diagonal solution of the high-resolution ground gravity dataset. With this strategy, maximal compatibility between the DTU13 MSS and XGM2019e is achieved. The mean sea surface is interpreted as gravity-driven as long as it does not explicitly contradict the satellite gravity field model. The differences between the DTU13 MSS and GOCO06s then represent the resulting MDT. In this contribution the aforementioned estimation procedure is summarized and the resulting MDT is discussed, specifically regarding the impact of the XGM2019e gravity field model.
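Once both surfaces are expressed on a common grid and in consistent reference and tide systems, the geodetic principle itself reduces to a pointwise difference (a trivial sketch, not the XGM2019e processing; array names are placeholders):

import numpy as np

def geodetic_mdt(mss, geoid_height, ocean_mask):
    """Geodetic MDT as the pointwise difference between a mean sea surface and a
    geoid height grid (both in m, same grid, same reference ellipsoid/tide system)."""
    return np.where(ocean_mask, mss - geoid_height, np.nan)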
Surface drift velocities and the associated Lagrangian transport of substances (pollution, debris, sediments, etc.) are generally governed by the joint impact of the general circulation (including mesoscale features), turbulent processes at the sea surface, and wind and waves. This study explores the contributors to Lagrangian transport in the Gulf of Finland, Baltic Sea, using a synergy of satellite-derived sea surface temperature (SST) obtained from the EOS Moderate Resolution Imaging Spectroradiometer, high-resolution bathymetry data and in-situ data sources (surface drifters, wave gauges, meteorological and hydrological measurements). These experiments comprise one of the longest collections of in-situ surface drifter deployments, carried out in different seasons over the period 2011–2018, during which 38 drifters, deployed in batches of three, were designed to capture the surface drift in the uppermost 2 m layer of the sea.
Statistical and qualitative analyses reveal that the classic Ekman-type drift of the surface layer is present in 68% of the occurrences, with an express estimate of the surface current speed as 1.5% of the wind speed. It was also found that wave heights above 0.5 m are associated with higher drift velocities. For about 7–14% of the occurrences the surface drift is governed by processes other than direct wind and wave impact (e.g. eddies, fronts, upwellings). For instance, depending on the stratification and bathymetry, coastal upwellings, identified from satellite-based SST, frequently occur during the summer months on time scales of 6–12 days. These upwelling events do not always take the classic longshore form but instead the form of cross-shore jets, with the jets superseding the classic Ekman transport in that they slowed down the average speed of surface currents in the region affected by the upwelled cold-water jet and its filaments. Different stages of the upwelling process were observed from satellite imagery: (i) in the first stage the cooler water is brought to the surface; (ii) the second stage is characterised by the presence of coherent cooler cross-shore jets and filaments with low intensity of mixing; and (iii) in the third stage the jets degrade into filaments and mesoscale features within 1–2 days. It is in this third phase that the most intense mixing of upwelled and surrounding waters occurs and, surprisingly, under weaker winds. Further examination reveals that the slope of the bathymetry (>0.0075) plays an important role in the presence of the cross-shore jets, and that the cooler water most likely originates from intermediate water masses at depths of 15–30 m, where phosphate tends to be prevalent; when in excess in the study area, this nutrient is often linked to the onset of intense cyanobacteria blooms. Thus the surface transport is linked to specific combinations of wind- and wave-induced drift under moderate and strong winds, but also to the underlying synoptic- and basin-scale circulation patterns.
The first charts showing information on surface oceanic currents were based on logs of both military and merchant ships [Richardson et al., 1987]. At the time, ships navigated by dead reckoning. Once or twice a day, a ship noted its position based on celestial navigation, and recorded its speed and compass direction. If currents are present, they will likely push the ship off course and alter its speed. One can estimate the direction and speed of these currents by subtracting the predicted vector based on dead reckoning from the vector representing the ship’s actual speed and direction. Nowadays, merchant ships transmit their position, compass direction and speed through AIS (Automatic Identification System) messages. Therefore, it is now possible to compute surface currents at finer scales. eOdyn’s work clearly shows that this approach is able to reveal sub-mesoscale features like eddies or filaments that may go undetected by satellite altimetry techniques [Guichoux et al., 2016]. The potential of the method to derive precise surface currents from AIS messages, the so-called Omni-Situ technology (OS, i.e. in-situ measurements over all oceans, where the ship acts as an in-situ sensor of opportunity), was also demonstrated in the Agulhas Current region [Le Goff et al., 2021]. One of the main characteristics of AIS-derived surface currents is that they do not require the deployment of a dedicated infrastructure to collect maritime traffic information. AIS data used to produce ocean surface currents can be collected through coastal receivers and low-orbit satellites already deployed for maritime security purposes. AIS transmitters are mandatory equipment and more than 200,000 fitted ships are already navigating the world oceans. However, it is also clear that current infrastructures deployed for maritime security purposes have not been designed to provide information on surface currents. There is a gap between the requirements of users involved in maritime surveillance and of users interested in ocean surface current measurements. While “historical AIS end-users” use the system as a tracking system to learn about ship positions and identity, we use it as an opportunistic data collection system to produce information on surface currents. In many cases, producing ocean surface currents from AIS data sets requires more AIS data than are available using existing infrastructures. This is particularly true in the open ocean or in areas out of range of AIS shore-based stations. These areas require AIS data collected by low Earth orbit satellites, which miss many ship-transmitted AIS messages due to message collision issues.
Recent results related to the use of AIS data to produce surface currents illustrate the synergies with altimetry measurements. Both techniques could be used routinely in a synergistic approach to monitor surface currents on a global scale, taking advantage of AIS-derived surface currents to reveal small oceanic features such as sub-mesoscale eddies.
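The dead-reckoning principle behind AIS-derived currents can be sketched as follows (leeway and ship-dependent effects are ignored, the inputs are idealized, and this is not the eOdyn Omni-Situ processing):

import numpy as np

def current_from_ship_motion(sog_kn, cog_deg, stw_kn, heading_deg):
    """Surface current (u, v in m/s, eastward/northward) as the difference between
    the ship's velocity over ground (speed/course over ground) and its velocity
    through the water (speed through water and heading)."""
    kn = 0.514444  # knots -> m/s
    u_og = sog_kn * kn * np.sin(np.deg2rad(cog_deg))
    v_og = sog_kn * kn * np.cos(np.deg2rad(cog_deg))
    u_tw = stw_kn * kn * np.sin(np.deg2rad(heading_deg))
    v_tw = stw_kn * kn * np.cos(np.deg2rad(heading_deg))
    return u_og - u_tw, v_og - v_tw

# Example: a ship heading due north through the water at 12 kn, but observed to
# track 005 degrees at 12.5 kn over ground, implies a weak north-eastward current.
u, v = current_from_ship_motion(12.5, 5.0, 12.0, 0.0)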
This work reports on an upwelling episode on the Western Iberia coast in September 2019.
Satellite observations are used together with hydrodynamic model results to study the inner-shelf upwelling dynamics off the NW Portuguese coast during summer. It is shown that the spatio-temporal variability of the maximum coastal divergence can only be properly assessed using high-resolution (< 1 km) products. Numerical model results and an extraordinary cloud-free sequence of Sentinel-3 SLSTR (Sea and Land Surface Temperature Radiometer) and OLCI (Ocean and Land Colour Instrument) images for the first half of the month show that the maximum divergence occurs along a narrow (~3 km) band located 5 km to 25 km away from the coastline. This coastal divergence strip, forced by fluctuating upwelling-favourable winds, is clearly visible in the full-resolution (300 m) chlorophyll-a concentration (Chl-a) map as a very low Chl-a band about 10 km from the coast, between the 30 and 50 m isobaths, on the day following the peak wind stress (> 0.2 Pa). The offshore distance of the maximum divergence strip increases with the wind forcing along the linear coastline stretch, contrary to the region where the coastline is indented due to the presence of a cape, where the divergence is high and relatively fixed at about 10 km from the coast throughout the upwelling event. The temporal sequence of the cross-shore sea surface temperature (SST) and Chl-a transects shows that the expected relationship between low SST, taken as a proxy for recently upwelled, nutrient-rich waters, and high Chl-a is only valid in the near-coast region for moderate/low wind stress. The results for the linear coastline area agree remarkably well with previous modelling studies on the inner-shelf response to upwelling-favourable winds, and provide unprecedented observational evidence that the cross-shelf transport in this nearshore region is very weak, in strong contrast to the areas further offshore. This study shows that with the synergistic use of in situ data, high-resolution satellite remote sensing and numerical modelling it is possible to describe and explain the phytoplankton response to an upwelling episode in the inner shelf at daily time scales.
Waves break at the ocean’s surface at high or medium wind speeds, or in the absence of wind due to shoaling of the seafloor. However, surface waves also break due to interactions with surface currents of various origins, including internal solitary waves (ISWs). In the open ocean and in the presence of large-amplitude ISWs, wave breaking limits the height of surface waves, mixes the ocean surface, and enhances air-sea fluxes of heat, mass and momentum through the generation of turbulence, the entrainment of air and the creation of spray and aerosols. In this paper, we address surface wave breaking caused by ISWs and how ISWs are manifested in the synthetic aperture radar altimeter (SRAL) onboard the Sentinel-3A and 3B satellites. We study, for the first time with advanced SAR altimetry, the wave breaking caused by large-amplitude, nonlinear ISWs and their effects on Significant Wave Height (SWH). Internal waves play an important role in determining the near-surface sea temperature structure and the air–sea exchange processes, and are therefore important for understanding the evolution of the climate system. In the presence of strong ISW–surface wave interaction, breaking surface waves are known to occur and hence it is expected that wave energy dissipates and the wave energy spectrum is altered.
Two different regions of the ocean are selected, namely the tropical Atlantic Ocean off the Amazon shelf and the Banda Sea in the Indian Ocean, where Sentinel-3 OLCI (Ocean and Land Colour Instrument) scenes were acquired simultaneously with along-track SAR-mode altimeter data and include signatures of large-amplitude ISWs. New data from the unfocused SAR (UF-SAR) and fully-focused SAR (FF-SAR) modes are analysed. In addition, processing in full and reduced-bin modes is explored.
Firstly, at smaller scales (1-3 km), a strong decrease in the normalized radar cross section (NRCS) has been observed over the rough part of the ISWs, and a small increase over the smooth part, relative to the unperturbed ocean background (Santos-Ferreira et al., 2018). Secondly, a less obvious and more subtle effect is noticed: the Significant Wave Height (SWH) parameter is significantly attenuated after the passage of an ISW, considering length scales of about 10 km before and after the ISW crest (i.e. over 20 km length scales). The SWH signatures are unique in showing that the surface wave energy does not return to its unperturbed level after an ISW passes, most likely because intense metre-scale wave breaking results in surface wave energy dissipation. It is suggested that the cause of this SWH attenuation is related to the wave breaking associated with the ISWs, characterized by surface wave energy dissipation, turbulence effects and air emulsion.
Furthermore, Sentinel-2 images are analysed and provide insights into this same phenomenon: two different kinds of white-capping are reported, the first being a traditional radiance increase at all (visible) wavelengths extended over time scales of tens of seconds, and a second kind associated with quick transient “flashes” of enhanced radiance depicted in differently coloured pixels in RGB composite images, with typical time scales of one second or less. The fraction of modulation of breaking waves in the presence of internal waves is presented.
Geophysical parameters are estimated from Sentinel-3 FFSAR 160-Hz and UFSAR 20-Hz waveforms, using a common retracking algorithm. An improved retracking model is introduced by means of a mean square slope (mss) parameter, to account for the dependence of sigma0 on the viewing angle, using geometrical optics theory as done in some published works (see Dinardo et al., 2021; Tourain et al., 2021). Such an adaptive retracking algorithm that additionally estimates a constant mss has been developed and applied in this study. Taking into account the peculiarities of the ISW features (propagation direction, etc.) and the altimeter observation geometry, we further propose a bin-reduced range method to reduce the range window used in the retracking, i.e. to estimate the parameters only on a portion of the waveform around the leading edge. The altimeter footprint size is in turn reduced in the cross-track direction, making the constant-mss assumption more likely to be valid, at the expense, however, of an increase in the measurement noise. Hence, two different approaches were considered: one using the full range of the waveform, as is classically performed in operational SAR-mode processing, and the other retracking the waveforms over a reduced range of bins (more precisely, a truncation is carried out dynamically ten gates away from the estimated epoch position), called 10-bin processing. The consistency of the results between the different processing methods is discussed, i.e. both UFSAR and FFSAR (in full and reduced-bin modes), particularly with respect to the impact of ISW signatures on SWH and sigma0.
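The bin-reduction step can be sketched as a simple truncation of the waveform a fixed number of gates past the estimated epoch (a simplified sketch; the operational retracker handles this dynamically during the estimation):

import numpy as np

def reduced_range_window(waveform, epoch_gate, n_gates=10):
    """Keep only the part of an altimeter waveform up to n_gates past the estimated
    epoch, so that retracking uses essentially the leading edge and its vicinity."""
    stop = min(int(round(epoch_gate)) + n_gates, waveform.size)
    return waveform[:stop].copy()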
The TRISHNA mission (Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment) is a cooperation between the French (CNES) and Indian (ISRO) space agencies. It will measure the optical and thermal spectra emitted and reflected by the Earth from a low-altitude Sun-synchronous orbit, over a swath with a width of 1026 km. It is intended to measure, approximately twice a week, the thermal infrared signal of the surface-atmosphere system at 57 m resolution over the continents and the coastal ocean, and at a resolution of 1000 m over the deep ocean. The spectral domain covers 11 bands from visible to thermal infrared, which are distributed over two instruments: the VNIR/SWIR Indian instrument and the TIR French instrument. The 4 thermal infrared bands of the TIR instrument are used for emissivity and surface temperature retrieval.
A high radiometric performance is required for the TIR instrument, with a radiometric noise better than 0.2 K in the most demanding bands and an absolute radiometric calibration accuracy better than 0.5 K. The targeted launch date for the TRISHNA satellite is 2025, positioning it as a precursor of ESA's LSTM Copernicus mission. TRISHNA is designed for a lifetime of 5 years. The primary scientific objectives of the mission are to provide high-quality imagery of vegetation, snow, ice, sea surface temperature and albedo. In coastal areas, the deep interactions between the ocean, the atmosphere and the land generate strong variability in the surface temperature at very fine scales. It is therefore of interest to measure the surface water temperature with high spatial and temporal resolution, as this information can have several uses. Thermal imaging and optical data with high spatial resolution and frequent observations will bring key information on sea surface temperatures, sub-mesoscale activity in coastal areas and in the high seas, continental waters (lakes and rivers), as well as oil spills, weather forecasting, thermal pollutants, effluents and wastewater discharges.
The Antarctic Ice Sheet is the largest source of potential sea level rise, containing enough water to raise global sea-levels by 58 meters. In Antarctica, ice mass loss is almost entirely (98.9%) driven by ice dynamics (changes in ice flow, calving of icebergs, and melting at the ice-ocean interface), in comparison to the Greenland Ice Sheet where extreme surface melt events dominate the ice loss signal (Slater et al., 2020). Ice dynamics are primarily driven by oceanographic rather than atmospheric processes, making it critically important that the links between Antarctic ice and the South polar ocean are studied. High interannual variability in the observed mass change for the Antarctic dynamic component (0.46 ± 0.16 mm/yr) is not currently reproduced in AR5 and may not represent the longer-term mass imbalance (Slater et al., 2020).
Floating ice shelves fringe 74% of Antarctica's coastline, providing a direct link between the ice sheet and the surrounding oceans. Ice shelves are important because (i) they have retreated and thinned in key parts of the continent during a period of environmental change (Shepherd et al., 2003; Paolo et al., 2015), (ii) their buttressing effect is known to modulate the grounded ice sheet contribution to global sea level rise (Rignot et al., 2004; Reese et al., 2018), (iii) ice melting causes ocean freshening which in turn influences patterns of ocean circulation, and (iv) changes in their mass contribute a modest amount to the rate of sea level rise due to steric effects (Shepherd et al., 2010). Over the past three decades, satellites have observed the retreat (Scambos et al., 2009; Cook and Vaughan, 2010), thinning (Paolo et al., 2015) and disintegration (Rott et al., 1996) of Antarctic ice shelves. While these examples demonstrate that ice shelves can respond to change over short timescales, long data records are required to disentangle natural variability from longer term more permanent change.
Despite the critical role that the ice sheet and ice shelves play in the Antarctic glaciological system, significant gaps remain in our understanding of the impact of the Southern Ocean as a driver of both short and long-term environmental change. In this study we present an overview of results from the ESA Polar+ Ice Shelves and ESA SO-ICE projects, showing a decade of observations of recent change in Antarctic ice shelf behaviour and the role that ocean heat and circulation may have in driving this. We generate timeseries of ice shelf thickness change and basal melt from 2010 to 2021 and combine this with observations of ice shelf area change over the same period. We present data from a case study of the Weddell Sea region, where the latest high resolution ocean models and in situ observations are used to generate new measurements of oceanographic change. Combined, ESA projects such as Polar+ Ice Shelves and SO-ICE enable us to deliver a step change in the spatial resolution and temporal frequency with which ice-ocean interactions are understood. In the future this will improve our knowledge of ice-ocean interactions and their impact on the sensitive Antarctic system.
Fronts within the ocean surface mixed layer have been identified as agents for enhanced communication between deeper ocean waters and the atmosphere. Given the importance of such exchanges for physics and biogeochemistry in the ocean and atmosphere, obtaining a better understanding of the dynamics of such phenomena appears paramount.
At horizontal scales smaller than 10 km and commonly referred to as “submesoscale,” frontal shears are elevated and stratification reduced to an extent that gradient Rossby and Richardson numbers have values which approach unity. The end result is that frontal instabilities are more common, including forced and unforced gravitational, inertial (i.e., relative vorticity equal in magnitude but opposite in sign to the Coriolis parameter) and symmetric instabilities, as well as a mixed layer analogue of the classical baroclinic instability first described by Charney, Eady, and Stone. Since frontal instabilities are believed to impact energy, buoyancy, and tracer budgets in the upper ocean, dynamics which enhance or reduce these phenomena merit some attention.
It has recently been recognized that the centrifugal accelerations experienced by fluid parcels within ocean fronts—or frontal curvature—may play an important role in determining the stability of fluid parcels. Modification of the vertical shear introduces a tilting of the absolute vorticity vector which tends to stabilize anticyclonic and de-stabilize cyclonic curved fronts. This also applies locally to dynamics found within eddies or vortices. Moreover, Earth’s rotation imparts angular momentum to fluid parcels and this too modifies their stability. Because departures from geostrophic balance owing to centrifugal forces are more common for submesoscale flows than for larger-scale flows, it follows that centrifugal accelerations merit attention.
In this presentation, we summarize early efforts to characterize frontal curvature north of the Gulf Stream in the North Atlantic, made as part of a larger effort to better understand how centrifugal forces might affect parcel motion within ocean fronts. We focus on realistic numerical simulations that, roughly speaking, permit mixed layer baroclinic instability and therefore resolve eddy scales of interest. Questions posed and explored include the following: where do we expect submesoscale eddying flows to be most prevalent, can we distinguish between eddies and fronts, do observed radii match the mixed layer deformation radius and, if not, are we saying something about the inverse energy cascade? We use this analysis to motivate a future characterization of such flows from satellite measurements (e.g., SEASTAR) together with in situ observations at these locations.
Satellites are important tools for monitoring the ocean environment on a global scale. Traditional in-situ observations can be effectively supplemented by DNA technologies. In this paper, we consider an example involving surfactant associated bacteria and synthetic aperture radar imagery. Remarkably, certain types of bacteria are associated with natural and anthropogenic surfactants (like oil spills). Surfactants suppress short gravity-capillary waves and contribute to the formation of slick-type features on the sea surface. Slicks are seen as dark spots in synthetic aperture radar (SAR) satellite imagery. We have analyzed the abundance of surfactant associated bacteria on the sea surface and below the sea surface to understand their role in slick formation under different environmental conditions. The experiments were conducted in the framework of the Consortium for Advanced Research on Transport of Hydrocarbon in the Environment studies funded by the Gulf of Mexico Research Initiative in April 2017 in the Gulf of Mexico, some near a known oil seep from a damaged oil platform (Taylor Energy), and in July-August 2018 in the Straits of Florida (Looe Key) near a coral reef. The in-situ measurements were coordinated with SAR satellite imagery (TerraSAR-X, RADARSAT-2, and Sentinel-1). In-situ samples were taken from a small boat. Hydrophilic polycarbonate filters were used for sampling a 40-μm sea surface layer outside the area disturbed by the boat and boat wake. A peristaltic pump was used to collect subsurface water from approximately a 0.2 m depth. Identification of surfactant associated bacteria was done using DNA technology. For this purpose, the surface and subsurface samples were placed into the sterile MoBio bead tubes that were later used for DNA extraction. The samples were stored on dry ice in the field and then transferred to a -80˚C freezer until DNA extraction. Subsequent DNA extraction was performed in a sterile lab environment. The extracted DNA of each sample was sent to Argonne National Laboratory (ANL) following standard protocol to be amplified and sequenced on the Illumina MiSeq platform. The samples that were accidentally contaminated or were interfered with during field collection (i.e., touching an arm or boat superstructure, folding, etc.) were discarded. To account for possible contamination by airborne bacteria during transportation from the sea surface to the MoBio bead tube, an air control sample was taken at each site. For this purpose, a filter was exposed to the air for approximately 10 s with sterile forceps, then placed directly into the MoBio bead tube. Also, non-exposed control filters, which were only removed from their sterile containers during DNA extraction, served to estimate possible laboratory contamination. Control filters were sent to ANL together with the ocean samples. Due to the natural variability of bacterial communities and sea slicks in time and space, ten successful surface and ten successful subsurface samples were taken at each station and averaged in the subsequent data analysis. Illumina MiSeq DNA sequencing identified thousands of genera of bacteria including twelve surfactant associated bacteria. Surfactant and oil associated bacteria were abundant in the oil-slick and low wind speed areas observable in SAR.
Surfactant associated bacteria were typically equally abundant on the sea surface and in the subsurface water during nighttime but were found in reduced abundance on the surface relative to the subsurface water during daytime, which was presumably due to the effect of ultraviolet radiation. The results of this case study indicate that approaches involving satellite oceanography and DNA methods may contribute to the monitoring of the ocean environment and biophysical interactions in the ocean on a global scale.
To resolve fine scale environmental forces relevant for marine ecosystem function, new satellite derived products are required, especially along the coasts of the world. In this context, the Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) from NASA provides sea surface temperature imagery with an unprecedented 70×70m pixel size. In this study we compared this new product with imagery from VIIRS, a well-known and commonly used product in oceanographic studies worldwide. Specifically, along the coasts of the main upwelling systems of the world off Chile, South Africa, California and Iberia we analysed SST gradients across conspicuous thermal structures with ECOSTRESS scenes and with quasi-simultaneous (less than 90 minutes apart) NOAA-20 VIIRS images with a pixel size of 750×750 m. We found that ECOSTRESS successfully quantifies sub-pixel scale physical structures like upwelling shadows, fronts and filaments that are not properly resolved with NOAA-20 VIIRS. Furthermore, internal wave-like filaments with different orientations within large eddies were visible off the NW coast of Iberia only with ECOSTRESS, while VIIRS imagery failed to capture these previously unknown features. In general, frontal location, gradient magnitude and sharpness are much better defined with the use of ECOSTRESS. Thus, the novel imagery from ECOSTRESS provides an important complement to the operational suite of products for the study of fine-scale ocean dynamics and, especially for the characterization of frontal areas with enhanced biological activity. Moreover, it may exceed even the capacities of the new generation of instruments like the Thermal Infrared Imaging Satellite for High Resolution Natural Resource Assessment (TRISHNA) to be launched by the end of 2024. This sensor will provide a similar spatial resolution with a 57-90 m pixel size, but only for the land and coastal regions, while a much coarser 1x1 km pixel will be applied for the deep ocean (>100 km offshore). As upwelling-related thermal features like the ones we have recorded off NW Spain, California and South Africa often lie more than 100 km offshore, TRISHNA may not be able to fully resolve their structure on the spatial scales provided by ECOSTRESS. Therefore, it is important to consider the new technical improvements introduced by ECOSTRESS when planning future missions for the monitoring of ocean surface dynamics.
Deep convection in the Southern Ocean is a key component of the global overturning circulation, drawing excess anthropogenic heat and carbon out of the atmosphere and ventilating the abyssal ocean. Antarctic Bottom Water (AABW) is formed as ocean-ice-atmosphere interactions in the Southern Ocean cause cold, saline water around Antarctica to sink and descend to the deep ocean along the sloping isopycnals leading off the continental shelf. This can occur as ‘plumes’, which cascade down the shelf, or through open ocean deep convection.
Despite their important role in regulating the climate, these processes are not yet well understood. The small temporal and spatial scales over which they occur limit the usefulness of existing observations, which are sparse and infrequent. Additionally, results from the CMIP6 ensemble show a large variability in predictions of AABW formation between models and little agreement in closely related processes, such as stratification, mixed layer depth and the occurrence of polynyas. More widespread and frequent observations are required to gain a better understanding of the underlying processes and provide a baseline for model validation.
Satellite products of increasing quality are becoming more readily available and offer a promising method for capturing the extent and variability of deep-water formation in polar regions. Sites of AABW formation are characterised in the water column by increased density, which can be observed at the surface as a lower steric height.
We present the steric height anomaly as a proxy for deep-water formation in the Southern Ocean south of 60°S, calculated using the sea surface height from a combination of satellite altimetry products. We consider the DOTA database, consisting of CryoSat-2 and Envisat measurements, the SSALTO/DUACS experimental dataset of gridded sea surface heights, and the ICESat-2 ATL21 sea surface height anomaly product. The steric height is then calculated by accounting for the eustatic height and the mean sea level anomaly, from GRACE and ERA5 respectively. Sites of deep-water formation and changes in AABW production over time are inferred from the steric height and compared to available in-situ observations.
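Schematically, and ignoring the details of gridding and the individual corrections, the steric height anomaly used as a proxy reduces to the following decomposition (field names are placeholders, not product names):

import numpy as np

def steric_height_anomaly(ssh_anom, mass_anom, msl_anom):
    """Steric height anomaly (m) as the altimetric sea surface height anomaly minus
    the mass (eustatic) contribution and the mean sea level anomaly term, all on a
    common grid; low values flag dense water columns and potential deep-water
    formation sites."""
    return np.asarray(ssh_anom) - np.asarray(mass_anom) - np.asarray(msl_anom)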
Shinyang Bay, located on the eastern coast of Jeju Island, Korea, is a semi-enclosed bay. Shinyang Bay has a wide sandy beach along the north to east coast and a shallow water depth that is greatly affected by tidal changes. Submarine groundwater is discharged near Shinyang Port, and land-based aquaculture farms, which started commercial flatfish farming for the first time in Korea, are located along the west coast. Green-tide outbreaks began in the mid-1990s and have continued to occur until now. The purpose of this study is to analyze the cause of the green-macroalgae bloom using big data. Hydrographic observations consisted of monthly CTD casts and sample analyses (nutrients, POC, seaweed) at 9 points in the bay and 12 coastal points, and ocean currents were measured using small GPS drifters from 2019 to 2021. The green-tide occurrence area was calculated using drone, aerial and Sentinel images. Images were acquired during monthly low tides, and individual images were mosaicked, masked and classified by object. A numerical model simulated the circulation in the bay. The occurrence area of green tide was lowest in March and highest in June. During 2020, the cumulative occurrence area of green tide was about 2 million m2. Green tide occurred continuously along the eastern and western coasts and in the middle part of the bay. The first reason is the rise of sea surface temperature during winter. The second is the change in the circulation pattern due to artificial structures in the bay. This causes seaweed to accumulate on the beach, while the nutrients supporting growth are supplied by discharges from aquaculture farms and submarine groundwater. The results of the numerical model and drifter measurements showed that the port construction at the western entrance in the late 1990s increased the residence time compared to before construction and increased the number of particles flowing onto the beach rather than out of the bay.
Southern Ocean Freshwater (SO FRESH) is a recent ESA funded project (2021-2023) included in the Polar Cluster Initiative. Polar Cluster aims at establishing collaboration with the existing projects in polar areas to maximize unique, added-value capabilities from ESA missions and remote sensing missions in general. In that sense, Polar Cluster intends to fill knowledge gaps in specific hot topics of polar research. SO FRESH main goal is to improve our understanding of the different processes governed or affected by freshwater fluxes in the Southern Ocean.
SO FRESH scientific objectives are based on four specific case studies: i) to improve our understanding on the changes in Antarctic Sea Ice; ii) to characterize the drivers of the formation of the Weddell Polynya in 2016-2017; iii) to assess salinity changes in the Antarctic coastal region and elucidate their causes/consequences; and iv) to analyze the formation of deep water via remote sensing variables.
A key aspect of SO FRESH is the availability of continuous series of accurate geophysical variables with a space-time resolution adequate for the case studies. The key ocean variable for the four case studies is Sea Surface Salinity (SSS). SSS can be used in combination with other ocean variables (i.e. Sea Surface Temperature, Sea Surface Height Anomalies) to advance the state of the art of Southern Ocean freshwater fluxes, Sea Surface Density variability and Water Mass Transformation Rates. In that regard, special effort will be put into applying all recent advancements in SSS processing in the context of the ESA SMOS mission, in order to enhance its quality: nodal sampling, PSF correction, TB fusion, enhanced Debiased non-Bayesian retrieval, etc. Specific quality metrics, developed for SO FRESH, will be used for data quality control and validation.
SO FRESH started in May 2021, and the first set of data is expected to be available for distribution by the beginning of 2022. In this talk, we will present the results obtained during the first months of the project.
The Mediterranean Sea is a mid-latitude, semi-enclosed sea where evaporation exceeds precipitation and river runoff due to a dry, windy, and relatively warm regional climate. The resulting imbalance is compensated by a strong inflow of Atlantic Water (AW) through the Strait of Gibraltar. The AW flows cyclonically around the whole Mediterranean and is continuously modified along its path by evaporation and mixing with resident waters. When the AW present in the southern part of the Western Mediterranean encounters the saltier modified Atlantic Water (mAW), mostly present in the northern part, sharp surface density fronts are created that are readily identified from satellite imagery and can be observed year-round. These density fronts become unstable and generate an intense ageostrophic secondary circulation associated with strong surface convergence and vertical velocities. These regions are key to the exchange of water properties, including heat, carbon and oxygen, between the ocean interior and the surface.
Here, we use satellite observations from different sensors (sea surface temperature (SST), sea surface chlorophyll concentration (CHL) and altimeters) to identify surface frontal regions. The high spatial resolution of the SST (0.02° × 0.02°) and CHL (1 km × 1 km) products allows us to identify even the smaller frontal filaments associated with submesoscale horizontal structures below 10 km. Gradient magnitudes are calculated using centered differences, and a pixel is flagged as a front if the gradient at that location exceeds a threshold value. We use cumulative histograms of the gradient values to identify the small percentage of pixels with the highest gradients. This method avoids spurious front detections and the results are less contaminated than those obtained with a fixed threshold.
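The front-detection step can be summarized by the short sketch below: gradient magnitudes from centered differences, followed by a percentile cut-off taken from the cumulative gradient histogram. Variable names and the 99th-percentile value are illustrative assumptions, not the exact operational settings.

```python
# Minimal sketch of gradient-based front detection on a 2-D SST (or CHL) field.
import numpy as np

def detect_fronts(field, percentile=99.0):
    """Flag pixels whose gradient magnitude lies in the upper tail of the
    cumulative gradient histogram (percentile-based, not a fixed threshold)."""
    gy, gx = np.gradient(field)          # centered differences in the interior
    grad_mag = np.hypot(gx, gy)
    threshold = np.nanpercentile(grad_mag, percentile)
    return grad_mag > threshold          # boolean front mask
```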
Thanks to the continuous time series of daily data provided by the Copernicus Marine Service, we analyze the seasonal and annual variability of the frontal regions over the period 2016-2020, focusing on the Western Mediterranean Sea.
Intermittent wind-driven coastal upwelling and downwelling are ubiquitous processes that drive a large part of the high-frequency variability of coastal hydrography, with potential implications for ecosystems and socio-economic activities. However, little synoptic information exists on these processes, especially in regions characterized by rapidly changing atmospheric forcing and complex shorelines. In addition, recent work suggests that remotely sensed L4 Sea Surface Temperature largely underestimates the thermal fingerprint of coastal upwelling due to artificial flagging procedures.
Combining multi-annual hourly in-situ observations of nearshore temperatures with a long-term archive of Sea Surface Winds (SSW, from the ERA5 reanalysis), we investigate the statistical occurrence of wind-driven upwelling and downwelling events and their associated thermal responses along the northwestern Mediterranean coastlines. After validating the gridded SSW product against in-situ wind measurements, a Wind-based Upwelling and Downwelling Index (WUDI) is calculated at 20 km spatial resolution and validated against time series of surface and subsurface in-situ temperatures at 11 coastal locations.
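The exact WUDI formulation is not reproduced here; the sketch below only illustrates the general idea of a wind-based index built from the alongshore wind-stress component and the local coastline orientation, in the spirit of an Ekman-transport calculation. All constants, names and the sign convention are assumptions for illustration.

```python
# Generic wind-based upwelling/downwelling index from 10 m winds and coast angle.
import numpy as np

RHO_AIR = 1.22       # air density [kg m-3] (assumed constant)
CD = 1.3e-3          # bulk drag coefficient (assumed constant)
RHO_SEA = 1025.0     # seawater density [kg m-3]
F_CORIOLIS = 1.0e-4  # Coriolis parameter near 43N [s-1]

def wind_index(u10, v10, coast_angle_deg):
    """Cross-shore Ekman transport per unit coastline length [m2 s-1].
    Which sign is upwelling-favourable depends on how the coastline
    orientation angle is defined."""
    speed = np.hypot(u10, v10)
    tau_x = RHO_AIR * CD * speed * u10                      # wind stress [Pa]
    tau_y = RHO_AIR * CD * speed * v10
    theta = np.deg2rad(coast_angle_deg)                      # coast orientation
    tau_along = -tau_x * np.sin(theta) + tau_y * np.cos(theta)
    return tau_along / (RHO_SEA * F_CORIOLIS)
```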
We find that the WUDI index allows robust year-round monitoring of both upwelling and downwelling events that effectively cause coastal cooling and warming. On average, significant thermal responses to favorable winds appear after short delays (spanning 6-54 h for upwelling and 12-66 h for downwelling, depending on the site considered), with intensities 5 to 10 times stronger in stratified than in non-stratified conditions. Maximum near-surface cooling (subsurface warming, respectively) recorded after the most extreme events can reach -12°C (+9.5°C, respectively) during the period of seasonal stratification.
A climatological database of wind-driven events that can be associated with typical thermal responses is constructed for the northwestern Mediterranean shorelines over the last four decades. It shows that up/downwelling events are favored along certain portions of coastline, called "cells", and are characterized by specific magnitudes, frequencies of occurrence and durations with respect to seasonality. We demonstrate that shorelines ranging from 4.0°E to 6.2°E are dominated by wind-driven upwelling, while shorelines ranging from 3.0°E to 3.5°E are dominated by wind-driven downwelling. Furthermore, it reveals previously overlooked cells, such as around Fréjus/Cannes and Livorno/Piombino for upwelling and near Albenga for downwelling, which are however activated about 2-3 times less frequently than the prominent cells in the Gulf of Lion. Despite differential responses, these wind-driven events are more frequent during winter-spring than during summer-autumn: for both upwelling and downwelling, the mean occurrences at the most active cells are 11 days per month in winter-spring compared to 8 days per month in summer-autumn. While the main upwelling (resp. downwelling) events are generated by the prevailing northwesterlies (resp. easterlies), both winds also force the opposite process depending on the shoreline orientation and small changes in wind direction. More generally, our conclusions suggest that future integrated analyses of the next generation of SSW and SST products, along with high-frequency in-situ observations, show great promise in deciphering the full variability of the coastal ocean.
1. INTRODUCTION
CFOSAT is a mission developed under the responsibility of the French and Chinese space agencies (CNES and CNSA). This spaceborne system was launched on 29 October 2018 with the main objective of monitoring ocean surface winds and waves at the global scale. The satellite carries two radar scatterometers, both scanning in azimuth: SCAT, a fan-beam wind scatterometer [2], and SWIM, a wave scatterometer [3]. With its collocated measurements of ocean surface wind and waves, CFOSAT is a great source of information on ocean surface processes and ocean/atmosphere interactions, allowing improvements in atmospheric and oceanographic models and predictions. It also opens a wide field of applications for sea ice monitoring.
SWIM is an innovative Ku-band real-aperture wave scatterometer, with 6 low-incidence rotating beams [3].
Its design allows the systematic production of directional spectra of ocean waves with a real-aperture radar system, and the associated performance has been demonstrated [5]. This design also offers opportunities over ice regions thanks to its original viewing-angle configuration. SWIM complements, for the very first time, other existing concepts such as altimetry, scatterometry and radiometry. This complementarity at instrument and product level is discussed in Section 2. A first exploitation of the SWIM potential over sea ice is presented in Section 3, with a sea-ice detection algorithm based on a Bayesian approach [7], which shows promising results and opens perspectives for sea-ice characterization.
2. SWIM DESIGN AND PRODUCT ASSETS FOR SEA ICE STUDY
The SWIM instrument measures the normalized radar cross section (NRCS) from its 6 incidence beams: one at nadir and five low-incidence beams at 2, 4, 6, 8 and 10°, covering all azimuth angles thanks to the rotating antenna [1]. At nadir, sea ice reflects more energy than open water surfaces, leading to peaky waveforms. Echoes acquired over sea ice are not well processed by the usual retracking algorithms based on the Brown model [4]. In the CFOSAT SWIM ground segment, an "Adaptive retracking" has therefore been implemented [6]. In addition to reducing the noise of the retrieved geophysical parameters, this retracker is well adapted to processing peaky echoes thanks to a model that accounts for surface characteristics through a so-called pseudo mean square slope parameter. This improved processing provides a nadir normalized radar backscatter σ0 showing a high qualitative consistency with the sea-ice extent, and it also exhibits variations consistent with the ice type (ice age) [6]. SWIM products thus provide nadir information useful for sea-ice studies.
In addition, the SWIM off-nadir geometry offers an original viewing-angle configuration: SWIM bridges altimeter measurements at nadir with radiometer and scatterometer measurements at large incidences. To our knowledge, apart from preliminary studies on GPM data, sea ice has never been observed at near-nadir incidence in Ku band. At low incidences, ice induces a change in the radar backscattering, both in its level and in the shape of the NRCS profiles with incidence. At nadir, sea ice reflects more energy than open water surfaces, but this tendency reverses at slightly off-nadir angles. In particular, the radar backscatter of sea ice decreases very fast with incidence angle [8].
Most mechanisms building up the sea-ice backscatter are known [9]; however, the variability of the physical characteristics of sea ice makes it complicated to summarize the NRCS behavior in a simple parametric model, contrary to the open-water case. Thus, the SWIM NRCS profiles offer interesting potential for characterizing the ice itself (in particular ice age).
3. SEA-ICE DETECTION ALGORITHM FROM SWIM ECHOES
Sea-ice is quite favorably discriminated from open water in Ku-band, not only due to its lower backscatter, but also because sea-ice exhibits a convex radar response with incidence, while it is concave over open water.
The sea-ice detection algorithm presented in [7] is based on the estimation of a sea-ice likelihood, given the value of the NRCS and ancillary meteorological data that are readily available in the SWIM processing chain, in particular the surface wind speed and the sea surface temperature.
The Bayesian prior is quite flexible. In the absence of a more accurate definition, an empirical prior derived from the sea surface temperature is used, which at the same time prevents spurious detections of sea ice outside the polar regions. The likelihood is then estimated from an empirical knowledge of the NRCS distribution, i.e. a geophysical model function (GMF). In this algorithm, the mean behavior of the NRCS is first parametrized, and its uncertainty is then quantified to define an efficient and consistent flag. The GMFs are derived from our knowledge of theoretical backscatter models: for open water, both an incidence-angle and a wind-speed dependency are retained, based on the classical geometrical optics approximation; for sea ice, the estimation is based on the common model of the sum of a surface scattering and a volume scattering term.
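The structure of such a Bayesian detector can be illustrated with the short sketch below. The SST-based prior, the GMF shapes and all numerical values are placeholders, not the operational ones of [7]; only the combination of prior and likelihoods reflects the approach described above.

```python
# Illustrative Bayesian sea-ice posterior from one NRCS value plus ancillary data.
import numpy as np

def prior_ice_from_sst(sst_kelvin, t0=273.15, width=1.0):
    """Empirical prior: ice becomes likely as SST approaches the freezing point."""
    return 1.0 / (1.0 + np.exp((sst_kelvin - t0) / width))

def gaussian_likelihood(nrcs_db, mean_db, std_db):
    return np.exp(-0.5 * ((nrcs_db - mean_db) / std_db) ** 2) / (std_db * np.sqrt(2 * np.pi))

def ice_posterior(nrcs_db, incidence_deg, wind_speed, sst_kelvin):
    p_ice = prior_ice_from_sst(sst_kelvin)
    # Placeholder GMFs: open-water NRCS depends on incidence and wind speed,
    # sea-ice NRCS decays faster with incidence (both purely illustrative).
    mu_water = 12.0 - 0.8 * incidence_deg - 0.15 * wind_speed
    mu_ice = 20.0 - 2.0 * incidence_deg
    l_water = gaussian_likelihood(nrcs_db, mu_water, std_db=2.0)
    l_ice = gaussian_likelihood(nrcs_db, mu_ice, std_db=3.0)
    return l_ice * p_ice / (l_ice * p_ice + l_water * (1.0 - p_ice))
```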
The comparison of the sea-ice detection based on this algorithm with model data and other sensors shows up to 98% accuracy. The Bayesian prior prevents quality degradation for data outside polar areas. Comparison with external remote sensing products shows better performance of the derived sea-ice flag than of the existing flag. Areas of improvement have been identified for this algorithm, such as accounting for the dependency of the standard deviations on sea state, or refining the sea-ice GMF with sea-ice type inputs. SWIM sea-ice flag products are planned to be produced and provided to users within the CFOSAT products.
4. CONCLUSION
The SWIM instrument, by its design, offers great possibilities for sea-ice analysis. Thanks to the NRCS profiles measured at low incidences, between 0 and 10 degrees, it can provide a complementary set of data for sea-ice characterization, such as sea-ice type/age, which can also be combined with information obtained from the nadir data processed by SWIM. Concerning sea-ice extent, a first application exploiting those data has been established with a sea-ice detection algorithm showing conclusive results, which will lead to a first dedicated SWIM sea-ice product.
SWIM data thus show a strong potential to be exploited for further applications over sea-ice areas. An additional research field is the impact of waves on sea ice; in this area, the directional wave spectrum can indeed be estimated.
REFERENCES
[1] Hauser D. et al., “Overview of the CFOSAT mission”, IGARSS’2016, Beijing (China), July 2016
[2] Liu Jianqiang, Wenming Lin, Xiaolong Dong, et al., "First Results From the Rotating Fan Beam Scatterometer Onboard CFOSAT", doi: 10.1109/TGRS.2020.2990708, 2020.
[3] Hauser D., et al, SWIM: the first spaceborne wave scatterometer, 10.1109/TGRS.2017.2658672, 2017
[4] G. S. Brown, "The average impulse response of a rough surface and its applications", IEEE Trans. Antennas Propagat., vol. AP-25, pp. 67-74, Jan. 1977.
[5] Hauser D. et al., "New observations from the SWIM radar on board CFOSAT: instrument validation and ocean wave measurement assessment", doi: 10.1109/TGRS.2020.2994372, 2020.
[6] C. Tourain et al., "Benefits of the Adaptive Algorithm for Retracking Altimeter Nadir Echoes: Results From Simulations and CFOSAT/SWIM Observations," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3064236.
[7] Peureux C. et al., "Sea-ice detection from near-nadir Ku-band echoes of CFOSAT/SWIM scatterometer", Journal of Geophysical Research: Oceans, submitted.
[8] Giles, Katharine A. et al., "Combined airborne laser and radar altimeter measurements over the Fram Strait in May 2002", Remote Sensing of Environment, 111 (2007): 182-194.
[9] Landy, J. C., Tsamados, M., & Scharien, R. K. (2019). A Facet-Based Numerical Model for Simulating SAR Altimeter Echoes From Heterogeneous Sea Ice Surfaces. IEEE Transactions on Geoscience and Remote Sensing, 57(7), 4164-4180. https://doi.org/10.1109/TGRS.2018.2889763
We present an algorithm for computing ice drift in the marginal ice zone (MIZ), based on partial shape recognition. With the high spatial resolution of Sentinel-1 and Sentinel-2 images, and the low sensitivity of Sentinel-1 to atmospheric influences, a considerable quantity of ice floes is identified using a mathematical morphology method. The Hausdorff distance is used to measure the similarity of the segmented ice floes. It is tolerant to perturbations and deficiencies of floe shapes, which enhances the density of retrieved sea ice motion vectors. The algorithm can be applied to sequential images from different sensors, and was tested on two combined image mosaics consisting of Sentinel-1 and Sentinel-2 data acquired over the Fram Strait; the algorithm successfully produced pairs of matched ice floes. The matching result has been verified using the shape and surface texture similarity of the ice floes. Today, an enormous amount of Earth observation data is freely available for mapping Arctic sea ice in the MIZ, including active and passive high spatial resolution satellite missions such as Sentinel-1 (S-1) and Sentinel-2 (S-2). Sea ice drift, however, hampers the registration of images from different sensors and impedes long-term monitoring of individual floes. Thus, to overcome this limitation of sea ice monitoring, fine (pixel-to-pixel) sea ice motion information is required. In this paper, we propose an algorithm (PHD) with two main objectives: (1) mapping the drift of discrete floes in the MIZ based on partial shape matching, and (2) pixel-to-pixel registration of ice floes in S-1 and S-2 images. The proposed approach provides information about ice drift on a predefined grid, or a Lagrangian observation of each ice floe. One of the major advantages of using the Hausdorff distance approach is that it can match ice floes with partial shape similarity, which improves the algorithm's agility for ice drift retrieval from images with ice floes whose shapes change due to melting or the formation of leads and ridges.
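To make the matching step concrete, the sketch below shows Hausdorff-distance shape matching between floe outlines, assuming the outlines are available as Nx2 arrays of boundary coordinates from the morphological segmentation. The classic symmetric Hausdorff distance is shown; the partial variant actually used in the study, which is tolerant to missing boundary sections, would replace the maximum with a rank statistic.

```python
# Sketch of floe matching with the (symmetric) Hausdorff distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(shape_a, shape_b):
    """Symmetric Hausdorff distance between two boundary point sets (Nx2 arrays)."""
    d_ab = directed_hausdorff(shape_a, shape_b)[0]
    d_ba = directed_hausdorff(shape_b, shape_a)[0]
    return max(d_ab, d_ba)

def best_match(floe, candidates):
    """Index of the candidate floe (from the second image) most similar to `floe`."""
    distances = [hausdorff(floe, c) for c in candidates]
    return int(np.argmin(distances))
```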
Ice sheets store about 90% of the freshwater of the world's oceans and can therefore significantly contribute to sea-level rise. The Antarctic ice sheet is the largest single mass of ice on the planet. Hence, synoptic monitoring of the Antarctic ice sheet is of utmost scientific and operational importance [1, 2]. One of the largest and most important Antarctic glacial features is the 140 km long Drygalski Ice Tongue (DIT) in Victoria Land, East Antarctica, which extends 90 km into the Ross Sea [3]. The DIT plays an important role in the Southern Ocean circulation and in the formation of the Terra Nova Bay polynya [4]. All this suggests that obtaining updated and continuous information on the evolution of the DIT over time is of crucial importance for environmental monitoring purposes.
In this context, satellite synthetic aperture radar (SAR) is a key instrument that allows overcoming most of the measurement issues that characterize the study area, i.e., harsh environmental conditions, lack of solar illumination, and the need for clear-sky weather conditions. This study aims at analyzing the time variability of the DIT coastline using a time series of Sentinel-1 Interferometric Wide Swath SAR imagery. The analysis focuses on interesting features that include the DIT coastal front and some meaningful surface fractures that appear during the considered period along both sides of the DIT. Those features are then tracked over time to infer information about the evolution of the DIT morphological structure, including the displacement of some reference points and the annual mean surface velocity of DIT fractures.
The analysis is carried out according to the following 3-step methodology [5]: 1) pre-processing of the HH-polarization SAR images (radiometric calibration, speckle filtering with a 7×7 boxcar filter, geocoding); 2) coastline extraction, i.e. extraction of the continuous boundary between the DIT profile and the surrounding ice-free/ice-infested water, using a global-threshold constant false alarm rate approach and an edge detector based on the Canny kernel; 3) evaluation of objective metrics that quantify the DIT time variability.
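A simplified sketch of steps 1 and 2 is shown below: boxcar speckle smoothing, a global threshold separating the ice tongue from the surrounding water, and Canny edge detection of the resulting binary mask. The plain global threshold stands in for the constant false alarm rate criterion, and the numerical values are illustrative only.

```python
# Simplified coastline extraction from a calibrated, geocoded sigma0 image (dB).
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import canny

def extract_coastline(sigma0_db, threshold_db=-14.0):
    smoothed = uniform_filter(sigma0_db, size=7)       # 7x7 boxcar speckle filter
    ice_mask = smoothed > threshold_db                 # ice brighter than open water
    edges = canny(ice_mask.astype(float), sigma=1.0)   # boundary of the ice mask
    return edges
```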
Preliminary results obtained by processing a dataset of 12 Sentinel-1 SAR scenes collected during March and April in the period 2016-2021 show that: a) the proposed methodology is effective and accurate in extracting the DIT coastal profile; b) the features of the DIT move seaward over time, with the southern (northern) side of the DIT showing no remarkable change (a non-negligible change) in shape; c) some rifts have appeared along the northern side of the DIT in recent years; and d) the fractures are characterized by a mean surface velocity of approximately 600 meters per year.
1. B. Lambert and D. G. Long, “Monitoring changes in the Antarctic Ice Sheet from 1978 to 2007”, IEEE International Geoscience and Remote Sensing Symposium, 2008.
2. F. Remy and P. Soazig, “Antarctic ice sheet and radar altimetry: A review”, Remote Sensing, 1 (4): 1212-1239, 2009.
3. M. Frezzotti and M. C. G. Mabin, “20th-century behavior of Drygalski Ice Tongue, Ross Sea, Antarctica”, Annals of Glaciology, 20: 397-400, 1994.
4. C. Stevens, W. S. Lee, G. Fusco, S. Yun, B. Grant, N. Robinson, and C. Y. Hwang, “The influence of the Drygalski Ice Tongue on the local ocean”, Annals of Glaciology, 58 (74): 51-59, 2017.
5. A. Buono, F. Nunziata, L. Mascolo, and M. Migliaccio, “A multi-polarization analysis of coastline extraction using X-band COSMO-SkyMed SAR data”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7 (7): 2811-2820, 2014.
ABSTRACT: In this work we focus on the estimation of sea surface currents using Automatic Identification System (AIS) data streams in the Mediterranean Sea. We propose to use deep learning techniques to solve the associated ill-posed inverse problem. For methodological purposes we compare two different approaches: the first relies on a physically constrained unsupervised technique, whereas the second exploits a supervised framework and a dataset of in-situ observations from HF radar. Performance is evaluated using ground-truth measurements provided by drifting buoys and HF radar over the area of the Sicily Channel. We show that both AIS-derived products outperform satellite-altimetry-derived ones in terms of reconstruction performance. When comparing the two learning frameworks, the use of supervised learning algorithms leads to the best performance.
INTRODUCTION: A better understanding of upper-ocean dynamics is of key importance for a wide range of applications such as climate modeling, wave forecasting [1], or maritime ecology. On a global scale, sea surface currents are derived using a geostrophic assumption and observations of sea surface height (SSH) provided by satellite altimeters. However, SSH observations remain scarce, which makes the reconstruction of submesoscale dynamics challenging. This motivates the assimilation of data coming from other sensors: some efforts have been made [2] to include observed quantities such as sea surface temperature (SST) or salinity to improve the reconstruction of sea surface currents, but these remain indirect observations. As suggested by recent studies [3, 4, 5], the worldwide monitoring of maritime traffic through the Automatic Identification System (AIS) could provide new means to complement the above-mentioned remote sensing technologies. The movement of a ship being determined not only by the considered routing but also by sea surface wind and current conditions, one may regard ships as potential samplers of sea surface currents. Given the intrinsic features of AIS data streams, especially the not perfectly linear relationship between the movement of the vessel and the current, and the irregular and possibly corrupted space-time sampling of AIS data, solving this inverse problem is a complex issue.
Here we focus on a real case study: the area of the Sicily Channel. It is a region of high-density maritime traffic which benefits from HF radar observations. This makes the area suitable for testing a supervised learning framework while providing reliable validation metrics.
Our key contributions are as follows:
• We confirm, on a new real case study, the relevance of AIS for the reconstruction of sea surface currents.
• We show that deep-learning-based methods outperform Optimal Interpolation (OI) based ones.
• We show that, for this case study, a supervised-learning approach leads to the best reconstruction performance.
[1] Fabrice Ardhuin, Sarah T. Gille, Dimitris Menemenlis, Cesar B. Rocha, Nicolas Rascle, Bertrand Chapron, Jonathan Gula, and Jeroen Molemaker, "Small-scale open ocean currents have large effects on wind wave heights," Journal of Geophysical Research: Oceans, vol. 122, no. 6, pp. 4500–4517, 2017.
[2] Leonid I. Piterbarg, "A simple method for computing velocities from tracer observations and a model output," Applied Mathematical Modelling, vol. 33, no. 9, pp. 3693–3704, 2009.
[3] Daisuke Inazu, Tsuyoshi Ikeya, Takuji Waseda, Toshiyuki Hibiya, and Yoshinori Shigihara, "Measuring offshore tsunami currents using ship navigation records," Progress in Earth and Planetary Science, vol. 5, no. 1, p. 38, Aug. 2018.
[4] C. Le Goff, B. Boussidi, A. Mironov, Y. Guichoux, Y. Zhen, P. Tandeo, S. Gueguen, and B. Chapron, "Monitoring the Greater Agulhas Current with AIS Data Information," 2021, preprint.
[5] Simon Benaïchouche, Clément Le Goff, Yann Guichoux, François Rousseau, and Ronan Fablet, "Unsupervised reconstruction of sea surface currents from AIS maritime traffic data using trainable variational models," Remote Sensing, vol. 13, no. 16, 2021.
The main contributions of this study are: 1. it produces a classified TerraSAR-X ScanSAR (TSX SC) time series covering the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition, which provides valuable knowledge of the sea ice conditions surrounding the ice camp; 2. it provides a reference for the incidence angle (IA) dependencies of different ice types in TSX SC data over winter Arctic sea ice; and 3. it shows an effective way of including image texture features in TSX SC sea ice classification.
During the one-year-long MOSAiC expedition from 2019 to 2020, the icebreaker Polarstern drifted with sea ice in the Central Arctic to conduct multi-disciplinary research on the climate system. Satellite data acquisitions from multiple platforms were coordinated to survey the broader sea ice area surrounding the ice camp, providing opportunities for continuous large-scale evaluations. The classification of sea ice types is an important basic representation of sea ice conditions that supports numerous further analyses, e.g. monitoring ice break-up and lead formation, inferring the occurrence of sea ice deformation, studying ice-associated and under-ice ecology, and as input to sea ice and climate models, etc. Among the many Synthetic Aperture Radar (SAR) platforms and data types, TSX SC data provides daily X-band (9.65 GHz) imaging with good spatial resolution (~1.70 to 3.49 m in ground range) and coverage (~100 by 150 km), while providing coverage of Polarstern throughout the expedition, and is therefore a good data source for long-term investigation of sea ice development for MOSAiC.
This study aims to provide a reliable sub-weekly time series of TSX SC sea ice classification maps covering the expedition. To achieve this goal, a classification method considering per-class IA dependencies is used, and IA dependencies of sea ice classes for TSX SC data are presented. Due to the limitation of single polarization (only HH for TSX SC), this study further aims to investigate the feasibility of including image texture features into the classification.
This study analyzes a sub-weekly (2 scenes per week) winter (October to April) TSX SC time series acquired during the expedition. The examination of IA dependencies is achieved using manually derived polygons of ice types, with reference to near-coincident C-band SAR data (Sentinel-1 ExtraWide and Radarsat-2 Fine Quad-pol).
The classification method used in this study is the Gaussian Incident Angle (GIA) classifier [1], which directly incorporates per-class IA dependencies into a Bayesian classifier, replacing the constant mean vector of the Gaussian probability density function with a linearly variable mean. IA dependencies of different ice types in TSX SC (HH) are shown to be weaker than those for C-band data, but their intensity changes with IA are still statistically significant (except for the ‘leads’ class) and are corrected per-class by the GIA classifier. However, only an HH band is available as input to the classification, limiting the classifier’s ability to distinguish between class pairs having similar intensities, most notably rough young ice and multiyear ice (MYI). Also, the same ice types can have vastly different HH intensities due to different surface characteristics, e.g. the presence of remnants of deformation features on MYI, and different growth stages of frost flowers on young ice, etc.
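The core idea of the GIA classifier described in [1] can be summarized with the per-pixel sketch below: each class keeps a Gaussian likelihood whose mean intensity varies linearly with incidence angle instead of being constant. The per-class parameters (a, b, sigma) are hypothetical here; in practice they would be fitted from the training polygons.

```python
# Minimal per-pixel Gaussian classifier with an IA-dependent class mean,
# mu(theta) = a + b * theta (slope b captures the per-class IA dependency).
import numpy as np

def gia_classify(intensity_db, incidence_deg, classes):
    """classes: dict of name -> (a, b, sigma). Returns the most probable class."""
    best, best_logp = None, -np.inf
    for name, (a, b, sigma) in classes.items():
        mu = a + b * incidence_deg                    # IA-dependent class mean
        logp = -0.5 * ((intensity_db - mu) / sigma) ** 2 - np.log(sigma)
        if logp > best_logp:
            best, best_logp = name, logp
    return best

# Example with made-up parameters: intensity -18 dB at 35 deg incidence.
classes = {"leads": (-24.0, 0.00, 2.0), "young_ice": (-10.0, -0.20, 2.5),
           "MYI": (-8.0, -0.15, 2.0)}
print(gia_classify(-18.0, 35.0, classes))
```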
To remedy this issue, this study proposes a statistically based workflow to include Gray-Level Co-occurrence Matrix (GLCM) texture features into the classification. In the logarithmic (dB) domain, TSX SC HH GLCM textures generally have a linear relationship with IA, allowing for direct utilization as features in the GIA classifier. The optimal window size and set of texture features to use are statistically derived to best separate ambiguous class pairs using two commonly used separability measures: Transformed Divergence (TD) and the Jeffries-Matusita Distance (JM). A total of 18 GLCM texture features are analyzed, and a rating system is developed to find the set of texture features having the best separability between class pairs at the smallest possible window sizes, while having minimal inter-correlations. Finally, two approaches are tested for their performance in reducing isolated artifacts created by the inclusion of texture features: (a) a local majority filter is applied after the classification; (b) within the classification process, a Markov Random Field (MRF) de-noising component is added to alter the posterior class probabilities yielded from the GIA classifier.
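The texture-feature step can be illustrated with the short sketch below, which extracts a few GLCM properties for one image window after quantizing the HH intensities to a small number of gray levels. The window size, quantization and feature subset are placeholders rather than the statistically selected values described above.

```python
# Sketch of GLCM texture extraction for one image window (values in dB).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window_db, levels=32):
    # Quantize the window to `levels` gray levels before building the GLCM.
    lo, hi = np.nanmin(window_db), np.nanmax(window_db)
    q = np.round((window_db - lo) / max(hi - lo, 1e-6) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    # Average each property over the two co-occurrence directions.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```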
Qualitative (visual examination of classification maps) and quantitative (using validation polygons) assessments show that this workflow brings considerable improvements in classification performance compared to classification of the HH channel alone. TSX SC classification results utilizing texture features are similar to C-band SAR classifications using HH and HV channels, demonstrating that the inclusion of texture features is essential in the classification of single-polarization TSX SC data. The MRF de-noising approach yields significantly better results than post-classification local majority smoothing in terms of preserving detailed class boundaries while removing artifacts from the texture calculation.
Following the classification, a time series of areal fractions of each ice type is produced from the classification maps, providing a general assessment of the relative changes of ice types through the study period. The time series used in this study covers the freeze-up period and the core of the winter season, coincident with in-situ acquisitions of autonomous ice buoys, ice thickness measurements and airborne laser scanner (ALS) data in the MOSAiC expedition. Accordingly, the classified time series is compared to sea ice deformation area estimates derived from SAR-based drift estimates, and to in-situ and airborne ice thickness and roughness measurements. This allows us to both assess the classification results and infer the larger-scale development of ice deformation features, including leads and ridging areas, through the time series. Altogether, this study provides a valuable perspective on the changes in sea ice conditions surrounding the MOSAiC ice camp, and an important basic dataset for use in future studies on MOSAiC sea ice.
[1] Lohse, J., Doulgeris, A.P. and Dierking, W., 2020. Mapping sea-ice types from Sentinel-1 considering the surface-type dependent effect of incidence angle. Annals of Glaciology, pp.1-11.
Trends in the extent and properties of Arctic and Antarctic sea ice packs as well as Arctic seasonal snow cover alter the surface albedo of the polar regions, causing a disturbance in the shortwave top-of-atmosphere energy balance through the snow and ice albedo feedback (SIAF). Recent progress in satellite-based remote sensing has opened new avenues for quantifying SIAF through the combination of multidecadal surface albedo estimates with radiative kernels derived from CERES or the CloudSat-CALIPSO instrument pair. Being primarily based on observations, these methods and data offer a cohesive look into the development of SIAF over the past decades over both polar cryospheres.
Here, we present an analysis of surface albedo estimates from the CM SAF CLARA-A2.1 data record, derived from AVHRR and spanning 1982-2018, combined with the aforementioned radiative kernels over both the Arctic and the Antarctic. Trends in SIAF are presented and discussed, paying particular attention to the SIAF impact of the recent Antarctic sea ice losses in the 2016-2018 timeframe. While partial recovery has since been observed in the Antarctic ice pack, the significance of the ice losses in 2016 underlines the importance of the Antarctic sea ice to the global cryosphere.
The complete results of the study are available at:
Riihelä, A., Bright, R.M. & Anttila, K. Recent strengthening of snow and ice albedo feedback driven by Antarctic sea-ice loss. Nat. Geosci. 14, 832–836 (2021). https://doi.org/10.1038/s41561-021-00841-x
Pressure in ice and pressure ridges (individual narrow linear features consisting of broken ice fragments above and below the water surface) pose the main threat to ships and industrial facilities in the Arctic. From a geophysical perspective, deformation features such as ridges and leads determine the ice mass balance. Ice deformation is also a big challenge in sea ice modelling and is crucial for the development of sea ice rheology.
Our study aims at developing a robust detection of ridges and ridge clusters from synthetic aperture radar (SAR) images acquired by Sentinel-1. Due to the overlap of backscattering intensities from ridged and level ice, the capability to distinguish unambiguously between them by simple cut-off criteria is very limited. Deformed features like new leads with frost flowers, edges of leads, and brash ice may have a SAR signature similar to that of ridged ice. Because of these ambiguities, and because SAR backscattering from sea ice is not fully understood, improving the robustness of the identification of sea ice deformation features depends on using complementary data.
We propose a synergistic integration of SAR image texture characteristics and ice drift data to extend the potential and usage efficiency of Sentinel-1 data for the retrieval and modelling of parameters characterizing the deformation state of sea ice. Our study addresses both practical and scientific problems. We examine the applicability of machine learning for the detection and modelling of ridges and ridge clusters in first-year pack ice. In particular, our objective is to develop a forward model for what we call the "microstructure" of a SAR image. "Image microstructure" is a statistical characterization of pixel intensities with respect to their spatial location and neighbourhood. The model is a quantitative link between three components: sea ice deformation, ridges, and SAR image textural properties, based on an unprecedented combination of spatial (image texture) and temporal (drift, deformation) analyses of the SAR signal. The forward model for SAR image microstructure is used in the assimilation of sea ice ridges into a state-of-the-art sea ice model with realistic rheology, neXtSIM.
Satellite altimetry can be used to detect sea surface signatures of high-frequency ocean phenomena, e.g. Internal Solitary Waves (ISWs) and ocean fronts, which are especially evident in the radar backscatter (sigma0) and in some geophysical parameters such as the Significant Wave Height (SWH). ISWs in particular change the sea surface by alternating rough and slick sections, which affect typical geophysical parameters, e.g. by decreasing or increasing the radar backscatter. The study of these shorter-scale ocean phenomena could benefit from tandem missions such as Jason-3 and Sentinel-6. Sentinel-6 was launched in November 2020 and, during the tandem mission, it is positioned on the same orbit as Jason-3, lagging 30 seconds behind it. After the recent decision of the agencies to extend the tandem flight configuration until the end of March 2022, we will soon be in a position to exploit more than one year of nearly collocated S6-MF/J3 datasets, and to compare Sentinel-6 data with the same parameters from the conventional Jason-3 radar altimeter, in order to find synergy cases between the two satellites and to discover patterns concerning ISWs and sea surface signatures in SWH. The synergy cases can therefore be used to detect roughness patterns with different signatures in the two data sets and to find systematic differences between the calibrations of the two altimeters.
In this study, we select five regions of the world's oceans, namely the Banda Sea next to Indonesia, the South China Sea, the Andaman Sea, the tropical Atlantic Ocean off the Amazon shelf, and the Sulu Sea next to the Philippines. In each region, we check for possible cases of ISWs in selected ground tracks. Furthermore, comparisons with images from various optical sensors, such as the Ocean and Land Colour Instrument (OLCI) onboard Sentinel-3A, MSI onboard Sentinel-2, MODIS-AQUA and MODIS-TERRA, as well as SAR images from Sentinel-1, were made to provide additional evidence of the ISW surface signatures, namely their rough and slick patterns. Note that sea surface manifestations of ISWs have been detected recently in high-resolution radar altimeter data from the Jason mission (Magalhaes and da Silva, 2017) and from the SAR radar altimeter (SRAL) of the European Sentinel-3 satellite, which achieves high spatial resolution in the along-track direction by using SAR processing (Santos-Ferreira et al., 2018, 2019; Zhang et al., 2020). However, the effects of enhanced surface wave breaking on SRAL backscattered signals are still poorly documented. In contrast to Sentinel-6, Jason-3 does not provide equal precision in the 20 Hz SWH measurements, and here we can find some differences in how each phenomenon appears in the along-track record. We also demonstrate that the passage of an ISW can create patterns of increased and decreased SWH. The sensitivity of the two radar altimeter instruments to ISW surface manifestations is discussed.
Copernicus Sentinel-3 Sea (and sea-Ice) Surface Temperature: product status, evolutions and projects
Anne O’Carroll, Gary Corlett, Igor Tomazic
EUMETSAT, Eumetsat-allee 1, 64295 Darmstadt (Germany)
Sea surface temperature (SST) is a fundamental physical variable for understanding, quantifying and predicting complex interactions between the ocean and the atmosphere. Such processes dictate how heat from the sun is redistributed across the global oceans, directly impacting large- and small-scale weather and climate patterns.
The first Copernicus Sentinel-3 satellite was launched on 16th February 2016 and the second on 25th April 2018. One of the mission's main objectives is to observe very accurate Sea Surface Temperature (SST) from the Sea and Land Surface Temperature Radiometer (SLSTR). These highly accurate SSTs provide a reference satellite SST dataset and time series for other satellite SST missions and are important for climate monitoring.
Operational SLSTR SST products have been distributed from the EUMETSAT marine centre since 5th July 2017. EUMETSAT performs ongoing validation activities for SLSTR SST in coordination with the Sentinel-3 validation team, and real-time monitoring is available at metis.eumetsat.int. Validation results show that the products perform extremely well, and the dual-view SSTs are recommended for use as a reference SST source.
The ongoing validation activities are important for assessing and maintaining SLSTR SST product quality. In addition to inter-comparisons with other satellite SSTs, key components are collocations and analyses with drifting-buoy SSTs. A Copernicus-funded EUMETSAT project called 'Towards Fiducial Reference Measurements (FRM) of Sea-Surface Temperature by European Drifters' (TRUSTED) is now in its fifth year. So far, 150 high-resolution drifting buoys (HRSST) based on the SVP-BRST design have been deployed. Activities continue to assess and validate these reference buoys as FRM for SLSTR, in coordination with the HRSST Task Team of the Group for High Resolution Sea Surface Temperature (GHRSST). Activities have also begun on the requirements, design and prototype of sea-ice surface temperature drifting buoys needed for the validation of the Copernicus satellite sea-ice surface temperature products that are in development and preparation for operations in the next few years.
EUMETSAT has begun activities towards a revised and improved algorithm for SLSTR SST, with the intention of an operational implementation of the SLSTR day-2 SST in 2024. This will include improvements to the Bayesian cloud screening in coastal zones and an operational implementation of sea-Ice Surface Temperature (sea-IST) from SLSTR. A demonstration SLSTR sea-IST product is now available through WEkEO for interested expert users for assessment and feedback, with further improvements on cloud screening and validation in progress.
Further ongoing projects and evolutions relating to marine Surface Temperature at EUMETSAT will also be presented including: the activities of GHRSST; inter-comparisons of surface temperatures with FRM over Lake Constance; relevant evolutions of the SLSTR level-1 products and routine level-1 activities to ensure product quality for level-2 is maintained.
The microwave measurements of the ocean and marine cryosphere to be carried out by the Copernicus Imaging Microwave Radiometer (CIMR) satellite will contain information about a number of sea ice and snow variables. In order to better assess the potential of retrieving these variables from the future CIMR measurements we conducted a number of sensitivity experiments with the Snow Microwave Radiative Transfer Model (SMRT).
We examined the ability of SMRT to simulate snow and ice emissivity signatures within reasonable ranges of input variables. The analysis was conducted for first-year ice (FYI) and multiyear ice (MYI) individually, and for horizontal (H) and vertical (V) polarizations respectively, at frequencies between 6.9 and 36.5 GHz.
The sensitivity studies are conducted using a two-layer model consisting of an isotropic layer of ice with an isotropic layer of snow on top. Following these, a number of multi-layer experiments were conducted as well.
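For illustration, a two-layer configuration of this kind could be set up as sketched below, assuming the open-source SMRT Python package; all layer values are placeholders rather than the tuned values reported later, and the frequency/angle mimic an AMSR2-like channel.

```python
# Illustrative two-layer SMRT setup: one snow layer over one first-year ice layer.
from smrt import make_snowpack, make_ice_column, make_model, sensor_list

snow = make_snowpack(thickness=[0.2],                  # 20 cm of snow
                     microstructure_model="exponential",
                     density=[320.0], temperature=[255.0],
                     corr_length=[0.1e-3])

ice = make_ice_column(ice_type="firstyear",
                      thickness=[1.5], temperature=[265.0],
                      salinity=[8.0e-3],               # kg/kg, i.e. ~8 g/kg
                      microstructure_model="exponential",
                      corr_length=[0.3e-3],
                      brine_inclusion_shape="spheres",
                      add_water_substrate=True)        # ocean below the ice

medium = snow + ice
sensor = sensor_list.passive(36.5e9, 55.0)             # frequency [Hz], angle [deg]
model = make_model("iba", "dort")                      # IBA scattering + DORT solver
result = model.run(sensor, medium)
print(result.TbV(), result.TbH())                      # simulated brightness temperatures
```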
To tune the model, we use a dataset of collocated data from Operation IceBridge (OIB) in the period 2013-2019, brightness temperatures measured by the Advanced Microwave Scanning Radiometer 2 (AMSR-2), C-band backscatter from the Advanced Scatterometer (ASCAT) and ERA-Interim weather reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF).
Based on the measured OIB and AMSR-2 data, average values of known variables (surface temperature, snow thickness, ice thickness) are determined and used as input to the model. The average value of unknown variables (snow density, correlation lengths in ice and snow, surface roughness and ice porosity) are estimated using the sensitivity studies and by comparing the SMRT model output to the AMSR-2 brightness temperatures (TB). It is found that the model with reasonable values for the ‘unknown’ variables can predict the measured TBs well, with average absolute biases of 5.0K for MYI V, 1.98K for MYI H, 0.55K for FYI V and 0.96K for FYI H.
Subsequently, an individual point analysis is conducted. It is found that a one-layer model predicts individual points well for FYI. For MYI, the one-layer model does not capture the physical processes properly: the analysis showed linear trends of modelled TBs with sea ice thickness and snow depth that are not seen in the measured data. It is found that using a model with 5 layers of snow and 15 layers of ice provides significantly better results than the 1-layer model, due to its ability to capture temperature variations within the ice and snow.
Large variations in TB for MYI due to porosity, correlation length and salinity pose a significant problem, namely that all three variables are poorly known and difficult to measure. This gives the flexibility to achieve good correlations between measured and modelled TBs, but whether the correlations arise from using the right variables is still uncertain.
Obtaining more certainty in the modelling results will therefore require more knowledge about the porosity, salinity and correlation length of MYI. During the MOSAiC expedition, significant amounts of measurements of MYI salinity and porosity were conducted. Examination of the results from this expedition will provide us with more knowledge that can be used to narrow down the ranges of the unknown variables as the next step in this research.
The increasing number of open water areas and thinning sea ice have significant impacts on the energy exchange between the atmosphere and the ocean and on sea ice dynamics in the Arctic Ocean. In particular, polynyas or leads are not permanently open, but are partially frozen over and covered by a thin layer of ice up to a thickness of about 25 cm. The surface temperature of this so-called thin ice lies between that of open water and that of thicker sea ice surfaces, which modulates the heat exchange between the ocean and the atmosphere. This influence of thin ice on the heat fluxes complicates, for example, the simulation of climate models or forecasting systems. In order to support model simulations and ship operations, various Earth Observation sensors are used to routinely monitor the thin ice surface extent, primarily using passive microwave or side-looking SAR missions.
In the present investigation, a classification scheme is presented which detects open water, thin ice, and thicker sea ice surfaces using an unsupervised machine learning approach based solely on CryoSat-2 Synthetic Aperture Radar (SAR) altimeter reflections (waveforms). The unsupervised classification approach identifies similar patterns among a subset of randomly collected waveforms and groups them into a specific number of classes without the use of training data. The classes are then assigned to different surface conditions. This approach was primarily designed to reliably identify water openings within the sea ice cover (Müller et al., 2017). However, extended investigations and improvements in the classification algorithm additionally enable the detection of thin ice surfaces.
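The sketch below illustrates the general flavour of such an unsupervised grouping: simple shape descriptors are derived from each waveform, a random subset is clustered into a fixed number of classes, and the class-to-surface assignment (open water / thin ice / thicker ice) is done afterwards by the analyst. The feature set, number of classes and clustering method are illustrative assumptions, not necessarily those of Müller et al. (2017).

```python
# Conceptual sketch: k-means clustering of CryoSat-2 waveform shape descriptors.
import numpy as np
from sklearn.cluster import KMeans

def waveform_features(wf):
    """wf: 1-D power waveform. Returns simple shape descriptors."""
    total = wf.sum()
    peakiness = wf.max() / total                    # pulse peakiness
    width = (wf > 0.5 * wf.max()).sum()             # samples above half power
    trailing = wf[np.argmax(wf):].sum() / total     # trailing-edge energy fraction
    return [peakiness, width, trailing]

def classify_waveforms(waveforms, n_classes=4, sample=10000, seed=0):
    feats = np.array([waveform_features(w) for w in waveforms])
    rng = np.random.default_rng(seed)
    subset = feats[rng.choice(len(feats), size=min(sample, len(feats)), replace=False)]
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(subset)
    return km.predict(feats)                        # class label per waveform
```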
The results of the classification approach are compared to thin ice thickness retrievals from MODIS thermal imagery. Here, we make use of the higher spatial resolution of MODIS compared to the frequently used passive-microwave approaches, as well as its better spatial coverage compared to available SAR systems. The thin-ice thickness data are computed from MODIS ice-surface temperatures derived from the standard MOD/MYD02 radiances using a surface-energy-balance model together with atmospheric reanalysis data from ERA5, and overpasses are screened manually for cloud-cover artefacts and suitability (Paul et al., 2015). Moreover, radar images from the ESA Copernicus mission Sentinel-1 are used to support the evaluation of the MODIS and CryoSat-2 thin ice comparisons. In these comparisons, special attention is paid to minimal acquisition time gaps of at most 30 minutes and large spatial overlap areas, in order to reduce the impact of drifting sea ice, changing thin-ice conditions and strong air temperature differences.
The presented results show how the monitoring of polar oceans can be improved, and contribute to the knowledge about the Arctic ice cover, specifically by observing the full sea ice thickness distribution. Moreover, the CryoSat-2 classification can support the development of improved waveform retracker algorithms, enabling a more reliable estimation of the sea ice freeboard or sea level in the polar seas.
To sum up, our contribution includes the presentation of:
• Thin ice detections by an unsupervised classification approach using CryoSat-2 waveforms
• Waveform-derived characteristics w.r.t. a changing thin ice thickness
• Thin ice thickness retrievals gathered from MODIS thermal imagery
• Visual and quantitative comparisons of both datasets close in time and space
Paul, S., Willmes, S., and Heinemann, G.: Long-term coastal-polynya dynamics in the southern Weddell Sea from MODIS thermal-infrared imagery, The Cryosphere, 9, 2027–2041, https://doi.org/10.5194/tc-9-2027-2015, 2015
Müller, F. L., Dettmering, D., Bosch, W., and Seitz, F.: Monitoring the Arctic Seas: How Satellite Altimetry Can Be Used to Detect Open Water in Sea-Ice Regions, Remote Sensing, 9, https://doi.org/10.3390/rs9060551, 2017
Snow on Antarctic sea ice is an integral part of the polar climate system. Unlike in the Arctic, the combination of heavy snowfall and thin ice around Antarctica results in widespread flooding, pushing the sea ice freeboard below sea level, thus provoking snow-ice formation and further increasing the sea ice mass. In addition, precipitation and harsh atmospheric conditions further complicate the snow properties, such as grain size and snow density. Such complex snow stratigraphy significantly alters (1) the snow thermal conductivity, and hence modulates the ice mass and basal thermodynamic growth, and (2) the optical properties, or albedo, of the top of the snowpack, thus determining the absorption of solar radiation and the amount of under-ice biota. Here we investigate the effects of snow stratigraphy on the snow thermal properties as seen by L-band microwave satellites, in order to better represent the surface brightness temperature (TB). To do so, we combine multiple snow depth and sea ice thickness datasets in the Weddell Sea: (1) the ice mass-balance buoy (IMB) and snow buoy 2016S31 during 2016-2017, and PS81/506 and PS81/517 during 2013-2014; (2) NASA's Operation IceBridge (OIB) airborne campaign during 2010-2016; and (3) ship-based measurements from the Antarctic Sea Ice Processes and Climate program (ASPeCt) during 2010-2016. Notably, the diagnostic radiation model is coupled for the first time with SNOWPACK, a reanalysis-forced, physics-based prognostic model of multi-layer snow stratigraphy. In addition, air temperature from buoys and sea ice concentration are also incorporated into this joint scheme. We first determine the snow stratigraphy along the three buoy trajectories using SNOWPACK and identify snowmelt-refrozen layers caused by massive synoptic snowfall events. The appearance of these layers coincides with sudden drops in satellite-observed TB from the Soil Moisture and Ocean Salinity (SMOS) and Soil Moisture Active Passive (SMAP) satellites, indicating the substantial effects of flooding and brine drainage from ice into snow on the surface thermal properties in winter. The dielectric properties and permittivity of the saline melt layer are then parameterized and added to the forward TB simulation model, improving the representation of surface conditions in the Weddell Sea and increasing the R2 and best-fitting slope when compared to the observations from OIB and ASPeCt. Our results highlight the importance of detailed melt-refrozen layers and their parameterization in modelling TB over sea ice. This improved forward TB model enables better retrievability of sea ice thickness from SMOS, SMAP and the forthcoming Copernicus Imaging Microwave Radiometer (CIMR) L-band missions around Antarctica. However, due to the limited number of ice-related observations in the Southern Ocean, especially in winter, we argue that more extensive expeditions are urgently needed before more accurate snow properties can be configured and introduced into retrieval algorithms.
Sea ice concentration, extent and thickness are a major control on heat exchange between the ocean and atmosphere and also present a significant challenge to shipping in high-latitude regions. In areas where time series of spaceborne passive microwave observations exist, regional-scale analysis of sea ice concentration can be readily achieved. However, the relatively coarse (~3-25 km) spatial resolution of these data does not permit detailed investigations of sea ice conditions, including open water (lead) development. Here, we present a new methodology utilising moderate-resolution satellite imagery to characterise daily- to decadal-timescale sea ice cover over regional scales. Cloud contamination is minimised by exploiting daily mosaics of NASA MODIS (Terra and Aqua) and NASA/NOAA VIIRS (Suomi-NPP) imagery, and through averaging the resultant sea ice concentration products over sub-weekly to multi-decadal timeframes. This technique yields a significantly finer depiction of sea ice conditions than that afforded by satellite passive microwave data, and is shown to be in excellent agreement with high-resolution, all-weather imaging ESA/Copernicus Sentinel-1a/b and DLR TerraSAR-X synthetic aperture radar (SAR) observations using examples from the Weddell Sea, Antarctica. Our technique provides a framework for quantifying daily- to multi-decadal-scale sea ice information, bridging the gap between coarse-resolution regional passive microwave data and localised, high spatial resolution SAR and optical imagery.
Arctic sea ice is changing fast, and its retreat and thinning have a great impact on Arctic heat fluxes, ocean currents and ecology. The ice is thinning and the first-year ice area is growing in extent. L-band (1.4 GHz) radiometry can measure the sea ice thickness when the ice is thin (in general less than 0.7 m, when the ice is salty), making it a complementary measurement to radar altimeters, which can measure thickness with good accuracy mainly for thick ice.
During the MOSAiC expedition, we deployed the mobile ARIEL L-band radiometer to study the sensitivity of the L-band to different sea ice parameters (snow and ice depth, ice salinity, temperature of the ice and snow, etc.), and to validate current emissivity models.
ARIEL is sensitive to different types of surfaces, i.e., ice, leads and melt ponds, and is very sensitive to ice thicknesses up to 2 m when the salinity of the ice is low (below 2 PSU). We also show that the measurements follow the results of the Burke model when the in-situ snow and ice measurements are used as input. The correlation between ARIEL and Burke brightness temperatures is around 0.8, and the error between them is about 5%, depending on the measurement transect analysed. The dependence on snow depth is not evident, even though some dependence can be discerned. A quantitative analysis is also presented for the thin ice observations on leads.
Therefore, the ARIEL radiometer is an excellent instrument to perform field campaigns and to gain knowledge on the sensitivity of the L-band radiometry to the ice and snow parameters. However, more measurements are needed to advance and improve the emissivity model of the ice and snow at that band.
The insights gained with these field data will make it possible to enhance the sea ice thickness retrievals from satellite L-band radiometers (SMOS and SMAP, and the future CIMR mission) and therefore to gain insight into Arctic sea ice thickness changes at a larger scale.
Continuous advancements in satellite technology are improving the resolution of sea ice altimetry from space. Since October 2010, the European Space Agency (ESA) CryoSat-2 (CS2) radar altimeter has surveyed the Arctic Ocean up to 88°N. The synthetic aperture radar (SAR) altimeter on-board CS2 has a footprint along the flight direction of ~300 m, which is an order of magnitude improvement compared with conventional radar altimeters. A novel data processing technique for SAR altimeters known as fully-focused SAR (FF-SAR) processing further improves the along-track resolution of CS2, down to just a few tens of meters. In September 2018, NASA’s ICESat-2 (IS2) laser altimeter was launched. IS2 provides observations of the Polar regions with the same latitudinal coverage as CS2, and has an ~11 m pulse footprint.
Detailed and consistent observations of small-scale sea ice topography (such as floe length, surface roughness, and melt pond cover) are needed to understand changes in the Arctic sea ice cover, and implications for our global climate. Providing Arctic-wide observations of these features, which can vary significantly along an individual sea ice floe, still remains a major challenge. Here we use high-resolution FF-SAR CS2 and IS2 observations to survey Arctic sea ice in detail not previously achievable from space. We present preliminary results of the floe length distribution and melt pond fraction [Tilling et al., 2020] along-track from both satellites. We will address the strengths and drawbacks of each instrument for resolving these features, and potential associated biases. Where they are available, Cryo2Ice (near-coincident CS2 and IS2) data will be used to intercompare the high-resolution topography products.
Tilling, R., Kurtz, N. T., Bagnardi, M., Petty, A. A., & Kwok, R. (2020). Detection of melt ponds on Arctic summer sea ice from ICESat-2. Geophysical Research Letters, 47, e2020GL090644. https://doi.org/10.1029/2020GL090644
In February 2018, a large polynya formed to the north of Greenland, which induced extensive thermodynamic sea ice growth and, with its closing, dynamic ridging. Due to the large size of the polynya, it presents a unique chance to study the capabilities of various thickness retrieval methods, including L-band radiometry as well as radar altimetry. In this study we examine in detail the performance of these retrievals during and after the development of the polynya, including: (1) L-band passive radiometry (SMOS or SMOS/SMAP); (2) radar altimetry with CryoSat-2; and (3) a prognostic thermodynamic model. Specifically, we examine the thickness retrieval over the refrozen polynya as well as over the multiyear ice (MYI) to the north of the polynya, which experienced warming and snowfall with the polynya event. We find that the thick and ridged new ice within the polynya poses a particular challenge, due to the lack of sensitivity of the L-band brightness temperature as well as of the CryoSat-2 waveform classification. We also find reduced penetration of the CryoSat-2 signal into the snow cover over MYI, probably due to snow surface/volume scattering.
Sea ice concentration (SIC) is an essential ice parameter for ice navigation, for assimilation into numerical ice and weather models, and, in the form of time series, for climate studies. At FMI, SIC estimation algorithms based on SAR data have been developed. A Baltic Sea SIC estimation algorithm utilizing dual-polarized (HH/HV) C-band SAR imagery and AMSR2 microwave radiometer data, applying a Multilayer Perceptron (MLP) neural network, was introduced in Karvonen (2017). Recently, sea ice parameter estimation, especially SIC, and sea ice classification using convolutional neural networks (CNN) have gained popularity, and good results have been achieved. At FMI, a CNN has also been applied to SAR imagery to estimate Baltic Sea SIC in Karvonen (2021), and the results were very promising. Synthetic SAR patches with an exactly known SIC were used in training the CNN. The synthetic SAR patches are generated by combining SAR patches of 100% open water and 100% sea ice, based on the FMI ice charts. The open water and sea ice patches are selected, based on the ice chart SIC, such that they very probably consist entirely of either open water or sea ice, e.g. avoiding patches near the boundaries of ice chart polygons with different assigned SIC values. By utilizing synthetic SIC training data, the known uncertainties of ice chart SIC in the mid-range SIC values (10-90%) can be avoided.
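As a rough illustration of how such synthetic patches with exactly known SIC can be composed, the sketch below blends a pure open-water and a pure sea-ice patch using a random pixel mask; the masking scheme and names are assumptions for illustration, not the FMI implementation.

```python
import numpy as np

def synthetic_sic_patch(water_patch, ice_patch, sic, rng=np.random.default_rng()):
    """Blend a 100% open-water and a 100% sea-ice SAR patch into a synthetic
    patch with an exactly known sea ice concentration (sic, between 0 and 1)."""
    assert water_patch.shape == ice_patch.shape
    n_pix = water_patch.size
    n_ice = int(round(sic * n_pix))
    mask = np.zeros(n_pix, dtype=bool)
    mask[rng.choice(n_pix, n_ice, replace=False)] = True   # pixels taken from the ice patch
    mask = mask.reshape(water_patch.shape)
    return np.where(mask, ice_patch, water_patch)

# Usage (hypothetical patches selected from ice-chart polygons known to be pure):
# patch = synthetic_sic_patch(water_hh, ice_hh, sic=0.5)   # exactly 50% SIC training sample
```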
Here the CNN model of Karvonen (2021), using SAR data alone as its input, is extended to include the AMSR2 microwave radiometer (MWR) brightness temperature information in the form of polarization and gradient ratios. As noted in Karvonen (2017), feeding polarization and gradient ratios into a neural network speeds up its convergence in SIC estimation.
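For reference, polarization and gradient ratios are standard normalized brightness-temperature differences; a minimal sketch follows (the channel pairing shown is only an example, not necessarily the one used here).

```python
import numpy as np

def polarization_ratio(tb_v, tb_h):
    """PR(f) = (TbV - TbH) / (TbV + TbH) at a single frequency."""
    return (tb_v - tb_h) / (tb_v + tb_h)

def gradient_ratio(tb_1, tb_2):
    """GR(f1, f2) = (Tb(f1) - Tb(f2)) / (Tb(f1) + Tb(f2)), same polarization."""
    return (tb_1 - tb_2) / (tb_1 + tb_2)

# Example with hypothetical AMSR2 channels (brightness temperatures in kelvin):
# pr37 = polarization_ratio(tb_37v, tb_37h)
# gr_37_19 = gradient_ratio(tb_37v, tb_19v)
# mwr_features = np.stack([pr37, gr_37_19], axis=-1)  # appended to the CNN input
```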
Dual-polarized (HH/HV) Sentinel-1 extra wide (EW) swath Ground Range Detected Medium resolution (GRDM) SAR data from the Baltic Sea ice season 2018-2019 were used in this study. Of the total of 650 SAR images, 30 were used for generating the training data and the remaining 620 were used for testing the algorithm. The AMSR2 data temporally closest to the SAR imagery were used as the MWR input. The estimation results were compared to both the synthetic SIC values and the FMI ice chart SIC. Comparisons to the earlier FMI SIC algorithms are also provided. The results indicate that a CNN-based algorithm utilizing both SAR and MWR data as its inputs improves SIC estimation over the Baltic Sea compared to using SAR data alone. Such high-resolution SIC estimates are also well suited for operational purposes, both for fully automatic operations and to support Baltic Sea ice charting. The algorithm is also applicable over other ice-covered waters (Arctic and Antarctic).
References:
J. Karvonen, Baltic Sea Ice Concentration Estimation Using SENTINEL-1 SAR and AMSR2 Microwave Radiometer Data, in IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 5, pp. 2871-2883, May 2017, doi: 10.1109/TGRS.2017.2655567.
J. Karvonen, Baltic Sea Ice Concentration Estimation From C-Band Dual-Polarized SAR Imagery by Image Segmentation and Convolutional Neural Networks, in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3097885.
Satellite-based sea ice observation has received intense attention over the last few decades, recently also at higher resolution thanks to the widespread availability of open-access Synthetic Aperture Radar images. Besides the scientific interest in understanding the detailed geophysical properties of sea ice and its interaction with microwave signals, the operational aspect of high-resolution (~100 m scale) ice charting is becoming important due to increasingly ice-free Arctic waters and the resulting growth in navigational possibilities. At present, widely used daily pan-Arctic sea ice concentration maps are derived from space-borne microwave radiometer data with a typical spatial resolution of dozens of kilometers, which is rather inadequate for navigational purposes. In recent years, Sentinel-1a/b and, more recently, the Radarsat Constellation Mission (RCM) have been providing unprecedented spatial and temporal coverage over the entire Arctic at C-band with their respective ‘Wide Swath’ modes. Despite proven AI-based sea ice classification achievements on ‘Wide Swath’ mode images, where training data were generally derived manually due to the scarcity of ground-truth information, a fully automated operational classifier has not yet been established, due to the large variation in the geometry and bulk properties of sea ice and to incidence-angle-induced effects. Here we propose a methodology for basin-wide (e.g. Baltic Sea, Eastern Greenland Sea) sea ice type retrieval using Sentinel-1 (EW, HH-HV) data, in which we take advantage of the vast archive of existing operational ice charts to train an AI-based algorithm.
The processing chain accounts for thermal/systematic noise and incidence-angle-related effects. The proposed supervised classification algorithm consists of the following steps. The first step comprises pre-processing (noise removal, calibration and reprojection) and texture-based (GLCM) feature extraction to build a 3-D array of 27 layers consisting of HH and HV backscatter, incidence angle and GLCM-based textural features (from both HH and HV). In the second step we collect the spatially and temporally overlapping ice charts from two different operational services, the US National Ice Center for Arctic-wide ice charts and BSH for Baltic Sea ice charts, which are provided in shapefile and S-411 format respectively. For generating the training labels, we utilize the ‘Stage of Development’ information present in the ice charts. To reduce complexity, we merged some closely related ‘Stage of Development’ types from the original charts to produce five classes: Open Water/Ice Free/Leads, New Ice, Young Ice, First Year Ice and Old Ice. After spatially and temporally aligning the SAR images and rasterized ice charts, we extracted a comprehensive amount of training and testing data, a total of 100,000 points, of which 80% were used for training and 20% for testing (mutually exclusive). To increase the reliability of the training and testing dataset, we only use ice charts produced within plus or minus 3 days of the SAR image acquisition.
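A minimal sketch of the per-window GLCM texture extraction used to build such a feature stack is given below, based on scikit-image; the window size, grey-level quantisation and property set are assumptions rather than the exact settings of the chain described above.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

PROPS = ("contrast", "homogeneity", "energy", "correlation", "dissimilarity", "ASM")

def glcm_features(window, levels=32):
    """GLCM statistics for one quantised image window (integer values in [0, levels))."""
    glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean() for p in PROPS])

# For each pixel, such statistics would be computed over a local neighbourhood
# (e.g. a 21x21 window) for both HH and HV, then stacked with the calibrated
# backscatter and incidence angle to form the 27-layer feature array.
```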
The extracted training dataset is then used to train a TensorFlow based classifier. As the local neighborhood textural features are already calculated for each pixel in step 1, a shallow neural network with only one hidden layer is all that is required to classify each pixel from the 27 feature values. The implementation uses a Bayesian architecture, replacing the weights and biases of a traditional neural network with probability distributions, and making use of the efficient in-built Monte Carlo estimator. Training is achieved by minimizing the sum of the negative log-likelihood (in this case, the same as the cross-entropy loss) and the Kullback-Leibler divergence of the prior and posterior distributions for the weights and biases. The network output is probabilistic, driven by the internal variable distributions. The result is similar to averaging the results of a large ensemble of neural networks. When applying the classifier, the posterior is constructed using repeated passes of each pixel as input. This method has advantages over an equivalent deterministic network. The posterior distribution over the classes provides the means to access the uncertainty on the class determination for each pixel. Furthermore, the posterior for data beyond the scope of the training dataset has a very large variance allowing pixels that do not belong to any of the five classes to be flagged as such. The ice charts produce imperfect pixel labels; however, this problem is effectively countered by the limited capacity of the neural network and overall size of the training dataset.
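A minimal sketch of such a shallow Bayesian classifier is given below, assuming TensorFlow Probability's DenseFlipout layers; the hidden-layer width, KL scaling and number of Monte Carlo passes are illustrative assumptions, not the exact configuration described above.

```python
import tensorflow as tf
import tensorflow_probability as tfp

N_FEATURES, N_CLASSES, N_TRAIN = 27, 5, 80_000  # as described above
kl_scale = lambda q, p, _: tfp.distributions.kl_divergence(q, p) / N_TRAIN

# Shallow Bayesian classifier: weights and biases are probability distributions, and
# each DenseFlipout layer adds its (scaled) KL(posterior || prior) term to the model
# losses, which Keras adds to the cross-entropy loss during training.
model = tf.keras.Sequential([
    tfp.layers.DenseFlipout(32, activation="relu",
                            kernel_divergence_fn=kl_scale,
                            input_shape=(N_FEATURES,)),
    tfp.layers.DenseFlipout(N_CLASSES, kernel_divergence_fn=kl_scale),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# At prediction time, repeated stochastic forward passes build a posterior over classes;
# its spread gives a per-pixel uncertainty, and very wide posteriors can flag pixels
# outside the scope of the training data.
# logits = tf.stack([model(x, training=True) for _ in range(50)])
# probs = tf.nn.softmax(logits, axis=-1)
# mean_prob, std_prob = tf.reduce_mean(probs, 0), tf.math.reduce_std(probs, 0)
```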
An initial assessment of the proposed algorithm on the test dataset, which is mutually exclusive from the training dataset, shows around 70% overall accuracy with respect to the ice charts. We plan to demonstrate the year/season-round performance of the proposed methodology over the Baltic Sea and the Eastern Greenland Sea in the final paper.
Microwave sensors onboard polar-orbiting satellites are commonly used for sea ice monitoring at high latitudes. Since 1992, numerous scatterometer datasets at C- and Ku-band have been available, allowing time series to be built for sea ice monitoring in both the Arctic and the Antarctic.
Backscatter data make it possible to discriminate sea ice from open ocean areas; they can also be used for sea ice type detection (first-year versus multiyear ice in the Arctic), and sea ice displacement maps can be built from them.
These applications were successfully realized using QuikSCAT data at Ku-band, and we now use CFOSAT CSCAT scatterometer data for this purpose. We will show first results over the poles using data from the 2020-2021 and 2021-2022 winters, including sea ice edge / ice-free ocean detection and examples of multiyear ice detection and sea ice displacement. A comparison with ASCAT scatterometer data at C-band, available over the same period, will be presented, keeping in mind that C- and Ku-band data over the poles behave differently.
Long-term, quality-controlled sea ice data have been routinely processed at Ifremer/CERSAT and made available to the scientific community since 1992:
• backscatter maps from C and Ku-band scatterometer
• displacement maps with the joint use of radiometer data
• ice edge
• first year/multiyear detection
providing an exceptional basis for analysis and synthesis of long-term variations of the sea ice in the polar areas.
They are available through CERSAT/Ifremer (http://cersat.ifremer.fr) and are also part of the CMEMS reanalysis datasets (the "multiyear products") and of the H2020 European project INTAROS system of systems of Arctic data. Based on the results obtained with CFOSAT CSCAT data, these new data are to be added to these collections. First results of the use of CFOSAT SWIM data over the poles will be presented if possible.
Sea ice topography, which is dominated by ice ridges, shear zones, rubble fields, and hummocks, is an essential parameter for modeling ice-atmosphere-ocean interactions. Synthetic aperture radar (SAR) has become an invaluable asset for monitoring polar regions thanks to its capability to provide continuous all-weather, day/night imagery on a large spatial scale. The technique of interferometric SAR (InSAR) is employed to interpret topographic information on the Earth's surface. However, due to the dynamic nature of sea ice, retrieval of sea ice topography from InSAR is limited. Note that sea ice topography in this abstract refers to the height of sea ice, including the snow depth, above the water level.
TanDEM-X is a single-pass SAR interferometer that offers for the first time the chance to estimate sea ice topography by the InSAR technique. However, the InSAR-derived digital elevation model (DEM) is actually a measurement of the phase center height. The height bias induced by microwave penetration into snow and ice leads to inaccuracy of InSAR-derived DEM. For sensors operating at X-band, penetration bias for sea ice ranges from zero to ∼1 m, depending on the sea-ice type, salinity, and temperature [1].
The penetration depth of InSAR signals can be inferred from interferometric volume decorrelation, which is one of the critical components of interferometric coherence. The volume decorrelation caused by backscatter contributions from different depths can be derived from the integral of an assumed vertical scattering distribution function. The investigation of vertical distribution functions for various scattering processes, known as the polarimetric-interferometry SAR (Pol-InSAR) technique [2], is widely applied in retrieving geophysical parameters from natural volumes. For sea ice topography, a two-layer plus volume model has been proposed to correct the penetration bias over the thick and deformed sea ice [3].
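For context, the volume-only decorrelation referred to above is commonly written as the normalized Fourier transform of the vertical scattering distribution; the sketch below uses generic Pol-InSAR notation and is not the specific two-layer plus volume model of [3].

```latex
% f(z): vertical scattering distribution over the volume depth h_v
% \kappa_z: vertical interferometric wavenumber
\gamma_{\mathrm{vol}} \;=\;
  \frac{\int_{0}^{h_v} f(z)\, e^{\,i\kappa_z z}\,\mathrm{d}z}
       {\int_{0}^{h_v} f(z)\,\mathrm{d}z},
\qquad
\kappa_z \;=\; \frac{2\pi m\,\Delta\theta}{\lambda\,\sin\theta},
% with m = 1 for single-pass bistatic acquisitions such as TanDEM-X, m = 2 for
% repeat-pass geometries, and \Delta\theta the incidence-angle difference
% between the two acquisitions.
```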
However, the presence of a saline layer at the snow-ice interface due to flooding complicates the application of Pol-InSAR models to snow-covered young ice. In the Antarctic, ice-surface flooding occurs widely because of the thicker snow layer loading the thinner ice floes, especially for younger ice with less buoyancy. In this case, seawater infiltrates the snowpack, floods the ice surface, and creates a highly saline slush layer which may refreeze into snow ice. The main scope of the study is to assess the penetration bias between the InSAR phase center and the snow-air surface and to investigate the possibility of correcting the penetration bias with Pol-InSAR models for snow-covered young ice.
A dedicated campaign between NASA's Operation IceBridge airborne mission and DLR's TanDEM-X satellite mission was successfully conducted in the western Weddell Sea in fall 2017 [4]. In our study, the penetration bias is estimated over the young ice area from the campaign data. In addition, we simulate the interferometric coherence with various penetration biases using several potential Pol-InSAR models. By comparing the simulated results with the observed interferometric coherence and penetration bias, this study assesses the potential of applying the Pol-InSAR technique to estimate the penetration bias over this specific type of ice (i.e., snow-covered young ice).
[1] Hallikainen, M., & Winebrenner, D. P. (1992). The physical basis for sea ice remote sensing. Microwave remote sensing of sea ice, 68, 29-46.
[2] Papathanassiou, K. P., & Cloude, S. R. (2001). Single-baseline polarimetric SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, 39(11), 2352-2363.
[3] Huang, L., Fischer, G., & Hajnsek, I. (2021). Antarctic snow-covered sea ice topography derivation from TanDEM-X using polarimetric SAR interferometry. The Cryosphere, accepted.
[4] Nghiem, S., Busche, T., Kraus, T., Bachmann, M., Kurtz, N., Sonntag, J., ... & Neumann, G. (2018). Remote sensing of Antarctic sea ice with coordinated aircraft and satellite data acquisitions. In 2018 IEEE International Geoscience and Remote Sensing Symposium (pp. 8531-8534).
In the EisKlass2 project, a pre-operational service for supplying improved sea ice information based on data from the Sentinel-1 and Sentinel-3 satellites will be established to support navigation in ice-infested waters. This allows planning of safer routes and better avoidance of dangerous situations. In addition, the service allows for more cost-efficient navigation with less carbon emissions contributing to the European Green Deal.
Sea ice is constantly changing: wind and ocean currents can push together large ice masses and close leads of open water; the pack ice so formed is often not navigable even by icebreakers. With their combination of optical/thermal and radar sensors, the European Sentinel satellites offer the possibility to strongly improve the sea ice situation awareness. Radar data from Sentinel-1 show structures in sea ice in high resolution and independent of cloud cover. Different ice classes can mostly be distinguished by different radar backscatter, but some ice classes exhibit a similar backscatter, limiting the applicability of pure radar-based classification. Sentinel-3 data contain optical/thermal information of water, ice, and snow, allowing further reaching conclusions and refined ice class separation but are contaminated by clouds.
The SAR sea ice classification is based on a Convolutional Neural Network (CNN) classifier and currently discriminates 6 surface types. In contrast, the optical classification, developed in the EisKlass31 project, is continuous. It allows the distinction of 19 surface types, of which 8 classes include open water and sea ice of thickness between 0 and 50 cm. In the classes involving snow cover, properties of the snow formed by cold/warm metamorphosis become apparent. Both classifications will be fused by AI, and the information generated this way provides a more detailed sea ice classification than each sensor separately.
The observational impact of the new ice classification will be assessed in a modelling system that applies the quantitative network design approach and was developed within a study funded by ESA’s Support To Science Element as part of the Arctic+ cluster (see https://arctic-plus.inversion-lab.com/). The system will be applied to assess the added value of the product for the performance of forecasts for sea ice and snow for selected regions in the Arctic that are relevant for navigation.
The operational processing and delivery chain to the end user will also be developed as part of the project. The algorithms to be developed will be automated and executed within a workflow management platform, deployed in an operational environment. The end user delivery is achieved via the progressive web app IcySea; the content from the project is freely available under the URL https://icysea.app. To ensure the acceptance of nautical users, the development of the IcySea user interface is driven by continuous feedback from ice navigators.
Better operational sea ice forecasts in coastal areas are crucial both for climate and marine researchers and for commercial users. A new general circulation sea ice model, neXtSIMv2 (Olason et al., 2021), has been developed at NERSC based on the brittle Bingham-Maxwell rheology framework to simulate realistic sea ice conditions for regions as large as the whole Arctic and over time scales of up to several years. Sea ice deformation simulated with this model spontaneously localizes along linear-like faults separating essentially undamaged ice plates/floes.
Two novel SAR-based observations available in the Copernicus Marine Environment Monitoring Service (CMEMS) portfolio, sea ice type and sea ice deformation, are assimilated in neXtSIM. Sea ice type is derived from Sentinel-1 SAR images using the NERSC algorithm based on convolutional neural networks (Boulze et al., 2020). Ice type is used for the computation of sea ice thickness as a function of season and for the initialisation of ice thickness and ice edge in the model. The high-resolution SAR-based sea ice drift product delivered by DTU is used for the computation of the sea ice deformation components, divergence and shear. Ice deformation observations are crucial for detecting the formation of sea ice leads and ridges, which affect air-sea interactions, Arctic fauna and shipping activities. Deformation fields are interpreted in terms of model concentration and damage and are assimilated into neXtSIM.
The neXtSIM forecasts are validated against operational ice charts and satellite observations of sea ice type and deformation. The forecasting skill is improved over a 4-5 day horizon. Assimilation and validation procedures are integrated into the forecasting platform neXtSIM-F. Operational forecasts are provided to the French Hydrographic and Oceanographic Service (SHOM) and will be integrated into the Copernicus Marine Environmental Monitoring Service (CMEMS).
Inspired by the external evaluation of the Cryo-Tempo sea ice product based on CryoSat-2 data, we present the user needs for satellite-based sea ice information from the viewpoint of winter navigation and offshore operations. We then give examples of existing ESA, Copernicus, EUMETSAT and other data products that address those needs. Finally, we propose to compile a Level-4, user-oriented, merged sea ice conditions product from existing ESA and other satellite products.
The need for sea ice information in areas where operations are planned usually revolves around both safety and the cost of operations. The more severe the ice conditions, the more expensive the operations will be. Thus users need information on the ice conditions to optimise their infrastructure for a given task in a given area. Furthermore, the International Code for Ships Operating in Polar Waters (Polar Code) states that the operating manual of ships should include information on ice conditions “with respect to periods during which the ship should be able to operate for intended areas of operation. Areas that pose particular problems, e.g. chokepoints, ridging, as well as worst recorded ice conditions should be noted.” In addition to the worst ice conditions mentioned in the Polar Code, it is common practice in the industry to give examples of extreme - both mild and severe - ice conditions as well as an example of an average winter. Thus our goal is to specify a product that contains the necessary information for ice condition studies and is easy to use, visualise and browse to find average and extreme conditions.
One of the key parameters for operations planning is the duration of the ice-free period in summer. This is easily derived from sea ice concentration datasets, such as the ESA CCI+ or OSI-SAF products. Another important variable is the extent of multiyear ice (MYI). Many ships are designed for operations in first-year ice, but ships strong enough to cope with harder multiyear ice are expensive and thus rare. Consequently, flagging areas where and when multiyear ice can be encountered is important. Several ice age products are available, e.g. from NSIDC and OSI-SAF. Interestingly, FYI/MYI classification is required in the freeboard-to-SIT conversion, and consequently the MYI fraction is available in the L2i Sea Ice Thickness CCI+ CRDP. In addition to MYI, ice of land origin, that is, icebergs, poses a significant risk to operations. Thus an iceberg concentration product, such as the one from CMEMS, should also be included.
Ice speed is a major factor for operations in ice. This is especially relevant for stationary offshore structures where the loads against the structure are induced by ice speed. Furthermore, landfast ice poses different challenges to navigation than drift ice.
Finally, if available, the thickness distribution will bring considerable added value to the analysis of ice conditions. Traditional ice charts contain the stages of development of ice within a polygon with their partial concentrations, which can be interpreted as a thickness distribution. Importantly, the sea ice thickness distribution, together with an ice age estimate, allows the calculation of the POLARIS Risk Index Outcome (RIO), as illustrated below. Sea ice thickness estimates (for example the ESA CCI+ Level-2 CRDP) include thickness estimates along-track, from which a thickness distribution can be calculated. However, a limitation of all current radar altimeter based thickness products is that they do not cover the summer months. Only estimates for October to April are available, and thus the important melt season and open-water season onset are missed.
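As an illustration of how a thickness-distribution and ice-age based risk measure is computed, the sketch below follows the POLARIS definition of the Risk Index Outcome as the concentration-weighted sum of Risk Index Values; the RIV numbers shown are placeholders only, and operational values must be taken from the IMO POLARIS guidance for the vessel's ice class.

```python
# Hedged sketch of the POLARIS Risk Index Outcome (RIO): the sum over ice types of
# partial concentration (in tenths) times the Risk Index Value (RIV) for the vessel's
# ice class. The RIV table below is illustrative, not the official POLARIS table.
EXAMPLE_RIV = {
    "open_water": 3,
    "new_ice": 2,
    "grey_ice": 1,
    "first_year_ice": -1,
    "multiyear_ice": -4,
}

def risk_index_outcome(partial_conc_tenths, riv=EXAMPLE_RIV):
    """partial_conc_tenths: dict mapping ice type -> concentration in tenths (sums to 10)."""
    return sum(c * riv[ice_type] for ice_type, c in partial_conc_tenths.items())

# e.g. 3/10 open water, 5/10 first-year ice, 2/10 multiyear ice:
# rio = risk_index_outcome({"open_water": 3, "first_year_ice": 5, "multiyear_ice": 2})
# rio >= 0 -> normal operation; rio < 0 -> operation subject to limitation or not permitted.
```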
We shall present a sample compilation of satellite-derived sea ice products. We shall also consider the improvements to current products expected from the three future Copernicus High Priority Candidate missions CIMR, ROSE-L and CRISTAL, emphasising that no single mission can address all sea ice related user needs.
Sea ice thickness and volume are essential climate variables that critically contribute to the characterisation of Earth's climate. A long-term observation of sea ice thickness from space is furthermore an important contribution to sea ice assimilation and prediction studies. Daily Arctic-wide thin sea ice thickness has been estimated from the brightness temperatures measured by the L-band radiometer on board ESA's Soil Moisture and Ocean Salinity mission (SMOS) since 2010. The dataset provides valuable information on the thin-ice thickness distribution and its variation over 12 Arctic winter periods and has been widely used for sea ice thickness inter-comparison studies and for the initialization of model simulations.
Due to the broad swath-width, SMOS has almost daily coverage in both polar regions. The large penetration depth of L-band in the sea ice layer makes it possible to extract information of sea ice thickness up to 1 m. The maximum retrievable ice thickness depends on the brine volume in the ice, which in turn depends on ice salinity and ice temperature. The data set has been compared to and validated with in-situ measurements, remote-sensing data, and sea-ice assimilation systems.
The SMOS ice thickness retrieval shows lower uncertainties for thin ice but loses sensitivity for ice thicker than 1 m. In contrast, the altimeter-based ice thickness observations from CryoSat-2 (CS2), which has also been in operation since 2010, show large uncertainties for thin ice. In particular, the uncertainty in snow estimation causes large uncertainties in the altimeter-based sea ice thickness retrieval. Both SMOS and CS2 ice thickness retrievals are strictly limited to cold periods and are not applicable during late spring and summer. Weekly ice thickness maps combining the two satellite datasets have been produced with an optimal interpolation scheme at the Alfred Wegener Institute in the framework of ESA's project “SMOS & CryoSat-2 Sea Ice Data Product Processing and Dissemination Service”. The data product is publicly available at https://spaces.awi.de/confluence/display/CS2SMOS and https://smos-diss.eo.esa.int/oads/access/.
Despite Synthetic Aperture Radar (SAR) being a prime source of sea ice information for operational ice charting services, automatic and semi-automatic algorithms that derive ice type or ice-water classes, and subsequently sea ice concentration (SIC), have been limited in their application. This is due to the overlapping signatures of various ice types and open water in the SAR backscatter. Additionally, a varying sea surface state alters the SAR backscatter intensity from open water and thus further complicates the automatic separation of sea ice and open water areas. However, recent advancements in SAR sea ice classification algorithms have shown promise in handling this constraint. Consequently there is now scientific interest, for example through the SIRANO project (https://cryo.met.no/en/sirano), in merging SAR with Passive Microwave Radiometer (PMR) data to produce an accurate and high spatial resolution SIC dataset. Prior to this data merging, it is timely to first understand the quality of the SIC that can be produced purely from SAR satellites. Therefore this research firstly validates the SAR-derived SIC against high-resolution multispectral imagery and secondly compares the SAR SIC to PMR data.
The research applies Sentinel-1 data with the latest SAR sea ice type algorithm (Lohse et al., 2020) to determine how the algorithm parameters impact the final SIC and to identify whether SAR can indeed produce a SIC product with low measurement uncertainty whilst maintaining the benefits of high-resolution imagery. By adjusting the algorithm parameters, the SAR SIC is incrementally coarsened in pixel resolution with the aim of minimising the SAR backscatter ambiguities and consequently reducing the measurement uncertainty. This SAR SIC will be validated against a SIC derived from Sentinel-2 multispectral data and compared against PMR AMSR2 data to determine whether the higher measurement uncertainty present in the SAR is attributable to its finer spatial resolution or to how the SAR derives SIC.
PMR data are vital for providing SIC due to the strong contrast in emissivity between ice and water and the sub-daily imaging capabilities. As a result, PMR is capable of producing SIC in winter with a low measurement uncertainty of ~5%, which increases in areas of thin ice or when melt ponds are present. The main drawback of PMR is its coarse resolution, at best 5 km when using AMSR2 data with a single high-frequency (89 GHz) dual-channel (89V & 89H) algorithm. This means its assimilation into ice/ocean models is not optimal, since the PMR data are coarser than the grid resolution of regional ocean/ice model forecasts, such as the Barents-2.5km model.
A SAR sensor such as Sentinel-1 has a much finer imaging capability, ~93 m × 87 m when using the ground-range detected medium resolution (GRDM) product, which ensures greater surface homogeneity within a pixel. This means that SAR SIC can be derived by classifying pixels and then aggregating sea ice pixels within an area. SAR therefore has the potential to produce a SIC product that maintains finer sea ice details than the PMR. As previously mentioned, however, SAR backscatter ambiguities cause issues when classifying images. Also present in the raw SAR imagery is speckle, a signal-dependent granular noise that can impact the capabilities and performance of automated and semi-automated methods and must be handled correctly to produce accurate results. A promising SAR development for ice classification has been the treatment of the backscatter incident angle as an ice type class property rather than an image property that has to be corrected during pre-processing (Lohse et al., 2020). Additionally, grey level co-occurrence matrix SAR texture features have shown particular improvements in separating ice and water, and can be directly incorporated into the model with a linear incident angle dependency (Lohse et al., 2021).
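A minimal sketch of the classify-then-aggregate step described above follows; the block size and the binary ice/water input are assumptions chosen only to show how coarsening the SIC grid trades spatial detail for reduced classification noise.

```python
import numpy as np

def sar_sic_from_classes(ice_water_map, block=50):
    """Aggregate a binary ice(1)/water(0) classification into SIC on a coarser grid.

    With ~90 m Sentinel-1 GRDM pixels, block=50 gives roughly 4.5 km SIC cells;
    coarser cells average out per-pixel classification ambiguities at the cost
    of spatial detail.
    """
    h, w = ice_water_map.shape
    h_c, w_c = h - h % block, w - w % block          # crop to a multiple of the block size
    blocks = ice_water_map[:h_c, :w_c].reshape(h_c // block, block, w_c // block, block)
    return blocks.mean(axis=(1, 3)) * 100.0          # percent sea ice per cell
```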
These methods are now applied specifically to derive SIC, since, if handled correctly, the SAR SIC has the potential to maintain finer sea ice details, such as the ice edge and leads, that the PMR is unable to resolve. Investigating the technical capabilities and limitations of SAR SIC provides the knowledge required for later research that will combine SAR and PMR SIC data.
References
J. Lohse, A. P. Doulgeris, and W. Dierking, “Mapping sea-ice types from Sentinel-1 considering the surface-type dependent effect of incidence angle,” Ann. Glaciol., pp. 1–11, Jun. 2020, doi: 10.1017/aog.2020.45.
J. Lohse, A. P. Doulgeris, and W. Dierking, “Incident Angle Dependence of Sentinel-1 Texture Features for Sea Ice Classification,” Remote Sensing, vol. 13, no. 4, Art. no. 4, Jan. 2021, doi: 10.3390/rs13040552.
Identifying ice types in the early stages of development from L-band SAR imagery remains an active research area for the Arctic freeze-up period. We used ScanSAR C- and L-band SAR imagery from RADARSAT-2 and ALOS-2 PALSAR-2, respectively, to identify ice types in the North Water Polynya (NOW) region. We investigated the HH-polarized microwave backscatter coefficient (σ) and its GLCM texture parameters for six ice classes and open water. We found very low σ for nilas at both C- and L-band. Although similar σ is found for grey ice at both frequencies, σ decreases with increasing ice thickness at L-band, whereas at C-band σ increases from grey to grey-white ice and then decreases as the ice grows. GLCM texture parameters show lower values at L-band than at C-band; however, separability among classes was found only for a few selected parameters. We used the support vector machine (SVM) algorithm for ice type classification from SAR scenes using σ and GLCM texture statistics. We found higher classification accuracy at L-band (79%) than at C-band (57%). Due to analogous σ signatures at C-band, early-stage ice classes are substantially misclassified. In contrast, L-band could identify early-stage ice classes with higher accuracy; however, it misclassified thicker ice types and open water. Considering the contrasting responses of the two frequencies, we used a novel integrated SVM classifier combining both frequencies, in which open water and multiyear ice were classified with C-band and early-stage ice classes were identified with L-band. Overall, the classification accuracy using the dual-frequency classifier improved substantially (to 94%). This research shows the value of the upcoming L-band SAR missions in a warming Arctic where thinner ice has become the dominant ice type.
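A hedged sketch of the two-stage, dual-frequency idea follows: one SVM trained on C-band features separates open water and multiyear ice from the remaining classes, and a second SVM trained on L-band features resolves the early-stage ice types. Class codes, features and SVM settings are illustrative assumptions, not the exact classifier of this study.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative class codes: 0 open water, 1 multiyear ice, 2 "early-stage ice" for the
# C-band stage; the L-band stage splits code 2 into nilas / grey / grey-white ice (3-5).
svm_c = SVC(kernel="rbf")   # fit on C-band sigma0 + GLCM features
svm_l = SVC(kernel="rbf")   # fit on L-band sigma0 + GLCM features
# svm_c.fit(xc_train, yc_train); svm_l.fit(xl_train, yl_train)

def classify(x_c, x_l):
    """Two-stage prediction: C-band decides water/MYI/early ice, L-band refines early ice."""
    labels = np.asarray(svm_c.predict(x_c), dtype=int)
    early = labels == 2
    if early.any():
        labels[early] = svm_l.predict(x_l[early])
    return labels
```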
Sea ice is a critical component of the Earth’s climate system. It regulates the exchange of heat, moisture and momentum between the ocean and atmosphere, reflects incoming radiation through the albedo effect, and redistributes salt within the ocean. However, evidence of reducing Arctic sea ice extent has been observed since the satellite record began, coinciding with significant atmospheric warming. The Arctic is responding rapidly to rising global temperatures, due in part to the positive feedback cycle exemplified by melting sea ice. This will have serious implications for climate, local ecosystems and ship navigation.
Since the beginning of the satellite record, sea ice extent and concentration have been measured using passive microwave satellite retrievals, yet measurements of sea ice thickness were historically spatially and temporally limited by the availability of in-situ observations. However, developments in satellite altimetry - notably the success of CryoSat-2 - have transformed and advanced measurements of sea ice thickness in the Arctic. The CryoSat mission provides extensive coverage up to 88˚ N, presenting a consistent winter sea ice-thickness record for the Arctic Ocean. In addition, since 2018, the Sentinel-3A and –3B satellites have contributed to this record, greatly improving the spatial and temporal resolution when compared to using CryoSat-2 data alone. As a result, there is now over a decade of Arctic sea ice thickness and volume change measurements outside of summer months.
We investigate regional and decadal trends in Arctic sea ice thickness, combined with in-situ data from ships, to explore the drivers of sea ice loss in key sectors of the Arctic. We use point measurements from the Centre for Polar Observation and Modelling CryoSat-2 product, combined with measurements from Sentinel-3A and -3B, to explore space-time variations in thickness and floe distribution in key regions. We focus on regional variations in ice thickness, and on how these changes in sea ice will impact climate, ecosystems and the economy. A better understanding of the mechanisms that have driven historical retreat will improve future projections of sea ice loss and its implications.
Sea ice is an important indicator of change in the global climate system. The extent and concentration of Arctic sea ice cover has been measured with satellite retrievals from passive microwave sensors since 1978, providing a 40-year long record. Unfortunately, similarly long-term records of sea ice thickness are not available. Basin-scale thickness retrievals from satellite altimeters only go back to the early 2000s and represent a rather short record for examination of decadal changes. Moreover, satellite altimetry observations are poor near the coast, making observations in narrow channels and straits difficult.
Sea ice concentration, age, and floe size have been observed and recorded in ice charts by national ice services for decades. These ice charts are created for shipping and offshore construction purposes and have rarely been used for scientific reasons. As sea ice thickness is related to ice concentration, age and floe size, we can use these charts to create a sea ice thickness proxy product.
We’ve created a machine learning model that predicts sea ice thickness from information in the Canadian Ice Service ice charts. This model is trained on recent ice charts and CryoSat-2 sea ice thickness observations (2011-2021). We apply the model to estimate sea ice thickness from older ice charts, going back to the 1960s for the Canadian Arctic. This creates the longest record of large-scale Canadian Arctic sea ice thickness to date. The proxy-sea ice thickness product will be used to study long-term trends and variability in one of the fastest changing regions of the Arctic. This model also predicts sea ice thickness in areas that cannot be measured well by satellite altimetry, for example channels in the Canadian Arctic Archipelago, which will be validated with airborne field datasets.
Other national ice services, including those of Norway, Denmark, and Russia, also produce ice charts. The next step is to extend this machine learning model to the ice charts from these other regions of the Arctic.
Traversing Arctic waters safely and efficiently necessitates up-to-date charts of the constantly moving and changing sea ice conditions, highlighting the contemporary sea ice extent, local concentration, and auxiliary descriptions of the ice conditions. For several decades, sea ice charts have been produced manually by visually inspecting and analyzing satellite imagery, a time-consuming and resource-intensive task. Deep learning and Convolutional Neural Networks (CNN) have shown promising results in automating this task. Benefits include reducing production time from hours to minutes, increasing the number of ice charts produced and the area covered, and freeing up human labor.
Synthetic Aperture Radar (SAR) images are often used for sea ice charting due to their high resolution and the capability of acquiring images independently of clouds and sun illumination. However, the backscattering signatures of SAR are difficult to interpret and require trained ice analysts to describe the sea ice conditions. In addition, there are ambiguities in the backscattering signature between open water and sea ice. Examples are wind-roughened oceans and specific ice conditions, such as compact or landfast ice. Observations from other space-borne sensors are employed by the ice analysts when available and advantageous, including optical and passive microwave radiometer (PMR) observations. In optical imagery, the difference between bright white sea ice and dark blue open water is easily distinguishable but the dependence on Sun illumination and cloud-free conditions reduces the utility for operational sea ice charting. The microwave signatures of sea ice and open water in PMR observations are generally easily distinguishable, but the coarse resolution (typically tens of kilometers) limits its use for detailed sea ice charting and nautical navigation purposes.
For automatic sea ice charting models, it can be advantageous to use SAR data alone rather than data fusion models. This is primarily due to the simplification of the operational data pipeline, which allows ice chart production during periods when the secondary data source is not available. It ensures the production of the maximum number of ice charts and the quickest production time in cases where the model would otherwise have to wait for the secondary data source to be captured. However, standalone SAR models are only advantageous if the previously mentioned ambiguous electromagnetic signatures in the SAR images are resolved with high reliability and certainty. The primary obstacles for standalone SAR models are: electromagnetic noise from the Sentinel-1 TOPSAR subswath transitions, adequate prediction of intermediate sea ice concentrations, narrow fjords, and ambiguous scenarios such as wind-roughened seas and homogeneous or landfast sea ice.
Here, we present advances and remaining obstacles in automatic sea ice charting using a CNN trained on standalone Sentinel-1 SAR from the AI4Arctic / ASID V2 AI-ready dataset. Approaches to address the aforementioned obstacles are: applying an alternative SAR noise correction scheme developed by the Nansen Center, increasing the receptive field and depth of the applied U-Net model, selecting an appropriate loss function to reflect the proximity of classes during training, and adding auxiliary variables such as seasonal and geographical information.
The Automated Polynya Identification Tool (APIT) is a machine learning based tool that aims to identify and delineate polynya formations in both space and time. These often small and short-lived phenomena frequently go undetected and are important for climate scientists seeking to understand polar system change. APIT is a rapid, computationally efficient and low-cost method for locating polynya formations relative to current in-situ surveying methods.
APIT is currently in early development and at a prototype stage, with MODIS imagery the only sensor applied so far, using the widely recognised 2017 Weddell Sea polynya to train the tool. The use of an optical sensor in the polar regions is limited by cloud cover and by the polar seasonal daylight hours; therefore, going forward, this tool will integrate alternative Earth Observation data, including but not limited to Sentinel-1, Soil Moisture and Ocean Salinity (SMOS) and CryoSat-2. The next APIT development stages contemplate the use of other auxiliary datasets, including those from the Ocean and Sea Ice Satellite Application Facility (OSI-SAF) and the European Centre for Medium-Range Weather Forecasts (ECMWF), before implementing a machine learning detection process. Furthermore, for the provision of early-warning predictions, APIT will characterise the patterns and other oceanographic conditions occurring at different stages of polynya evolution (i.e. before, during and after each event).
The deployment of APIT will not only contribute to climate science by providing near-real-time locality information on polynya openings, but will also act as an early warning system, using machine-learning algorithms alongside open-source near-real-time data to enable the re-routing of research vessels to take in-situ measurements. Providing opportunities for field research during the life cycle of a polynya will contribute to understanding the reasons behind their formation, alongside the impacts these warm water openings have at local, regional and global scales.
The first proof of concept of APIT will be tested under the Southern Ocean Freshwater project (SO Fresh), an ESA-funded project (2021-2023). The APIT output will aid SO Fresh and the wider community of scientists studying polynyas in the Southern Ocean.
Sea-ice roughness is an important parameter in the momentum, heat, and moisture transfer between atmosphere, ice, and ocean. For pack ice, the roughness of sea ice influences ice dynamics and the susceptibility of the ice to forcing by winds and currents. Sea ice topography, which determines sea ice roughness and drag coefficients, can be derived from airborne laser altimetry data, which are limited in terms of spatial coverage. Attempts to map sea ice roughness on a pan-Arctic scale have thus far been limited to airborne data extrapolated across the Arctic using scatterometer satellite radar backscatter data as a proxy. The lack of satellite altimeter derived drag coefficients can be attributed to the low resolution (>1 km) and limited spatial coverage of radar altimeters. The new laser altimeter ICESat-2, operational since October 2018, has pushed the limits of altimetry with its 13 m footprint and 10 kHz pulse repetition frequency (corresponding to photon returns every 0.7 m). With this higher-resolution satellite topographic data, drag coefficients can be derived following the existing parameterizations that have been used to derive drag from airborne topographic data. Based on such approaches, sea ice drag coefficients can be mapped on a pan-Arctic scale with monthly temporal resolution. Areas with relatively high drag coefficients can be identified by calculating form drag (drag associated with ridges) over 10 km orbit segments, which are then accumulated over a full month to achieve almost complete Arctic-wide coverage.
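A hedged sketch of a ridge-based form drag estimate over one track segment follows, based on commonly published sail-height parameterizations; the effective resistance coefficient, roughness length and 10 m reference height are typical literature values, not necessarily those adopted in this work.

```python
import numpy as np

def form_drag(ridge_heights, segment_length, ce=0.25, z0=1e-4, z_ref=10.0):
    """Neutral form drag coefficient at z_ref from ridge sail heights (m)
    detected along a track segment of length segment_length (m).

    Each ridge contributes half its height times an effective resistance
    coefficient ce, weighted by a logarithmic velocity-profile term; constants
    here are illustrative defaults, not tuned values.
    """
    h = np.asarray(ridge_heights, dtype=float)
    weight = (np.log(h / z0) / np.log(z_ref / z0)) ** 2   # velocity profile weighting
    return 0.5 * ce * np.sum(h * weight) / segment_length

# e.g. ridges detected in a (hypothetical) 10 km ICESat-2 segment:
# cd_f = form_drag(ridge_heights=[0.6, 1.1, 0.8], segment_length=10_000.0)
```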
Despite the impressive resolution and accuracy of ICESat-2, the ATL07 processed sea ice height product still struggles to capture all along-track ridges, a parameter that is central to the drag coefficient calculations. We therefore use the MOSAiC Airborne Laser Scanner (ALS) data, as well as NASA's Operation IceBridge (OIB) data, to fine-tune the existing parametrization so as not to underestimate drag.
Currently, the temporal evolution of drag and sea ice roughness is being studied and compared with parameters like sea ice concentration and thickness. Correlations of this type would facilitate the integration of roughness parameters in the coupled atmosphere-ice-ocean model HIRHAM-NAOSIM, which is one of the end-goals of this project. With spatio-temporal sea ice roughness variability, the coupled model should be able to more realistically capture interactions within the coupled atmosphere-ice-ocean system.
Manual sea ice charting from multi-sensor satellite data analysis has for many years been the primary method at the National Ice Services for producing sea ice information for marine safety. Ice analysts primarily use satellite synthetic aperture radar (SAR) imagery due to the high spatial resolution and the capability to image the surface through clouds and in polar darkness. Auxiliary satellite observations, including optical imagery and passive microwave radiometer (PMR) observations, are used when available and advantageous. Optical imagery, however, requires a clear sky and daylight conditions, and ice analysts mention the coarse spatial resolution of microwave radiometers as a significant limitation of the utility of PMR observations for sea ice charting.
The traditional process of manually drawing sea ice charts is time-consuming, and therefore the number of ice charts produced on a given day is limited. With a growing number of users accessing wider parts of the Arctic due to the thinning of the Arctic sea ice, along with the ever-increasing body of readily available satellite data, the manual interpretation of these data becomes a laborious task. Furthermore, there is a time delay between data acquisition by the satellite and the delivery of an ice chart to users, which reduces the value of the information in the ice chart - especially, in regions in which the sea ice conditions are particularly dynamic.
The automation of the time-consuming and labour-intensive sea ice charting process has potential to provide users with near-real time sea ice products of high spatial and temporal resolution, covering a larger geographical area, and with increased consistency. Advances in deep learning and computing technology over the last decade have paved the way for the use of sophisticated computer vision techniques for the automatic analysis of high resolution satellite imagery.
Here, we present a carefully designed Convolutional Neural Network (CNN) to fuse high resolution Sentinel-1 SAR imagery and PMR observations from AMSR2 to generate maps of the sea ice in Greenland waters. Automating the sea ice charting process on SAR imagery alone is challenging. SAR images show patterns related to ice formations, but backscatter intensities are often ambiguous, which complicates the discrimination between sea ice and open water, e.g. at high wind speeds or for certain ice conditions. Our CNN model tackles this obstacle by fusing the Sentinel-1 SAR imagery with AMSR2 PMR observations to exploit the advantages of both instruments - that is, the high spatial resolution of the SAR imagery, and the reliable discrimination of sea ice and open water in the PMR observations. Our CNN model is a multi-tasking model that generates maps of several sea ice parameters simultaneously; sea ice concentration, sea ice stage of development (type) and sea ice form (floe size). Our CNN model has been trained on a large dataset of 461 ice charts manually produced by the ice analysts at the Greenland Ice Service at the Danish Meteorological Institute (DMI) based on Sentinel-1 imagery. The dataset also contains the corresponding AMSR2 swath co-located with the ice charts and Sentinel-1 images. The sea ice training dataset (https://doi.org/10.11583/DTU.13011134.v2) has been produced in the ESA-funded AI4Arctic project. The model is currently in production at DMI (http://ocean.dmi.dk/asip/).
We will present the results of merging active and passive microwave data from Sentinel-1 and AMSR2 as input to our CNN model for sea ice mapping in Greenland waters, and show how the passive microwave observations help resolve ambiguities in the SAR imagery, while maintaining high spatial resolution.
The thinning of sea ice and the reduced sea ice cover imply that more sea areas will become available for maritime traffic in the Arctic, where newly formed sea ice areas and leads provide safe routing for ship traffic and cost-effective passage through ice. In the Barents Sea, most of the sea ice is formed locally, with a fraction imported from the Arctic Basin through the straits between Svalbard and Novaya Zemlya. New ice formation takes place during a large part of the year, both in the marginal ice zone (MIZ) and within sea ice leads.
A comprehensive high-resolution monitoring of new ice formation is possible using Synthetic Aperture Radar (SAR), thanks to its day-and-night and all-weather capabilities as well as its superior resolution in comparison with other large-scale monitoring instruments such as microwave radiometers. Sentinel-1 imagery is freely available from ESA, and the Extra-Wide Swath (EW) mode at 100 m resolution provides near-daily coverage of the Arctic. SAR images cannot be used to derive absolute sea ice thickness estimates, but a qualitative distinction between “new” and older ice can be made by observing the surface properties of the former versus the latter. The smooth surface of new ice results in a very low backscatter signature on SAR relative to the surrounding thicker sea ice, an advantage for visibility but a disadvantage when taking into account the proximity of measured values to the noise floor, as well as confusion with other targets with similar signatures (“lookalikes”, i.e. oil spills, natural oil seeps, rain cells, low winds and others).
Mapping the occurrence of newly formed sea ice and lookalikes in the Arctic waters on a seasonal basis would add to our understanding and knowledge of these phenomena. Awareness of the major locations of newly formed sea ice, but also oil spills, are important for operational services and their effort to reduce false alarms.
We have developed a robust algorithm for detecting low-backscatter areas representing new ice and lookalikes, focused on the Sentinel-1 EW mode and its specific noise corruption problems which include stitching artefacts at the sub-swath boundaries. Based on a statistical mixture model, the algorithm uses intensity values coupled with incidence angle values (to account for the intensity decay from near- to far-range) and noise estimates and is thus able to identify the low backscatter areas as one segment across all swaths – while also limiting noise artefacts, in particular at the boundary between swath 1 and swath 2.
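A minimal sketch of a two-component mixture segmentation of this kind is shown below, using a Gaussian mixture on backscatter and incidence angle as a simplified stand-in; the operational algorithm described above additionally uses noise estimates and treats the sub-swath structure explicitly.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def dark_area_mask(sigma0_db, incidence_deg):
    """Flag pixels belonging to the low-backscatter population (new ice / lookalikes).

    Fitting the mixture jointly on backscatter (dB) and incidence angle lets the
    components absorb part of the near- to far-range intensity decay.
    """
    X = np.column_stack([sigma0_db.ravel(), incidence_deg.ravel()])
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
    labels = gmm.predict(X).reshape(sigma0_db.shape)
    dark_component = np.argmin(gmm.means_[:, 0])   # component with the lower mean backscatter
    return labels == dark_component
```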
We show results obtained from processing approx. 20 scenes acquired in the Barents Sea during the freezing season (November - April), demonstrating the potential for large-scale operational monitoring of the above-mentioned targets in the Arctic Ocean. Tuning a number of input parameters remains the only operator-driven step before running the algorithm, a step that can be optimized. Once completed, the task can be fully automated on a pan-Arctic scale.
The follow-up to this work will be to investigate the separation between thin ice and its lookalikes, potentially using polarimetric features from co-located quad-polarimetric SAR imagery, ocean surface temperature data and passive microwave.
Sea-ice drift is a key variable for understanding sea ice in a changing climate, and an Essential Climate Variable (ECV) product for the Global Climate Observing System (GCOS). In the Arctic, sea ice has been reported to drift faster (e.g. Rampal et al., 2009), concomitant with its reduction in area, general thinning, and loss of multiyear ice. In the Antarctic, trends in sea-ice drift have been linked to trends in wind patterns (e.g. Holland and Kwok, 2012). Sea-ice drift is in addition important for operational forecasting, for example for plotting safe navigational routes for polar ships. Sea-ice drift vectors have traditionally been derived at large scales from passive microwave (PMW) and scatterometer imagery, and at finer spatial resolution from Synthetic Aperture Radars (SAR). A sparse network of on-ice buoys provides ground truth for validation.
In this contribution, we present a new global 30-year Climate Data Record (CDR) of sea-ice drift vectors from 1991 to 2020. It uses the continuous maximum cross-correlation technique (CMCC) for measuring sea-ice drift, by matching features in a pair of satellite images (Lavergne et al., 2010). During summer, this technique becomes far less accurate due to surface melting and higher atmospheric humidity. We therefore employ a computational free-drift model of the ice to fill the data gaps in the summer. This model calculates the ice drift based on wind vectors from the ERA5 wind reanalysis, under the assumption that the internal stresses of the ice can be neglected. We describe the algorithm baseline for the new CDR as well as results of validation against buoy trajectories, with a focus on the temporal consistency across the satellite missions. This CDR was created in the context of the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF).
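A minimal free-drift sketch of the kind used for the summer gap-filling: ice velocity is taken as a fixed fraction of the 10 m wind, rotated by a turning angle, with internal ice stress neglected. The 2% wind factor and 20 degree turning angle are typical textbook values, not necessarily those tuned for this CDR.

```python
import numpy as np

def free_drift(u10, v10, alpha=0.02, theta_deg=-20.0):
    """Return (u_ice, v_ice) from 10 m wind components (m/s).

    alpha: wind factor (fraction of wind speed transferred to the ice);
    theta_deg: turning angle of the ice drift relative to the wind
    (negative = to the right of the wind, Northern Hemisphere convention).
    """
    theta = np.deg2rad(theta_deg)
    u_ice = alpha * (np.cos(theta) * u10 - np.sin(theta) * v10)
    v_ice = alpha * (np.sin(theta) * u10 + np.cos(theta) * v10)
    return u_ice, v_ice

# e.g. ERA5 10 m winds of (8, 2) m/s give an ice drift of roughly 0.16 m/s,
# rotated 20 degrees to the right of the wind.
```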
We additionally present the results of an investigation into calculating sea-ice drift vectors from pairs of individual swaths of PMW missions rather than from daily gridded satellite imagery, today's status quo. When applied operationally, calculating sea-ice drift vectors on Level-2 swath data will have several advantages, including shorter latency, more sea-ice drift vectors, and better accuracy of the product. This study (Lavergne et al., 2021) was undertaken in preparation for the Copernicus expansion mission Copernicus Imaging Microwave Radiometer (CIMR).
We finally discuss outlooks for sea-ice drift monitoring from space, e.g. merging the information from passive microwave missions like CIMR with that from SAR missions such as Sentinel-1NG and ROSE-L.
References:
Holland, P., Kwok, R. Wind-driven trends in Antarctic sea-ice drift. Nature Geosci 5, 872–875 (2012). https://doi.org/10.1038/ngeo1627
Lavergne, T., Eastwood, S., Teffah, Z., Schyberg, H., and Breivik, L.-A. (2010), Sea ice motion from low-resolution satellite sensors: An alternative method and its validation in the Arctic, J. Geophys. Res., 115, C10032, doi:10.1029/2009JC005958.
Lavergne, T., Piñol Solé, M., Down, E., and Donlon, C.: Towards a swath-to-swath sea-ice drift product for the Copernicus Imaging Microwave Radiometer mission, The Cryosphere, 15, 3681–3698, https://doi.org/10.5194/tc-15-3681-2021, 2021.
Rampal, P., Weiss, J., and Marsan, D. (2009), Positive trend in the mean speed and deformation rate of Arctic sea ice, 1979–2007, J. Geophys. Res., 114, C05013, doi:10.1029/2008JC005066.
With the rapid decrease in summer sea ice extent and thickness in the Northern Hemisphere, sea ice monitoring has gained interest over the past decades. The decline in sea ice offers new possibilities for offshore operations and marine traffic in the Arctic Ocean. These human activities require accurate and reliable ice charts, which are produced by national ice services on a regular basis. At present, ice chart production is performed manually by expert ice analysts, based primarily on the analysis of synthetic aperture radar (SAR) data. Producing an ice chart is a time-consuming task that is prone to subjectivity. On the other hand, the amount of available satellite data is continuously increasing, which raises the need for (semi-)automation in sea ice mapping.
Multiple fully automated methods for sea ice classification have been successfully proposed. These fully automated methods, however, suffer from two major bottlenecks. Firstly, they often need a lot of accurately annotated data for training, which can be difficult to obtain for sea ice. Secondly, the models need to be re-trained when ice class properties change, e.g. for different seasons, which complicates the direct use of these methods in operations. Semi-automated methods can overcome both these bottlenecks, and furthermore allow for the mixed use of human expertise and automation, which makes the step towards operational use easier.
In this work we therefore apply a semi-automated method [1] for the classification of sea ice types. We believe a semi-automated model fits very well to the sea ice classification task, because semi-automated models take into account all the available data and not just the labeled data. Since sea ice remote sensing data sets are often scarcely labeled, this is a major advantage.
In recent years, the increasing number of satellites has opened up the possibility to obtain overlapping images acquired by different sensors. For the case of sea ice observations, it has been shown that in particular SAR data acquired at C- and L-band frequency can provide complementary information about the surface. Various studies have demonstrated that the combination of co-registered C- and L-band images can resolve some of the ambiguities inherent in single-frequency SAR data, and hence significantly improve classification accuracy. For this reason, we evaluate our method on, among others, the ICESAR dataset, which contains airborne, overlapping C- and L-band images acquired in the Fram Strait in 2007 [2].
We propose and evaluate a scheme for semi-automated classification of sea ice types using multimodal remote sensing data. The main contributions are the following: 1) the method is versatile and solves the aforementioned issues of fully automated methods, and 2) the method can easily accommodate any combination of modalities and exploit their complementary information.
Different remote sensors often acquire data of different types, which results in a heterogeneous data set. Heterogeneous data is a major challenge in multimodal data analysis. To deal with this, our method represents the data as a graph. Data points become nodes in the graph and the edges reflect the degree of similarity between pairs of nodes, which is calculated using a similarity measure. Since the method is semi-automated, a small subset of the nodes has an initial label. These initial labels together with the information contained in the graph structure are used by the algorithm to predict a label for all the unlabeled nodes. The process of propagating labels through a graph to predict a label for the unlabeled data points is a well-known semi-supervised learning strategy called label propagation. More formally, the problem setup can be described as follows: "Given a multimodal set of data points (pixels), of which a small subset has been labeled by an ice analyst, predict a label for each previously unlabeled data point".
Figure 1 illustrates the initial state of the graph before label propagation. Colours are used to indicate the different classes a node (pixel) can belong to. Part of the nodes are unlabeled (white), and the goal is to propagate the initial labels throughout the graph to label all unlabeled nodes.
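As a minimal illustration of the label propagation principle described above (and not the exact heterophily-aware method of [1]), the following Python sketch propagates a small set of analyst labels through a k-nearest-neighbour similarity graph built from multimodal pixel features. Feature values, class count and graph size are placeholder assumptions.

```python
# Minimal sketch of graph-based label propagation for sea-ice pixels.
# This is NOT the method of [1]; it only illustrates the general semi-supervised
# principle using scikit-learn's built-in implementation. Feature values,
# class names and the k-NN graph size are hypothetical.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Stacked multimodal features per pixel, e.g. [C-band HH, C-band HV, L-band HH, L-band HV]
X = np.random.rand(5000, 4)            # placeholder for co-registered backscatter features
y = np.full(5000, -1)                  # -1 marks unlabeled pixels
y[:50] = np.random.randint(0, 3, 50)   # a small subset labeled by an ice analyst (3 ice classes)

# Build a k-NN similarity graph and propagate the analyst labels through it
model = LabelSpreading(kernel="knn", n_neighbors=10, alpha=0.2)
model.fit(X, y)
predicted = model.transduction_        # a label for every pixel, including the unlabeled ones
```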
Experiments are conducted on multiple sea ice data sets, originating from a range of different imaging sensors. First experiments on overlapping C- and L-band images for the classification of sea ice types show promising results.
We further investigate: 1) single sensor versus multi-sensor classification performance, 2) classification performance for different sensor combinations, and 3) robustness of the method regarding changing/varying ice class properties.
With this study we demonstrate the potential of semi-automated classification for sea ice types using multimodal data. The method contributes to the effort of optimally exploiting the complementary information present in multimodal data, which is crucial regarding the upcoming ALOS-4, NISAR and ROSE-L missions.
[1] C. Taelman, S. Chlaily, E. Khachatrian, F. van der Sommen, A. Marinoni. "On the Exploitation of Heterophily in Graph-based Multimodal Remote Sensing Data Analysis". 30th ACM International Conference on Information and Knowledge Management (CIKM): Workshop on Complex Data Challenges in Earth Observation, 2021. Accepted for publication.
[2] Dierking, W. Technical Assistance for the Deployment of Airborne SAR and Geophysical Measurements during the ICESAR 2007; Final Report—Part 2: Sea Ice; ESA-ESTEC: Noordwijk, The Netherlands, 2008.
Sea ice altimetry missions have been measuring sea ice freeboard and thickness in ice-covered regions for nearly 30 years (1993-present). Evaluation of the satellite-derived sea ice freeboards, thicknesses and related snow depth products is a crucial step towards better understanding the quality of these products and supporting their improvement. However, reference measurements are still sparsely distributed in the Arctic, and even sparser in the Antarctic due to the harsh environment and costly access.
Here we present an overview of available reference measurements from multiple sources (airborne campaigns, ship and submarine cruises, moorings as well as in situ) for the evaluation of satellite altimetry derived sea ice freeboard and thickness products. The presented evaluation data set (the Round Robin Data Package (RRDP)) is collected and prepared as part of the ESA CCI Sea Ice projects to assist the evaluation and algorithm development of the dedicated satellite derived CCI Sea Ice freeboard and thickness product (1993-present).
In this presentation we will provide an overview of the temporal and spatial coverage of the reference measurements included in the CCI RRDP, from different sources throughout the altimetry era (1993-present) for both hemispheres. We describe the steps taken from the raw reference measurement to a form in which it can be compared directly to the satellite-derived sea ice freeboard and/or thickness. This includes methods for comparing the measurand of the respective reference measurement (e.g. sea ice draft from upward-looking sonars mounted on moorings, or total thickness from airborne EM soundings) to the satellite-derived sea ice freeboard and thickness, sampling strategies for upscaling the reference measurements from local and regional to satellite scales, together with uncertainty estimates of the final reference measurements provided in the CCI RRDP. Through examples of collocated reference measurements and satellite-derived data we will further address advantages and limitations with respect to the different reference measurements. Based on this study, we will conclude the presentation by addressing the need for dedicated time series (Fiducial Reference Measurements) to support cross-evaluation between different altimetry missions, in support of consistent long-term climate records, and to assist the future Copernicus expansion mission CRISTAL.
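Comparing measurands such as draft or total thickness with altimeter freeboard typically relies on hydrostatic equilibrium. The sketch below shows the generic relations under that assumption; the density values are nominal placeholders and not the values adopted in the CCI RRDP.

```python
# Sketch of the hydrostatic-equilibrium relations often used to compare
# reference measurands (draft, total thickness) with altimeter freeboard.
# Density values are nominal assumptions, not the values used in the CCI RRDP.
RHO_WATER = 1024.0  # kg/m^3
RHO_ICE = 917.0     # kg/m^3 (first-year ice; multiyear ice is typically lower)
RHO_SNOW = 300.0    # kg/m^3

def thickness_from_ice_freeboard(freeboard_m, snow_depth_m):
    """Sea-ice thickness from ice freeboard and snow depth (hydrostatic equilibrium)."""
    return (RHO_WATER * freeboard_m + RHO_SNOW * snow_depth_m) / (RHO_WATER - RHO_ICE)

def draft_from_thickness(thickness_m, freeboard_m):
    """Ice draft (as measured by an upward-looking sonar) = thickness minus freeboard."""
    return thickness_m - freeboard_m

# Example: a 0.15 m ice freeboard with 0.20 m of snow
t = thickness_from_ice_freeboard(0.15, 0.20)
d = draft_from_thickness(t, 0.15)
```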
In September 2019 the largest year-round ship expedition to date, the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC), started its drift to investigate the Arctic, an epicentre of global warming. The aim was to observe key parameters and gain fundamental insights to better understand climate change. During MOSAiC, many different measurements and observations were made, including helicopter-borne thermal data and laser scanner observations, which can capture the thermal signatures and topography of the underlying sea ice cover.
From the thermal images acquired during MOSAiC, one can easily distinguish sea ice leads – cracks in the sea ice created by divergent motion – but other features, such as sea ice ridges (caused by convergent motion of the sea ice), were to some extent also observed. The aim of this study is to investigate how thermal signatures correlate with laser topography data acquired either by the same platform or from a satellite. The purpose is to investigate how well the thermal signatures correspond to the topography data, and especially how specific sea ice features such as leads and ridges behave in the thermal spectrum.
Here, we present a first comparison between helicopter-borne thermal maps and laser scanner data acquired during the MOSAiC expedition and compare them with near-coincident high-resolution satellite topography data acquired by ICESat-2. The study is two-fold: to investigate how thermal signatures correspond with elevations obtained from space, and specifically how they correspond with sea ice ridges identified from the ICESat-2 data (using the University of Maryland Ridge Detection Algorithm, UMD-RDA). In addition, the airborne laser scanner topography data will be compared with the satellite-derived elevations. Furthermore, the topography and thermal data will be compared with the ICESat-2 observations over leads to investigate how well ICESat-2 identifies the leads. A specific focus will be on the ability of ICESat-2 to identify not only specular leads but a variety of leads, and how these compare with helicopter-borne thermal and topography data.
Over Arctic sea ice and land ice, a large validation effort for altimetry mission data has been undertaken over the past two decades, with collection of airborne and ground observations of key snow and ice properties such as snow depth, ice and snow densities, ice thickness and type, and snow, firn and ice layering. In contrast, very few dedicated validation campaigns relevant for altimetry missions have been carried out over Antarctic ice. In the austral summer 2017/18 the European Space Agency funded a CryoSat validation campaign (CryoVEx) over the sea ice in the Weddell Sea, over the Ronne and George VI ice shelves, as well as over parts of the Antarctic ice sheet, with a combination of dual-frequency airborne altimetry and ground snow/firn observations carried out by a joint British/Danish team from BAS/UCL/U. of Leeds and DTU Space. These were also coordinated with the cruise of the supply vessel Shackleton hosting scientists performing sea ice measurements at selected sites en route from drifting ice stations. These in situ observations included a ground-penetrating wide-band radar (GPR) capable of operating at similar frequencies as the airborne instruments.
Here, we present a study of Antarctic sea ice using coincident and contemporaneous in situ GPR/snow stratigraphy and airborne triple-frequency altimetry (Ku-band, Ka-band and NIR laser). We use this unique collection of data to characterize the radar scattering properties of snow on Antarctic sea ice. Surface data are used to optimize the retracking of the Ku- and Ka-band altimetry for snow depth estimation. The resulting snow depth estimates from the surveyed transects are then compared to other information sources such as snow depth from passive microwave radiometry (MWR) and collections of surface observations from research and supply ship cruises – though these have limited spatial resolution and coverage in space and time.
Combining the local and regional datasets of sea ice properties is a first step towards characterizing the sea ice radar scattering properties for dual frequency altimetry and hence prepare for validation of future altimetry missions such as CRISTAL.
The objectives of the CryoSat ThEMatic PrOducts (Cryo-TEMPO) ESA project are to operationally distribute thematic CryoSat-2 products based on the state of the art for sea ice, polar oceans, land ice, coastal areas and hydrology. In this presentation we focus on the ice pack and polar oceans.
We will present, in turn, the progress already made during the first phase of the project, the planned evolutions, and the studies in progress.
The first part of the project sought to make the best use of the state of the art for each of the two themes considered here: the measurement of sea ice freeboard and thickness on the one hand, and the measurement of polar ocean Sea Level Anomaly on the other.
As the pack ice is inseparable from the ocean on which it rests, we are seeking, in the second part of the CryoTEMPO project, to unify, as far as possible, the altimetric treatments relating to the measurement of the water height. Indeed, measuring the freeboard of the pack ice necessarily involves measuring the Sea Level Anomaly.
Due to the complexity of radar waveforms over pack ice, operational retrackers are of the heuristic type. On the other hand, the measurement of sea level anomalies requires a parameter, the Sea State Bias, which can only be obtained by means of physical model-based retrackers.
In this study, we propose to compare the cross-effects of the heuristic retracker TFMRA and the physical retracker SAMOSA+ on the SLA and the FB. We will also look at the effect of applying the sea state bias and of the choice of the mean sea surface (MSS) reference between DTU15 and DTU21.
By combining these 3 parameters (the retracker, the SSB and the MSS) we have 8 variants for the SLA and the FB whose comparisons are the subject of this study. For the SLA analysis we considered the 7 months from October 2013 to April 2014. For the effect on the freeboard the 11 years of CryoSat-2 are considered. The quality criteria for the SLA are the mean level and the standard deviation (std), both of which should approach zero. For the FB, we compare it to several series of in-situ measurements including the airborne Operation Ice Bridge (OIB) and the Beaufort Gyre (BGEP) moorings. We consider here all the SAR and SARIN measurements.
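For illustration, the eight variants are simply the Cartesian product of the three binary choices. The sketch below shows that bookkeeping and the two SLA quality criteria; compute_sla is a hypothetical placeholder for the full Level-2 processing, here returning dummy values so the example is runnable.

```python
# Minimal sketch of how the 8 processing variants arise from the three binary
# choices discussed above. compute_sla is a hypothetical placeholder standing
# in for the full Level-2 processing; it returns random values here.
from itertools import product
import numpy as np

def compute_sla(retracker, apply_ssb, mss):
    """Placeholder for SLA processing with a given retracker/SSB/MSS choice."""
    return np.random.randn(10000) * 0.05  # dummy SLA values [m]

variants = list(product(["TFMRA", "SAMOSA+"], [False, True], ["DTU15", "DTU21"]))
for retracker, apply_ssb, mss in variants:          # 2 x 2 x 2 = 8 variants
    sla = compute_sla(retracker, apply_ssb, mss)
    # Quality criteria used in the study: the mean SLA and its standard
    # deviation, both of which should be as close to zero as possible.
    print(retracker, apply_ssb, mss, np.nanmean(sla), np.nanstd(sla))
```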
The Copernicus Sentinel-1 A and B satellites have greatly improved the capabilities of operational sea ice monitoring through their regular provision of free and open synthetic aperture radar (SAR) data over the polar areas. Since their launch in 2014 and 2016, respectively, they have been a prime data source used by sea ice centers around the world in their daily production of ice products. The vast amount of data they provide has also revealed the need for automatic methods for generating sea ice maps from SAR data; these maps are currently made more or less manually by sea ice analysts.
Generation of ice charts is about labelling ice types and discriminating between ice-covered surfaces and open water in SAR scenes. These tasks are often assisted by classification methods, which currently are based on classical statistical algorithms or traditional (shallow) machine learning models. In recent years, large efforts have been directed towards development and evaluation of deep learning (DL) architectures for sea ice classification. However, DL models often have many convolutional neural layers and hence a large number of weights that need to be learnt. This requires very large training data sets.
In sea ice classification, training data is scarce and unreliable with respect to type labels, and its generation is time-consuming and requires expert knowledge. Therefore, the scarceness of training data is one of the most challenging issues in the use of DL algorithms in this application. This work addresses this problem and presents a novel deep semi-supervised learning (SSL) method for sea ice classification based on Sentinel-1 EW-mode data. Semi-supervised learning models are becoming increasingly important because they have the capability of combining the data carefully labeled by analysts with a large amount of available unlabeled data in order to improve the predictive performance of DL networks. The presented architecture consists of two interacting, but separate, models, namely a Teacher model and a Student model. The Teacher model is trained on the limited labeled data, and based on manifold assumptions about the feature space of the CNN, pseudo-labels for the unlabeled data are generated using a label propagation method. We show the performance of the proposed semi-supervised approach in sea-ice classification using training data obtained from 31 Sentinel-1 images acquired north of Svalbard and discuss its benefit in other remote sensing applications.
Arctic sea ice has changed over recent decades: its extent has decreased, and the ice is thinner, younger and drifting faster than before. Within the Norwegian project "Nansen Legacy", three ship expeditions with RV Kronprins Haakon took place in the Arctic Ocean in winter, spring and late summer 2021. One of the goals of the expeditions was to better quantify the status and changes of Arctic sea ice in the northern Barents Sea (winter and spring cruises) and the central Arctic Ocean (late summer cruise). During the expeditions, observations were performed by surveys from satellites, from the air and on the sea ice surface, and snow and sea ice were sampled. For classifying sea ice into types on regional scales, we used synthetic aperture radar (SAR) satellite observations, including fully polarimetric high-resolution RADARSAT-2 and dual-polarimetric TerraSAR-X (co-polarization channels) images, as well as dual-polarimetric Sentinel-1 (HH + HV) images. In total, five RADARSAT-2 and one TerraSAR-X images were analyzed for the late summer cruise, and ten RADARSAT-2 and one TerraSAR-X images for the winter and spring cruises. Both TerraSAR-X images overlap in space and time with the RADARSAT-2 images. Airborne surveys provided ice thickness data, measured with a helicopter-towed electromagnetic instrument ("EM-bird"), and photography from both helicopter and a drone, adding more detail to the information from satellite observations. The on-ice surveys with instrument-sledge transects resulted in high-resolution floe-scale ice and snow thickness datasets, and information on snow and sea ice physical properties was obtained from snow pits and the collection of snow and ice samples. Currently ongoing ice core analysis will give insights into ice types, sea ice floe development and age, while snow sample analyses will help us to understand how the snow cover affects both thermodynamic processes and satellite remote sensing. In addition, we collected standardized ship-based ice observations along with regular photography, and we stored ship-based radar observations. This information helps us to understand transitions in ice regimes between ice stations and aids the interpretation of the SAR satellite scenes. In the research area, sea ice changes were observed to occur relatively fast, even within individual seasons, leading to different sea ice conditions on short temporal and spatial scales. Here, preliminary results from the observations and measurements from the three cruises will be presented and synergies discussed, along with an outlook on the next steps in data analysis and integration.
In contrast to land ice, the Arctic sea ice is highly variable due to the drift and the seasonal freeze-melt cycle, causing significant seasonal and interannual changes. The near surface air temperature is the main controlling factor for thermodynamic growth in the Arctic, while wind is the main driver for dynamic growth.
The sea ice thickness distribution is a result of the interaction between dynamic and thermodynamic processes. Divergent motion of sea ice stimulates thermodynamic ice growth, while convergence retards thermodynamic growth. Contrarily, thinner/thicker sea ice tends to be more/less mobile.
For about a decade, satellites have allowed sea-ice thickness to be measured routinely from space, mainly using radar altimetry. Satellite altimeters, such as the one onboard ESA's CryoSat-2, sense the height of the ice surface above the sea level, which can be converted into sea-ice thickness. But the relative uncertainties associated with this method are large over thin-ice regimes. In contrast, ESA's SMOS satellite radiometer measurements have proven capable of measuring thin-ice thickness and are therefore suitable to monitor the beginning of ice growth.
The complementary nature of Arctic sea-ice thickness data records from ESA's Cryosat-2 radar altimeter and SMOS radiometer qualify the merged CryoSat-2/SMOS (CS2SMOS) product to be used to evaluate sea ice growth across the entire Arctic. However, for the interpretation of observed changes in sea ice growth in the context of changing atmospheric parameters, it is crucial to discriminate between dynamic and thermodynamic ice growth.
Here we present a synergistic approach to estimate dynamic and thermodynamic winter sea ice growth across the entire Arctic, using a set of state-of-the-art satellite products such as the CryoSat-2/SMOS (CS2SMOS) sea ice thickness data, together with satellite ice concentration and drift products derived from passive microwave and scatterometer data. We report on recent changes of ice growth in Arctic sub-regions, from the marginal seas such as the Barents Sea up to the North Pole and the Last Ice Area. We will evaluate our thermodynamic and dynamic growth products using in-situ measurements, such as buoys, upward-looking sonars and data obtained during the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition.
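As an illustration of the kind of decomposition involved (and not the authors' actual implementation), a simple volume budget splits the observed thickness change into an advective (dynamic) term computed from drift and a residual (thermodynamic) term. Grid spacing, time step and all input fields in the sketch are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' implementation) of splitting the change
# in effective ice thickness into dynamic and thermodynamic contributions via a
# simple volume budget: dh/dt = thermodynamic - div(u * h).
import numpy as np

def growth_decomposition(h0, h1, u, v, dx=25e3, dt=86400.0 * 7):
    """h0, h1: effective thickness [m] at two times; u, v: drift [m/s] on the same grid."""
    dhdt = (h1 - h0) / dt                            # total observed change
    hu, hv = h0 * u, h0 * v                          # volume fluxes
    div = np.gradient(hu, dx, axis=1) + np.gradient(hv, dx, axis=0)
    dynamic = -div                                   # change due to advection/convergence
    thermodynamic = dhdt - dynamic                   # residual attributed to freezing/melt
    return dynamic, thermodynamic

# Example with random placeholder fields
h0 = np.random.rand(100, 100); h1 = h0 + 0.05
u = np.random.randn(100, 100) * 0.05; v = np.random.randn(100, 100) * 0.05
dyn, thermo = growth_decomposition(h0, h1, u, v)
```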
Warm air intrusions can significantly alter sea ice concentration products derived from satellite-based microwave radiometry.
During the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition, two warm moist air intrusions reached the research vessel in mid April 2020 and marked the start of spring after a long period of stable winter conditions. The events resulted in an underestimation of retrieved sea ice concentration from satellite measurements by up to 30% for algorithms based on polarization differences of the microwave brightness temperatures.
We show that this underestimation cannot be explained by atmospheric effects alone.
The extensive supporting in-situ snow and ice measurements from MOSAiC offer the unique possibility to have simultaneous ground-based measurements of surface and atmosphere complementing the microwave radiometer measurements from satellite and from the ground.
Combining the ground-based radiometer observations at frequencies of 6.9 GHz, 10.7 GHz, 18.7 GHz and 89 GHz with snow and ice measurements, including micro-computed tomography data, snow pit measurements and terrestrial laser scan data, we model the brightness temperatures of the surface. We show that the drop in the retrieved sea ice concentration can be attributed to large-scale surface glazing during the warming event. This glazing led to the formation of a thin ice crust at the top of the snowpack after the event, which has a strong impact on sea ice concentration retrievals based on the polarization difference at 36.5 GHz or 89 GHz. However, the low-frequency channel at 6.9 GHz turns out to be more robust with regard to these surface changes. Upcoming satellite missions like the Copernicus Imaging Microwave Radiometer (CIMR) will provide measurements at 6.9 GHz at a much higher spatial resolution (11 km by 19 km) than current satellite sensors. A multi-frequency approach including that channel is a promising candidate for future retrievals of sea ice concentration.
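To illustrate why a change in polarization difference maps into a change in retrieved concentration, the schematic tie-point formulation below can be used; it is a generic textbook construction with hypothetical tie-point values, not any of the operational algorithms discussed above.

```python
# Illustrative sketch of a simple polarization-difference (PD) sea ice
# concentration estimate using tie points. The tie-point values below are
# hypothetical placeholders, not those of any operational algorithm.
import numpy as np

def sic_from_polarization_difference(tb_v, tb_h, pd_water=60.0, pd_ice=8.0):
    """SIC from the V-H brightness temperature difference at one frequency [K]."""
    pd = tb_v - tb_h
    sic = (pd_water - pd) / (pd_water - pd_ice)
    return np.clip(sic, 0.0, 1.0)

# A glazing-induced ice crust that raises the polarization difference of the
# snowpack lowers the retrieved SIC, consistent with the underestimation above.
print(sic_from_polarization_difference(tb_v=230.0, tb_h=210.0))
```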
The Chinese-French Oceanography Satellite (CFOSAT) was launched on October 29, 2018 (00:43 UTC). The goal of this joint mission is to monitor ocean surface winds and waves to provide combined data for ocean and atmospheric applications. Sea ice monitoring is one of these applications. The wind scatterometer onboard CFOSAT, referred to as CSCAT, is the first Ku-band rotating fan-beam scatterometer with dual-polarization capability. This unique architecture leads to a dynamic geometry distribution of the backscatter measurements across the swath (Li et al. 2021), which motivates us to adapt the Bayesian sea ice detection method (Belmonte Rivas and Stoffelen 2011) to CSCAT backscatters. The ice model is adapted to be incidence-angle dependent, and the probability distribution of the backscatters given ice differs between the outer swath and the sweet/nadir swath.
1. Algorithm description
Scatterometers are active microwave sensors primarily designed for determining ocean surface wind vectors. They are also used for the detection of sea ice (Belmonte Rivas and Stoffelen 2011; Belmonte Rivas et al. 2012; Otosaka, Rivas, and Stoffelen 2018). The CSCAT sea ice detection algorithm proposed here is a modified version of an existing algorithm developed for rotating pencil-beam QuikSCAT.
The Bayesian posterior sea ice probability is formulated as:
p(ice | σ°) = p(σ° | ice) p0(ice) / [ p(σ° | ice) p0(ice) + p(σ° | wind) p0(wind) ]   (1)
The conditional probability functions are expressed using Maximum Likelihood Estimations (MLEs) as normalized measures of distance from the observed backscatter to the ocean wind and sea ice GMFs:
p(σ° | wind) = p(MLE_wind)   (2)
p(σ° | ice) = p(MLE_ice)   (3)
The probability distribution of ocean backscatter p(σ° | wind), as described in (2), is derived from MLE_wind, which is defined as the squared distance of the measurements to the ocean GMF divided by the expected noise variance (Stoffelen and Portabella 2006). The same applies to p(σ° | ice).
MLE_wind = Σ_{i=1..N} [σ°_i - σ°_(wind,i)]^2 / var[σ°_(wind,i)]   (4)
MLE_ice = Σ_{i=1..N} [σ°_i - σ°_(ice,i)]^2 / var[σ°_(ice,i)]   (5)
where i is the index of the backscatter measurements within one WVC and N is the total number of backscatters. A detailed explanation of the above equations can be found in Belmonte Rivas and Stoffelen (2011).
1.1 Probability distribution of p(σ°│wind)
MLE_wind is expressed as a sum of N squared normally distributed random variables. Its pdf is illustrated in Figure 1 (blue line). It can be modeled as an inverse gamma distribution (Figure 1 orange line):
p(σ° | wind) = x^(-α-1) exp(-1/x) / (Γ(α) · scale)   (6)
where x = (MLE_wind - loc)/scale, with loc = -0.22, α = 4.44, scale = 4.81.
1.2 Probability distribution of p(σ°│ice)
The actual distribution of the sea ice backscatters in CSCAT measurement space is needed to estimate the ice model and its error variance. To select relatively pure ice backscatters from CSCAT, the following constraints are applied: 1) January to March, Arctic (latitude > 70°N); 2) SST < -1 °C.
Since the backscatter from a sea ice surface is azimuth invariant, it conforms to a one-dimensional straight-line model with one independent variable, the sea ice brightness (a proxy for sea ice age). The rotating fan-beam architecture leads to a variety of incidence and azimuth angles associated with the backscatters. Since the sea ice model is azimuth invariant, we focus on the influence of the varying incidence angle. The straight-line ice model is defined as σ°_(V,ice) = σ°_(H,ice) × slope + offset. Figure 2 gives the daily slope as a function of incidence angle in the Arctic. It shows not only a seasonal dependence but also an incidence-angle dependence, with a maximum difference of 0.4. To ensure the uniformity of the linear Ku-band sea ice GMF, the winter period from January to March is used to establish the ice model. Figure 3 gives an illustration of the sea ice model as a function of incidence angle, in which the variation of the model can be seen clearly. The distribution of the sea ice backscatter distances to the linear ice model is Gaussian, with a standard deviation that varies with incidence angle; backscatters at low and high incidence angles have a higher standard deviation (Figure 4). Following the characteristics of the dynamic geometry across the swath (Li et al. 2021), the outer swath contains only high incidence angles, the sweet swath contains more diverse incidence angles, and the nadir swath contains the full range of incidence angles. Moreover, the number of VV and HH backscatter pairs also differs. These features lead to a quite different MLE_ice pdf across the swath, especially for the outer swath. The MLE_ice pdf is therefore estimated for two groups: the outer swath on the one hand, and the sweet/nadir swath on the other. As shown in Figure 5, p(σ° | ice) is modeled as a chi-square distribution with different parameters for the outer swath and the sweet/nadir swath:
p(σ° | ice) = x^(k/2-1) exp(-x/2) / (2^(k/2) Γ(k/2))   (7)
with x = MLE_ice - loc. For the outer swath, k = 3.35, loc = -0.1; for the sweet/nadir swath, k = 1.5, loc = -0.2.
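Putting Eqs. (1), (4)-(7) together, a minimal Python sketch of the posterior ice probability for one wind vector cell could look as follows. The distribution parameters are those quoted above; the GMF values, noise variances, priors and example backscatters are placeholders.

```python
# Sketch of the Bayesian posterior sea-ice probability of Eq. (1), using the
# MLE definitions of Eqs. (4)-(5) and the fitted pdfs of Eqs. (6)-(7).
# GMF values, noise variances and priors are placeholders; the distribution
# parameters are those quoted in the text.
import numpy as np
from scipy.stats import invgamma, chi2

def mle(sigma0, sigma0_gmf, var_gmf):
    """Normalized squared distance of the backscatters to a GMF, Eqs. (4)-(5)."""
    return np.sum((sigma0 - sigma0_gmf) ** 2 / var_gmf)

def ice_posterior(mle_wind, mle_ice, swath="sweet_nadir", p0_ice=0.5, p0_wind=0.5):
    # Eq. (6): inverse-gamma model of p(sigma0 | wind)
    p_wind = invgamma.pdf(mle_wind, a=4.44, loc=-0.22, scale=4.81)
    # Eq. (7): chi-square model of p(sigma0 | ice), swath-dependent parameters
    k, loc = (3.35, -0.1) if swath == "outer" else (1.5, -0.2)
    p_ice = chi2.pdf(mle_ice, df=k, loc=loc)
    # Eq. (1): Bayesian posterior probability of sea ice
    return p_ice * p0_ice / (p_ice * p0_ice + p_wind * p0_wind)

# Example with placeholder backscatter values for one wind vector cell (WVC)
sigma0 = np.array([-14.0, -13.5, -20.0])
mle_w = mle(sigma0, sigma0_gmf=np.array([-11.0, -11.5, -18.0]), var_gmf=1.0)
mle_i = mle(sigma0, sigma0_gmf=np.array([-13.8, -13.6, -19.8]), var_gmf=1.0)
print(ice_posterior(mle_w, mle_i, swath="sweet_nadir"))
```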
The algorithm described here has been implemented and compared with the ASCAT sea ice results, showing good agreement. Further validation against passive microwave instruments will be shown at the symposium.
2. Conclusion
The unique feature of CSCAT is its rotating fan-beam, which leads to a very diverse observation geometry across the swath, a condition that the current Bayesian sea ice models for pencil-beam and fixed fan-beam scatterometers cannot accommodate. An adapted ice model has been developed: it is incidence-angle dependent and also takes the diverse geometry across the swath into account. Two probability distributions of the backscatters given ice are developed, for the outer swath and the sweet/nadir swath separately, because the incidence angle distribution and the number of VV and HH backscatter pairs vary across the swath. The result has been validated against ASCAT data with a satisfying outcome. Further validation against passive microwave instruments will also be presented at the symposium.
Multiyear ice (MYI) cover in the Arctic has been monitored for decades using increasingly sophisticated remote sensing techniques, and these have shown a significant decline over time. However, such techniques are unable to differentiate between the processes affecting the evolution of the MYI. Further, estimating the thickness of MYI remains challenging, meaning that the links between area and volume loss of MYI are still not clear. We employ the neXtSIM sea-ice model, coupled to the ocean component of NEMO, to investigate the changes to MYI over the period 2000-2018. We exploit the Lagrangian framework of the sea ice model to introduce a new method of tracking MYI area and volume, which is based on resetting MYI during freeze onset each autumn. By using a physical property to reset MYI at the end of summer, we aim to capture the point at which MYI has undergone the physical changes during the summer that differentiate it from first-year ice. It also allows for spatial and interannual variations in refreezing, which, if not considered, can lead to an overestimation of MYI. The model is found to successfully reproduce the spatial distribution and evolution of observed MYI extent from the OSISAF ice type product. We discuss the balance of the processes - melt, ice convergence/export, and replenishment - that are linked to the general decline in MYI cover over time, and find that melt is the biggest cause of area and volume loss on an annual basis and is the primary driver of interannual variability. We also investigate which processes were key to the significant observed declines in 2007 and 2012. We find that sea ice dynamics play a vital role in both the seasonal and interannual changes in the MYI cover, particularly in 2007. Notably, we illustrate that convergence of the ice can result in large reductions of MYI area without a corresponding loss of MYI volume. This means that neither the volume, nor the processes linked to its evolution, can easily be obtained from the MYI area. This highlights the benefits of using models alongside satellite observations to aid interpretation of the observed MYI evolution in the Arctic.
Sea ice is a crucial parameter of the Earth climate system. Its high albedo compared to water influences the oceans’ radiation budget significantly. The importance of monitoring arises from the high variability of sea-ice state and amount induced by seasonal change and global warming. GNSS reflectometry can contribute to global monitoring of sea ice with high potential to extend the spatio-temporal coverage of today’s observation techniques. Properties like ice salinity, temperature and thickness can affect the signal reflection. The MOSAiC expedition (Multidisciplinary drifting Observatory for the Study of Arctic Climate) gave us the opportunity to conduct reflectometry measurements under different sea-ice conditions in the central Arctic. A dedicated setup was mounted, in close cooperation with the Alfred-Wegener-Institute (AWI), on the German research icebreaker Polarstern that drifted for one year with the Arctic sea ice.
We present results from data recorded between autumn 2019 and spring 2020. The ship drifted in this period from the Siberian sector of the Arctic (October 2019), across the central Arctic (November 2019 until May 2020), towards Svalbard (reached in June 2020). Profiles of sea-ice reflectivity over elevation angle (range: 1° to 45°) are derived with daily resolution, considering reflection data recorded at left-handed (LH) and right-handed (RH) circular polarization. The corresponding predictions of reflectivity are based on reflection models of bulk sea ice or of a sea-ice slab. The latter allows the effect of signal penetration down to the underlying water to be included. Comparison between LH profiles and the bulk model confirms a reflectivity decrease (of about 10 dB) when surrounding open water areas vanish and the ship drifts in compact sea ice.
Results from the autumn data (until mid-December 2019) have already been published [1,2] and comprise the following. First, estimates of sea-ice permittivity are obtained from mid-elevation-range reflectivity (10° to 30°). The median estimated permittivity of 2.4 (for the period of compact sea ice) lies in the range expected for the reported old ice type (mostly second-year ice). Second, anomalies in the low-elevation range of the retrieved reflectivity (1° to 10°) give a strong indication of signal penetration into the dominating second-year ice, with an influence of sea ice temperature and thickness. We conclude that sea-ice characterization can in future profit from GNSS reflectometry observations. The ongoing study is currently being extended to the further evolution of Arctic sea ice during the winter and spring periods of the MOSAiC expedition.
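For orientation, the bulk (half-space) type of reflection model mentioned above can be sketched with standard Fresnel coefficients combined into the cross-polarized reflectivity seen by a LH antenna. This is a generic textbook construction under simplifying assumptions (smooth surface, no slab/penetration effects), not the exact retrieval model used in the study; the example permittivity simply reuses the median value of 2.4 quoted above.

```python
# Minimal sketch of a bulk (half-space) reflection model often used in GNSS
# reflectometry: Fresnel coefficients for a smooth sea-ice surface of complex
# permittivity eps, combined into the cross-polarized (RHCP -> LHCP)
# reflectivity. Roughness and slab (penetration) effects are neglected.
import numpy as np

def lh_reflectivity_db(elevation_deg, eps):
    """Cross-polarized reflectivity [dB] vs. satellite elevation angle."""
    theta = np.deg2rad(elevation_deg)
    root = np.sqrt(eps - np.cos(theta) ** 2)
    r_vv = (eps * np.sin(theta) - root) / (eps * np.sin(theta) + root)
    r_hh = (np.sin(theta) - root) / (np.sin(theta) + root)
    r_cross = 0.5 * (r_vv - r_hh)       # RHCP incident, LHCP reflected
    return 10.0 * np.log10(np.abs(r_cross) ** 2)

# Example: reflectivity profile over the 1..45 degree elevation range for a
# permittivity typical of old ice (the median value of 2.4 quoted above).
elev = np.arange(1.0, 46.0)
profile = lh_reflectivity_db(elev, eps=2.4 + 0.1j)
```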
[1] Semmling, A. M., J. Wickert, F. Kreß, M. M. Hoque, D. V. Divine, S. Gerland, and G. Spreen (2021). “Sea-ice permittivity derived from GNSS reflection profiles: Results of the MOSAiC expedition”. IEEE Trans. Geosci. Rem. Sens. doi: 10.1109/TGRS.2021.3121993.
[2] Semmling, M., J. Wickert, S. Magnussen, T. Gerber, G. Spreen, L. Kaleschke, R. Ricker, and A. Tavri (2021). “GNSS signal power data for reflectometry recorded during the MOSAiC Expedition (leg 1)”. GFZ Data Services. https://doi.org/10.5880/GFZ.1.1.2021.002.
Sentinel-3 is an Earth observation satellite series developed by the European Space Agency as part of the Copernicus Programme. It currently consists of 2 satellites: Sentinel-3A and Sentinel-3B, launched respectively on 16 February 2016 and 25 April 2018. Among the on-board instruments, the satellites carry a radar altimeter to provide operational topography measurements of the Earth’s surface. Over sea-ice, the main objective of the Sentinel-3 constellation is to provide accurate measurements of the sea-ice sea surface height and the sea-ice radar freeboard. Compared to previous missions embarking conventional pulse limited altimeters, Sentinel-3 is measuring the surface topography with an enhanced spatial resolution, thanks to the on-board SAR Radar ALtimeter (SRAL), exploiting the delay-Doppler capabilities.
To further improve the performance of the Sentinel-3 Altimetry LAND products, ESA is developing dedicated and specialized delay-Doppler and Level-2 processing chains over (1) Inland Waters, (2) Sea Ice, and (3) Land Ice areas. These so-called Thematic Instrument Processing Facilities (T-IPF) are currently under development, with an intended deployment by mid-2022. Over sea ice the T-IPF will include new algorithms, in particular the Hamming window and zero-padding processing. Thanks to the Hamming window, the waveforms measured over specular surfaces are cleaned from spurious energy spread by the azimuth impulse response. The zero-padding provides a better sampling of the radar waveforms, which is notably valuable in the case of specular energy returns.
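Purely as an illustration of the two concepts (and not of the actual T-IPF processing chain), the sketch below applies a Hamming weighting before an azimuth FFT and a zero-padded range FFT to a synthetic burst; all array sizes are arbitrary placeholders.

```python
# Illustrative sketch of the two concepts mentioned above: weighting a burst
# with a Hamming window before the azimuth FFT to suppress sidelobes, and
# zero-padding a waveform FFT to refine the sampling of peaky, specular echoes.
# This is not the T-IPF implementation; sizes are arbitrary placeholders.
import numpy as np

burst = np.random.randn(64, 128) + 1j * np.random.randn(64, 128)  # pulses x range gates

# Hamming weighting along the azimuth (pulse) dimension before beam forming
weighted = burst * np.hamming(burst.shape[0])[:, np.newaxis]
beams = np.fft.fft(weighted, axis=0)

# Zero-padding (factor 2) of the range FFT doubles the number of samples
# describing each waveform
waveform_oversampled = np.fft.fft(beams, n=2 * burst.shape[1], axis=1)
```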
To ensure the mission requirements are met, ESA has set up the S3 Land STM Mission Performance Cluster (MPC), a consortium in charge of the qualification and monitoring of the instrument and core product performances. In this poster, the Expert Support Laboratories (ESL) of the MPC present a first performance assessment of the T-IPF Level-2 products over sea ice. In particular, an exhaustive inter-comparison with CryoSat-2 is performed, showing that the two missions provide similar performances in the estimated freeboard.
The quality step-up provided by the sea-ice thematic products, and highlighted in this poster, is a major milestone. Once the dedicated processing chain is in place for the sea ice acquisitions, the Sentinel-3 STM level-2 products will evolve and improve more efficiently over time, to continuously satisfy new requirements from the Copernicus Services and the sea-ice user community.
Sea ice can roughly be divided into seasonal (or first-year) ice that has formed since last summer, and multiyear ice that has survived at least one summer melt. Due to brine rejection, multiyear ice contains more air bubbles and much less salt than first-year ice, which makes older ice more rigid and solid. In addition to changing the physical properties, these differences lead to distinct emissivity and backscatter signatures that allow classification by satellite remote sensing. More than four decades of satellite monitoring of the Arctic show that the sea ice has declined significantly in extent and is rapidly becoming younger, moving towards a more seasonal ice cover.
The transition towards a more seasonal ice cover, with close to ice-free conditions during summer, is seen to play a crucial role in destabilizing the upper ocean stratification of the Arctic shelf seas: the loss of sea ice removes the insulator between ocean and atmosphere, reduces the freshwater input, and the protection of the Arctic cold halocline is lost. With a weakened upper halocline, we expect more heat flux from the Atlantic Water towards the surface, which will prevent or postpone sea ice from forming in autumn. To better understand these comprehensive changes of the ocean it is necessary to have a thorough understanding of the seasonal to multidecadal trends and variability of the sea-ice type conditions.
The EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF) has been monitoring Arctic sea ice type on a daily basis since 2005, and since 2021 a classification of Southern Hemisphere ice type has been implemented as well. The computations use both passive microwave radiometer (SSMIS, AMSR-2) and scatterometer (ASCAT) data, providing the ice type product on a 25x25 km grid spacing. Within the EU Copernicus Climate Change Service (C3S), the long series of subsequent passive microwave radiometers (SMMR, SSM/I, SSMIS) going back to 1978 is utilized to compute a climate-consistent data record of Arctic sea-ice type.
In this contribution, we present the new Southern Hemisphere sea-ice type product, which can be relevant for the Copernicus expansion mission Copernicus Imaging Microwave Radiometer (CIMR). We also introduce the 40+ years of Arctic sea-ice type record from C3S. The general algorithm baseline is described and validation against other sea-ice classification products is shown.
Surface roughness is a crucial parameter in climate and oceanographic studies: it constrains momentum transfer between the atmosphere and ocean, provides preconditioning for summer melt pond extent, and is closely related to ice age and thickness. At a local scale, roughness in the form of ridges, hummocks and rafted ice can slow down and hinder safe transport on the ice, as well as be a hazard for ice-strengthened vessels and structures. High-resolution roughness estimates from airborne laser measurements are limited in spatial and temporal coverage, while pan-Arctic satellite roughness estimates have remained elusive and do not extend over multi-decadal time scales. The MISR (Multi-angle Imaging SpectroRadiometer) instrument has acquired optical imagery from nine near-simultaneous camera view zenith angles, sampling specular anisotropy, since 1999. Extending previous work to model sea ice surface roughness from MISR angular reflectance signatures, a training dataset of cloud-free pixels and coincident roughness from Operation IceBridge (OIB) airborne laser data is generated. Surface roughness, defined as the standard deviation of the within-pixel lidar elevations to a best-fit plane, is modelled using several techniques, and Support Vector Regression with a Radial Basis Function kernel is selected. Hyperparameters are tuned using grid optimisation, and model performance is assessed using blocked k-fold cross-validation. We present a derived sea ice roughness product at 1.1 km resolution over the OIB period of operation (2000-2019) and a corresponding time series analysis. To demonstrate the validity of the derived product, we first evaluate our roughness product against independent LiDAR characterisations of surface roughness consistent with our training data. We also evaluate our derived roughness product against known proxies of surface roughness on a pan-Arctic basis (e.g. the AWI CS2-SMOS sea ice thickness). Both our instantaneous swaths and pan-Arctic monthly mosaics show considerable capacity in detecting newly formed smooth ice from polynyas, and detailed surface features such as ridges and leads.
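A hedged sketch of the regression setup described above is shown below: SVR with an RBF kernel, a grid search over hyperparameters, and blocked cross-validation implemented here as group-wise folds. The feature layout, parameter grid and grouping variable are illustrative assumptions, not the exact configuration used in the study.

```python
# Sketch of roughness regression from MISR angular reflectances: Support Vector
# Regression (RBF kernel), grid-searched hyperparameters, and blocked k-fold
# cross-validation (grouped, e.g. by flight or MISR block, to limit leakage).
# Feature layout and parameter grid are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, GroupKFold

X = np.random.rand(2000, 9)              # placeholder: reflectances from 9 MISR camera views
y = np.random.rand(2000) * 0.5           # placeholder: within-pixel lidar roughness [m]
groups = np.random.randint(0, 20, 2000)  # block identifier for blocked cross-validation

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
param_grid = {"svr__C": [1, 10, 100],
              "svr__gamma": ["scale", 0.1, 1.0],
              "svr__epsilon": [0.01, 0.05]}
search = GridSearchCV(pipe, param_grid, cv=GroupKFold(n_splits=5),
                      scoring="neg_mean_absolute_error")
search.fit(X, y, groups=groups)
roughness_model = search.best_estimator_
```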
COMPARISONS OF SEA ICE EXTRACTION BETWEEN GF-3 QUAD-POLARIZATION AND COMPACT-POLARIZATION SAR IMAGES
Kun Yang1 Haiyan Li1,2* William Perrie3
1. Key Laboratory of Computational Geodynamics, Chinese Academy of Sciences/University of Chinese Academy of Sciences, Beijing, China. *lihaiyan@ucas.ac.cn, yangkun20@mails.ucas.ac.cn
2. Institute of Oceanology, Chinese Academy of Sciences, Qingdao, China.
3. Fisheries and Oceans Canada, Bedford Institute of Oceanography, Dartmouth, Nova Scotia, Canada. William.Perrie@dfo-mpo.gc.ca
Sea ice is one of the most important indicators of global climate change and has a profound influence on the marine environment and on society [1]. In the context of global warming, classification of sea ice and mapping of ice conditions are becoming increasingly important. Currently, remote sensing data is the main source of information on large-scale sea ice conditions; in fact, it is the only source of such data. Among the numerous remote sensing data types, synthetic aperture radar (SAR) observations have become the most important for monitoring sea ice, due to their all-weather, day-and-night, high-resolution capabilities.
SAR observations are sensitive to sea ice surface roughness, volume structure, dielectric characteristics, etc., making them ideal for monitoring sea ice. To date, the development of SAR has progressed through three phases: single-polarization SARs (single-pol: VV, HH or HV), quad-polarimetric SARs (quad-pol: HH, HV, VH and VV) and Compact Polarimetry (CP) SARs. Among the single-pol channels, HH and HV are considered the most suitable for sea ice classification [2], but only intensity information is obtained from single-pol observations. Compared with single-channel data, multiple-channel data contain more observational information, which can improve sea ice extraction. However, observations from multiple channels place higher requirements on the antenna technology and power consumption of the SAR system. These factors limit the application of the quad-pol SAR system because of its high cost, complex data processing and smaller coverage [3]. Therefore, the π/4 compact-polarization SAR was proposed by Souyris et al. in 2005 [4]. The concept of compact polarization was further developed by Raney [5], who proposed circular-polarization transmission with linear coherent dual-pol reception, denoted the Hybrid Polarimetric (HP) mode of compact polarization. It has been implemented on the RADARSAT Constellation Mission (RCM, launched in June 2019, https://www.nrcan.gc.ca/radarsat-constellation-mission/21831), ALOS-2 (launched in May 2014, https://global.jaxa.jp/projects/sat/alos2/) and RISAT-1 (launched in April 2012, https://www.isro.gov.in/Spacecraft/risat-1/). The new SAR system essentially functions as a dual-pol SAR, but it can obtain similar information as a quad-pol SAR [6] while offering a wide swath coverage of 350 km. The HP compact-polarization SAR has been proposed as a new system because of its wide coverage and lower complexity in system design and maintenance. Therefore, it is important to study the potential application of HP compact-polarization SAR in sea ice extraction.
In this study, we compare the sea ice extraction results from quad-pol SAR with those from compact-polarization SAR, using GF-3 observations and the k-means algorithm. The results show that the classification results of compact polarization are close to those from quad-pol. This indicates that compact polarization has good potential for applications in sea ice monitoring.
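As a minimal illustration of the comparison (not the exact feature set or preprocessing used in the study), k-means can be applied separately to per-pixel feature vectors derived from the quad-pol channels and from the compact-pol data, and the resulting label maps compared; all feature constructions below are generic placeholders.

```python
# Minimal sketch of the unsupervised comparison described above: k-means
# clustering applied to per-pixel feature vectors from quad-pol and from
# compact-pol GF-3 data. Feature construction is a generic placeholder.
import numpy as np
from sklearn.cluster import KMeans

def classify(features, n_classes=3, seed=0):
    """features: (n_pixels, n_features) array, e.g. channel intensities in dB."""
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
    return km.fit_predict(features)

quad_pol_features = np.random.rand(10000, 4)      # e.g. |HH|, |HV|, |VH|, |VV|
compact_pol_features = np.random.rand(10000, 3)   # e.g. Stokes-vector derived parameters

labels_quad = classify(quad_pol_features)
labels_cp = classify(compact_pol_features)
# Agreement between the two label maps can then be quantified class by class.
```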
References:
[1] Barbat, Mauro M., Rackow, Thomas, Wesche, Christine, et al. Automated iceberg tracking with a machine learning approach applied to SAR imagery: A Weddell Sea case study. 2021, 172:189-206.
[2] Zheng, Minwei, Li, Xiaoming, Ren, Yongzheng. Research on automatic detection method of polar sea ice by Gaofen-3 spaceborne synthetic aperture radar [J]. Acta Oceanologica Sinica, 2018, 40(09):113-124.
[3] Mahdianpari, M.; Mohammadimanesh, F.; McNairn, H.; Davidson, A.; Rezaee, M.; Salehi, B.; Homayouni, S. Mid-season Crop Classification Using Dual-, Compact-, and Full-Polarization in Preparation for the Radarsat Constellation Mission (RCM). Remote Sens. 2019, 11, 1582. https://doi.org/10.3390/rs11131582
[4] J. -. Souyris, P. Imbo, R. Fjortoft, Sandra Mingot and Jong-Sen Lee, "Compact polarimetry based on symmetry properties of geophysical media: the /spl pi//4 mode," in IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 3, pp. 634-646, March 2005, doi: 10.1109/TGRS.2004.842486.
[5] Raney, R. K. Hybrid-polarity SAR architecture. IEEE Transactions on Geoscience and Remote Sensing, 2007, 45(11): 3397-3404.
[6] Li, Haiyan & Perrie, William & Wu, Jin. (2019). Retrieval of Oil–Water Mixture Ratio at Ocean Surface Using Compact Polarimetry Synthetic Aperture Radar. Remote Sensing. 11. 816. 10.3390/rs11070816.
Multi-year ice (MYI) surface topography evolution and melt season processes are interrelated and sensitive to changing Arctic climate but not well studied. The Nansen Sound ice plug is a semi-permanent landfast sea ice feature in the Canadian Arctic Archipelago (CAA). The structure formed in winter 2016/17 and remained in place until 2019. The immobile ice enabled studying the evolution of radar backscatter and surface roughness for two consecutive melt and freeze seasons that are difficult to observe on mobile ice. The roughness of sea ice is a key factor for modeling and measuring the interactions between ice, atmosphere, and ocean. This study evaluates the ice plug's dual polarization Sentinel-1 SAR backscatter in spring 2017, 2018, and 2019. We interpreted these data with airborne surveys of the 3D surface topography from airborne laser scanning (ALS) and with ice thickness profiles from electromagnetic (EM) induction sounding in spring of the same years.
We used Sentinel-1 EW SAR scenes coincident with the EM ice thickness data to evaluate changes in springtime mean backscatter and linearized incidence angle dependence over two years. Additionally, we identified the melt pond distribution at the times of apparent maximum melt extent in Sentinel-2 true color images.
The EM survey results show that those parts of the ice plug that formed in-situ in winter 2016/17 had a mean thickness of 1.9 m in spring 2017, which is typical for landfast first-year ice in the CAA. The root-mean-square (rms) roughness within the 50-m-footprint of the EM measurements was 7 cm in 2017. The spatial distribution of melt ponds in summer 2017 was comparable to that in 2018 and variable over the ice plug. The resulting non-uniform summer ablation is expected to increase surface roughness from spring to autumn and over consecutive melt seasons. Surprisingly, the rms roughness was unaltered, which we attribute to the snow cover in spring which masks roughness features created by drained melt ponds in the previous summer. The mean ice thickness increased to 2.4 m in 2019. HH-polarized backscatter of the in-situ ice plug increased from -21 dB in spring 2017 to -14 dB in spring 2018, and to -13 dB in spring 2019. The incidence angle dependence decreased over the first melt season from -0.26 dB/1° to -0.15 dB/1° and remained constant thereafter. HV backscatter increased from -29 dB in 2017 to -25 dB in 2018, and -23 dB in 2019. The reported values are largely comparable to those found in other studies for MYI.
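The linearized incidence-angle dependence quoted above (in dB per degree) can be obtained and applied as in the sketch below; the reference angle of 30° and the synthetic values are assumptions for illustration only.

```python
# Sketch of a linearized incidence-angle dependence: fit a slope (dB/deg) to
# backscatter vs. incidence angle and normalize all measurements to a common
# reference angle. Reference angle and example values are assumptions.
import numpy as np

def fit_incidence_dependence(sigma0_db, incidence_deg):
    """Return (slope dB/deg, intercept dB) of a least-squares linear fit."""
    slope, intercept = np.polyfit(incidence_deg, sigma0_db, 1)
    return slope, intercept

def normalize_to_reference(sigma0_db, incidence_deg, slope, ref_deg=30.0):
    """Project backscatter to a reference incidence angle using the fitted slope."""
    return sigma0_db - slope * (incidence_deg - ref_deg)

inc = np.random.uniform(20.0, 45.0, 500)
sig = -20.0 - 0.2 * (inc - 30.0) + np.random.randn(500)   # synthetic example
slope, _ = fit_incidence_dependence(sig, inc)
sig_ref = normalize_to_reference(sig, inc, slope)
```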
Approaches to predict summer melt pond fraction and ice thickness from winter backscatter have recently received increased attention. Drift ice originating from the Arctic Ocean was incorporated into the ice plug along the eastern shoreline of Nansen Sound upon freeze-up in 2016 and was surveyed in 2018, when it had a mean thickness of 3.7 m. This ice type had a rougher surface, with an rms of 10 cm, and the melt pond fraction was significantly lower. However, mean backscatter, backscatter variance and incidence angle dependence were similar to those of the younger and smoother ice formed in situ. With this example we show that the age, ice type and surface roughness of MYI need to be considered together for accurate predictions of MYI melt pond fraction and thickness from SAR images.
While there has been a substantial decline in Arctic sea-ice thickness, there are insufficient observations to ascertain a long-term trend in Antarctic sea-ice thickness. To address this, we explore changes in sea-ice deformation as an indicator of changes in the Antarctic ice-thickness distribution. Using our Image Processing and Analyses System we calculate sea-ice displacement from SAR data and derive the corresponding ice motion. In a previous study, focused on a narrow region off East Antarctica, this system provided a multi-year time series of SAR-derived ice motion, which, despite interannual variability, indicates a speed-up of sea ice within the westward coastal current. However, SAR coverage off East Antarctica has historically been poor. Hence the system has been used to trial imagery from other sensors to derive ice motion, including Landsat and MODIS imagery and passive-microwave data. Specifications of the derived ice-motion products depend on the characteristics of the native data, resulting in a range of spatial and temporal resolutions, repeat frequencies and accuracies. Consequently, the products derived from the different sensors need to be unified into a coherent dataset with respect to their native length scales and sampling rates. This investigation also provides crucial information for parameterisation of the ice rheology in numerical models, which relates the various ice stresses to components of sea-ice deformation. Arctic sea-ice motion and deformation have previously been shown to be scale dependent, both basin wide and at regional scale. Here we investigate the kinematic characteristics of Antarctic sea ice at local and regional scale, with a view to linking the magnitude and frequency of ice-thickness redistribution events to the scale-dependent cohesion within the pack. This is also expected to provide information on the underlying processes driving sea-ice deformation, such as the passage of synoptic-scale systems or the transfer of wave energy from the Marginal Ice Zone into the pack-ice zone.
Sea ice classification information contributes to safe navigation and route optimization and reduces the risk of navigating in ice-covered waters. For this classification, satellite systems are widely used today and have a firm place especially in the maritime domain. Synthetic Aperture Radar (SAR) sensors are the most used image source for maritime situational awareness in general and sea ice classification in particular. Although possible cloud cover and low sun elevation at northern latitudes seasonally limit the use of optical data, the variety of available spectral channels also provides advantages for information extraction compared to radar remote sensing. In the context of sea ice classification, optical sensors provide valuable information based on the reflectance and emission properties of the materials, such as the spectral albedo, which is very sensitive to sea ice thickness, whose structure varies greatly. Therefore, optical systems can make a valuable contribution as cooperative sensors. This study relies on Landsat-8 as a way to provide additional information for the classification of sea ice, in order to shorten the time needed to provide updated ice information.
Traditional methods of ice detection using optical satellite imagery, such as the normalized difference snow index, combine different regions of the electromagnetic spectrum. This methodology can be significantly enhanced with deep neural networks (DNN). Recent advances in DNNs have shown impressive results in different fields of computer vision, including image segmentation. This study describes our DNN-based algorithm for ice classification as well as the training dataset created to train the network.
The quality of the training data is of the highest importance for the quality of the classification result obtained. Nowadays, sea ice information is provided through various portals to support navigation in ice-infested waters. One of these service providers is the German Ice Service, a department of the Bundesamt für Seeschifffahrt und Hydrographie (BSH) in Rostock, Germany. Data from various satellites and coastal ice observers are used to create these hand-crafted maps. Each ice chart provided is composed of two types of classes: total ice concentration and stage of development.
We therefore decided to use the BSH ice charts to annotate ice locations on Landsat images and create the training data for a pixel-based sea ice classification. We focused our study on total ice concentration, which includes several classes from 1/10 to 10/10, plus Open Water. The input variables of the DNN are the top-of-atmosphere values from a subset of 5 Landsat spectral bands: coastal aerosol (1), blue (2), green (3), red (4), and near-infrared (5), and the output corresponds to 4 classes of ice concentration plus open water.
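The sketch below only illustrates this input/output structure (5 TOA bands in, 4 ice-concentration classes plus open water out) with a toy pixel-wise network; layer sizes, training details and the framework choice are assumptions, and the operational model is a deeper segmentation network rather than this simple multilayer perceptron.

```python
# Hedged sketch of a pixel-wise classifier with the input/output structure
# described above. The layer sizes are illustrative; the operational model is
# a deeper segmentation DNN, not this toy multilayer perceptron.
import torch
import torch.nn as nn

N_BANDS = 5      # coastal aerosol, blue, green, red, near-infrared
N_CLASSES = 5    # 4 ice-concentration classes + open water

model = nn.Sequential(
    nn.Linear(N_BANDS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_CLASSES),
)

# One training step on a batch of pixels labeled from the BSH ice charts (placeholders)
x = torch.rand(256, N_BANDS)                 # TOA reflectances
y = torch.randint(0, N_CLASSES, (256,))      # chart-derived class labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
```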
Due to the large amount of imagery that met the training data criteria, we decided to store these satellite products in a data cube (Open Data Cube, ODC). In the context of modelling, the use of a data cube has several advantages. For example, data can be retrieved by various parameters such as date or spatial region, which saves time by eliminating the need to manually select and search for images. In addition, the data cube also allows for initial analysis to determine which days and regions are appropriate for training. Thanks to this technology, it was possible to identify more quickly the days with low cloud cover and sufficient ice cover, and to retrieve the data from several Landsat images for training the DNN.
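A query against an Open Data Cube index might look like the example below; the product name, band names and extents are hypothetical and depend on how the Landsat collection was indexed into the cube.

```python
# Hedged example of retrieving training scenes from an Open Data Cube index.
# Product name, band names and extents are hypothetical placeholders.
import datacube

dc = datacube.Datacube(app="sea-ice-training-data")
ds = dc.load(
    product="landsat8_l1_toa",                # hypothetical product name
    x=(10.0, 14.0), y=(54.0, 56.0),           # lon/lat extent of a Baltic test region
    time=("2021-02-01", "2021-03-15"),
    measurements=["coastal_aerosol", "blue", "green", "red", "nir"],
)
# ds is an xarray.Dataset; cloud/ice screening and pairing with BSH chart labels follow.
```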
The obtained results show significant improvements, while maintaining the original classes introduced by the BSH. A first evaluation of the accuracy, comparing the classification results against validation data that were not part of the training data set, showed very promising results. Based on the native sensor resolution of 30 meters, the derived ice charts have high accuracy as well as very good spatial resolution. The chosen approach can be considered sensor-independent and can therefore be applied to other satellite data, such as Sentinel-2. It is planned that in the future the required ice classification for the requested areas can be derived automatically from Landsat-8 satellite imagery, which is provided to the ice service in near real time directly after reception at the DLR ground station Neustrelitz.
Melt ponds are one of the most important and variable components of Arctic climate change. Their occurrence during the melting season plays a key role in surface albedo and the redistribution of solar energy due to the positive ice-albedo feedback. However, despite their impact on the Arctic energy budget, melt ponds are poorly represented in sea-ice models and not explicitly represented in climate models, and they are pointed to as a source of uncertainty in sea ice prediction and as responsible for the underestimation of the observed decline in sea ice extent in climate projections.
The present research shows the first approaches and results aimed at an improved understanding and prediction of melt ponds over the Arctic, with enhanced accuracy and spatial and temporal resolution, using artificial intelligence.
To this end, a multi-sensor approach is taken, with particular focus on radar and synthetic aperture radar (SAR), as these are weather and illumination independent. For the data fusion, and also to benefit from the data heterogeneity and the non-linearity of the variables, AI-based algorithms are used to exploit and retrieve information from big data, as well as to harness patterns that could not be identified by human intervention alone.
For many remote sensing projects it is necessary to accurately identify snow and ice covered surfaces and, in particular, separate these surfaces from cloud contaminated regions within a satellite image. However, the accurate identification of cloud and snow/ice within images produced by passive satellite sensors is challenging as both often appear bright at visible wavelengths while having a low thermal contrast in the thermal spectral channels. This can lead to misclassification and errors in downstream retrievals, products and imagery. Typically, clouds are detected and excluded from satellite imagery by using approaches that compare reflectance values at visible wavelengths and brightness temperatures in the infrared spectrum to a series of predefined thresholds that – in general – are suited to tropical or subtropical regions. In the polar regions, however, these thresholds are unsuitable, frequently leading to cloud being detected as ice and vice-versa.
Here, I describe a new technique that makes use of the dual-view capability of the SLSTR instrument aboard the Sentinel-3 satellites. SLSTR records images in the nadir (vertical) view common to most satellite sensors, but also records a separate set of images in an oblique view, angled 55 degrees from vertical. This second view allows exploitation of the parallax effect to detect clouds, even against cold and bright snow and ice surfaces: clouds appear to move between the nadir and oblique views due to their altitude, with high-altitude clouds shifting more than low-altitude clouds. I present a machine-learning approach that uses visible and infrared measurements from both SLSTR views to identify snow and ice, cloud and clear-ocean pixels within the instrument field of view. I compare the results of this new algorithm to existing, nadir-only, techniques and to measurements from the active sensors aboard the ICESat-2 mission. Finally, I highlight how such methods could also be applied to upcoming operational meteorology missions such as EUMETSAT's MetOp-SG to enable more frequent and detailed observations of the polar regions.
Many Earth observation (EO) satellite data users lack the expertise, infrastructure, and internet bandwidth to efficiently and effectively access, pre-process, and utilize the growing volume of space-based data for local, regional, and national decision-making regarding aquatic ecosystems and associated resources. Furthermore, even sophisticated users of EO data still invest a large proportion of their time and effort into data preparation. This is a major barrier to full and successful utilization of space-based data and threatens the success of major global and regional initiatives supported by the Committee on Earth Observation Satellites (CEOS) and GEO-AquaWatch. As data volumes grow, and new applications would be possible, this barrier is becoming more significant for all users.
It was recognised by GEO-AquaWatch and CEOS Land Surface Imaging Virtual Constellation (LSI-VC) that the capacity of EO for inland, near-coastal, and coral reef waters has matured sufficiently to warrant the development of the first CEOS Analysis Ready Data for Land Aquatic Reflectance (CARD4L-AR) Product Family Specification (PFS) document. This became a joint activity over 2020-2021 and has resulted in the first CEOS endorsed CARD4L-AR PFS for non-oceanic aquatic ecosystems.
Several data providers are already demonstrating and implementing ARD from the same EO data sources (e.g., Landsat 8 and Sentinel-2), applying differing ARD generation procedures. This may lead to situations where ARD providers across the world deliver different water reflectances, which will eventually lead to (potentially significantly) different water quality concentrations from the same EO image of the same lake. Such differences may be due to variations in the ARD algorithms applied. By comparing ARD approaches, water reflectances and derived water products over the same location with the same core EO data, the reasons behind differences in ARD, and their impact on products, can be better understood. That knowledge is essential for the next step, which is to compare algorithms for translating ARD to water quality variables.
CARD4L are satellite data that have been processed to a minimum set of requirements and are organized into a form that allows immediate analysis (with a minimum of additional user effort) and interoperability, both through time and with other datasets. See https://ceos.org/ard/ for further details. GEO-AquaWatch and CEOS joined forces to develop the CARD4L-AR PFS for data collected by multispectral and hyperspectral sensors operating at the visible (VIS), near-infrared (NIR) and short-wave infrared (SWIR) wavelengths over coral reefs, near-coastal, and inland water bodies.
Using the CARD4L Surface Reflectance Product Family Specification (PFS) as a baseline, GEO-AquaWatch (https://www.geoaquawatch.org/), Water-ForCE (https://waterforce.eu) and other experts met virtually to review/modify and/or generate new Threshold and Target PFS requirements. About 80% of the AR PFS requirements corresponded with the already available land focused Surface Reflectance PFS requirements. However, the remaining 20% that are different are critical for successful EO of these non-oceanic aquatic targets.
Note that the definition of CARD is not exclusive or prescriptive. It reflects the attributes of fundamental measurement products for most global remote sensing users with land or aquatic imaging applications, and it represents the minimum level required to support easy time series analysis and data interoperability. There are four categories of metadata with specific 'Threshold' and 'Target' requirements:
1. General Metadata (17 requirements)
2. Per-Pixel Metadata (20 requirements)
3. Radiometric and Atmospheric Corrections (14 requirements)
4. Geometric Corrections (1 requirement)
Of these four categories of metadata the Per-Pixel Metadata and the Radiometric and Atmospheric Corrections categories required the most adaptation to be suitable for the aquatic community. Across the four categories there are a total of 52 requirements.
For the Per-Pixel Metadata one requirement was modified: the addition of Lake and River Ice Masks to Sea Ice Mask, while new Per-Pixel Metadata requirements were provided for the following:
• Adjacency Effects
• Altitude of the Water Body above Sea Level
• Bidirectional Reflectance Distribution Function
• Deep/Shallow Water
• Optically Deep or Optically Shallow Water Assessment
• Floating Vegetation/Surface Scum Mask
• Sky Glint
• Sun Glint
• Turbid Water Flag
• Whitecap/Foam Mask
For the Radiometric and Atmospheric Corrections one requirement was modified (the Atmospheric Reflectance Correction) and nine new requirements were identified:
• Adjacency Effects Correction including a sometimes-occurring Surface Reflected Vegetation Spectral Correction
• Bidirectional Reflectance Distribution Function Correction
• Floating Vegetation/Surface Scum Correction
• Other Trace Gaseous Absorption Corrections
• Sky Glint Correction
• Sun Glint Correction
• Turbid Water Correction
• Whitecap/Foam Correction
Further information is available on the GEO-AquaWatch and CEOS CARD4L websites.
This CARD4L-AR PFS is an excellent example of GEO-AquaWatch and CEOS collaborating to mature the area of inland, near-coastal, and coral reef water EO. More requirements needed refining or defining than were initially expected. The AR PFS will be made available for data providers and community feedback from the CARD4L website soon.
The CEOS LSI-VC and the Working Group on Calibration & Validation (WGCV) have defined a formal process for how the PFS assessments are conducted, which includes a self-assessment by the data provider followed by a peer review by the WGCV (Product Assessment Process). These processes assess how well the data provider’s product complies with each of the PFS requirements, including a justification of ‘how’.
The water quality products that can be derived from this CARD4L-AR (at surface reflectance or upwelling radiance) through the application of retrieval algorithms (such as concentrations of chlorophyll, cyanobacteria, suspended matter, levels of coloured dissolved organic matter, and values of Secchi Disk transparency, vertical attenuation of light, and turbidity) are “Interpretation Ready Data” (IRD). A possible next step could be interpreted data, such as assessment of eutrophication or smothering of seagrass etc.
As this is the first version of the CARD4L-AR it is recommended that the following public providers of ARD data for aquatic ecosystems assess how their products fit into this CARD4L-AR PFS document and how this could affect their products: e.g. USGS Aquatic Reflectance, ESA Lakes CCI (Lake Water-Leaving Reflectance), Copernicus Global Land Service (Lake Water Quality), Copernicus Marine Environment Monitoring Service (High-Resolution Ocean Colour Remote Sensing Reflectances), EUMETSAT, etc. As this CARD4L-AR PFS is the initial version, there is scope each year to engage with CEOS and GEO-AquaWatch to enhance, update and improve this PFS. Providing CARD4L-conformant data products also enables more downstream value adders, i.e. those who are not experts in EO data products, to develop and offer novel value-added products on the market.
Keywords: Inland and coastal water quality, optically shallow benthic mapping, Interoperability, comparison, validation, time series, end user trust, ARD, IRD.
Advances in new and conventional spaceborne sensing technologies have resulted in a proliferation of satellite sensor data, providing unique opportunities for generating decision-making insights with unprecedented spatial detail and frequency. However, sensor interoperability issues, cross-calibration challenges, and atmospheric contamination can stand in the way of realizing the full potential of these rich multi-source datasets. Planet Fusion (PF) represents a novel data fusion framework for aggregating observations from different imaging platforms. PF adopts an implementation of the CubeSat-Enabled Spatio-Temporal Enhancement Method (CESTEM) to leverage rigorously calibrated, publicly accessible multispectral satellites (i.e., Sentinel-2, Landsat 8/9, MODIS, VIIRS) in concert with the higher spatial and temporal resolution data provided by the 180+ CubeSats of the PlanetScope constellations. CESTEM is used to rigorously harmonize multi-sensor spectral data into a consistent radiometric surface reflectance standard for full fleet interoperability. The Framework for Operational Radiometric Correction for Environmental Monitoring (FORCE) generates a harmonized Sentinel-2 and Landsat 8 BRDF-adjusted Surface Reflectance (SR) product that is used as the cross-calibration reference target during the CESTEM-based radiometric harmonization step. Planet Fusion processing also includes advanced temporally driven functionality related to geometric harmonization, cloud and shadow masking, and gap-filling in order to deliver a next-generation analysis ready SR product (daily, 3 m) that is suitable for analytic and data science purposes and particularly beneficial for inter-day change detection, time-series analysis, phenological monitoring, disturbance monitoring, physically-based model retrieval, machine learning, and other applications relying on radiometrically and geometrically accurate and temporally consistent information. This presentation will provide a status update on Planet Fusion including 1) showcases of multi-year (2018 - 2022) SR time-series over a variety of cover types and cloud environments, 2) evaluations of interoperability with Sentinel-2 and Landsat data calculated using different atmospheric correction algorithms, and 3) validation against ground-based SR observations.
Disasters of many, often related, types are increasing in frequency and severity, closely tied to the effects of climate change. Their potential impacts on populations and infrastructure need to be anticipated ever earlier to support efficient and timely decisions by the responders. Moreover, the decisions need to be tailored to the local resources to avoid reaching a crisis situation where the needs exceed the resources.
Data sources, sensor types and data volumes are increasing, and the new cloud technologies enable rapid processing, provided that the disaster preparation workflows are modernized and regularly updated. There are currently several critical interoperability and technical challenges in getting the vast wealth of EO and space-based data into the form, location, and level of accessibility needed to benefit populations and practitioners on the ground where these disasters occur.
Many of these challenges have been successfully addressed by the Disaster Pilot 2021, relying on a high-level reusable architecture based on the concepts of Analysis Ready Data (ARD) to Decision Ready Information (DRI) and of bringing Applications to the Data. The work has revealed that these concepts are worth developing and standardizing further to accelerate the integration of solutions for specific use cases, facilitating the search for ARD or DRI data, services or algorithms, and making them interoperable across different sensors so that they can be reused over different geographical areas.
Well-defined, interoperable ARD products, built out of the huge volumes of data being generated continually by multiple satellite missions, could be key to rapidly building agile downstream DRI supporting real-time needs and decisions.
It is important for us to engage with the critical stakeholders in the EO data exploitation ecosystem (EO data providers, EO processing experts, cloud capacity experts, scientists from EO and all other connected domains, as well as responders) to raise awareness of the current results and to take ARD and DRI standardization further in preparation for the next OGC pilots.
Sentinel-1 is currently the only system to provide SAR images regularly over all land areas of the planet. Access to these time series of images opens up an extraordinary range of applications. In order to meet the needs of a large number of users, we have developed S1Tiling, an automatic processing chain to generate "Analysis Ready" time series for a very large number of applications.
This open-source project for building Sentinel-1 "Analysis Ready" time series is now available for the EO community on a dedicated GitLab repository: https://gitlab.orfeo-toolbox.org/s1-tiling/s1tiling . This software generates stacks of calibrated and orthorectified images over tiles. The tile reference on which the "Analysis Ready" time series are built is the MGRS grid used by Sentinel-2 products. Mixing Sentinel-1 and Sentinel-2 time series is thus greatly simplified.
The development of S1Tiling has been driven by the following constraints:
- multipurpose: software should be adaptable to perform processing tasks according to user requirements
- multiplatform: software should run on a wide range of computing platforms, from laptops to high-performance and cloud computing
- scalable: software should process large amounts of data as quickly as possible by using data streaming, multithreading and parallel computing
- easy-to-use: software should be easy to install and to configure in order to define the processing to perform
- easy integration in your projects: the project should be fully documented for both users and developers
Considering these requirements, the development of the project was initiated in 2018, with a major release in 2021. Now, S1Tiling is a mature project, integrated in many operational and scientific projects using Sentinel-1 data. As the software is freely available under the Apache 2 license, it can be used by any research or operational project requiring Sentinel-1 Analysis Ready Data.
S1Tiling builds ARD time series by automatically executing the following tasks, implemented in parallel processing pipelines:
- downloading GRD images from data providers
- checking product integrity
- calibrating data (to gamma0, sigma0 or beta0) and removing thermal noise
- applying radiometric terrain correction
- removing border noise
- orthorectifying onto MGRS tiles
- concatenating images
- applying speckle filtering (spatial or multi-channel algorithms)
As input, the user specifies the time period, the list of MGRS tiles to process and the parameters of the processing to apply. After the run, the Sentinel-1 "Analysis Ready" time series are generated as a set of GeoTIFF files for each tile.
The software is based on state-of-the art technologies (Python, Dask, OrfeoToolBox, Docker), allowing both high performance, flexibility and ease-of-use.
S1Tiling is currently used for many applications, such as deforestation detection in the Amazon, monitoring of rice crops in Southeast Asia, monitoring of water stocks in India or mapping of agricultural areas in Europe. In addition, the software is accessible as an on-demand processing service on the French PEPS collaborative ground segment, in order to make it easier for users to apply.
The proposed presentation will cover the functionalities and performance of S1Tiling for building “Analysis Ready” time series, the current status of the development, the roadmap for future developments and some applications where S1Tiling has been used.
Today, several SAR missions provide valuable and extensive information for monitoring the Earth’s surface, assessing its current state and improving the understanding of ongoing changes. In the last decades the capabilities for data reception and storage have been continuously extended to allow routine operations on the growing amount of SAR data. In addition, significant effort has been put into the development of processing techniques to extract the full range of information inherent in SAR data.
However, the increase in data volume and processing complexity requires appropriate resources for storage and processing to convert the SAR image content into geoinformation. Therefore, new strategies are needed for data management and provision. The Committee on Earth Observation Satellites (CEOS) has set up the CEOS Analysis Ready Data for Land initiative (CARD4L), which aims at the standardization of such products. One example is the radiometrically calibrated and orthorectified Normalized Radar Backscatter (NRB) Analysis Ready Data (ARD) product for single-polarized SAR data. In contrast, ARD for polarimetric SAR data is not as straightforward, as the information is contained in the complex scattering matrix and each application usually applies specific filtering and decomposition methods for its data preparation. For the time being, the processing still starts from the basic Single Look Complex products. CEOS proposes several polarimetric ARDs that ease the utilization; however, these products are not really ready for use as they still require further pre-processing.
Here, we propose a polarimetric ARD product based on a Kennaugh element framework [1,2] that doesn’t require any further pre-processing. The idea is to produce ready-to-use layers for image analysis, e.g. deep learning applications. The product avoids complex data formats to ease user handling; radiometric calibration and orthorectification are applied, as well as a normalization of the data in order to reduce their storage demand. A big advantage of the Kennaugh elements is their applicability to all polarimetric combinations (dual, quad, and even compact pol). Finally, further polarimetric decompositions (e.g. Freeman-Durden, Yamaguchi) remain possible.
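For illustration only, a minimal Python sketch of intensity-based Kennaugh-type elements for a dual-pol (co-/cross-polarised) SLC pair follows; the element numbering, scaling and normalisation used here are simplifying assumptions and should be checked against the definitions in [1], and the sketch omits the multi-looking, calibration and orthorectification steps mentioned above.

    import numpy as np

    # Co-registered complex SLC channels, e.g. VV and VH (hypothetical input files).
    s_co = np.load("slc_vv.npy")
    s_cross = np.load("slc_vh.npy")

    # Intensity-based Kennaugh-type elements for the dual-pol case
    # (indexing and normalisation are illustrative assumptions; see [1]).
    k_sum = np.abs(s_co) ** 2 + np.abs(s_cross) ** 2    # total intensity
    k_diff = np.abs(s_co) ** 2 - np.abs(s_cross) ** 2   # co/cross intensity difference
    k_re = 2.0 * np.real(s_co * np.conj(s_cross))       # real part of the cross correlation
    k_im = 2.0 * np.imag(s_co * np.conj(s_cross))       # imaginary part of the cross correlation

    # Normalising the polarimetric elements by the total intensity keeps the layers
    # bounded, which supports the storage reduction mentioned in the abstract.
    eps = 1e-10
    layers = np.stack([k_sum,
                       k_diff / (k_sum + eps),
                       k_re / (k_sum + eps),
                       k_im / (k_sum + eps)])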
The technique for generating the proposed polarimetric ARD is already implemented in DLR’s Multi-SAR System and was applied in several projects and international collaborations (e.g. [3,4]). It supports TerraSAR-X, PAZ, RADARSAT-1/2, ALOS-PALSAR, Sentinel-1, ERS-1/2, ENVISAT-ASAR and newly the RADARSAT Constellation Mission (RCM).
The proposed presentation shall introduce the Kennaugh framework, the ARD product layout and highlight the applicability with examples.
[1] Schmitt, A.; Wendleder, A.; Hinz, S. The Kennaugh element framework for multi-scale, multi-polarized, multi-temporal and multi-frequency SAR image preparation. ISPRS J. Photogramm. Remote Sens. 2015, 102, 122–139. https://doi.org/10.1016/j.isprsjprs.2015.01.007
[2] Schmitt, A.; Wendleder, A.; Kleynmans, R.; Hell, M.; Roth, A.; Hinz, S. Multi-Source and Multi-Temporal Image Fusion on Hypercomplex Bases. Remote Sens. 2020, 12, 6, 943. https://doi.org/10.3390/rs12060943
[3] Klingebiel, C.; Schmitt, A.; Wendleder, A. & Moser, L. Lac Bam imaged by TerraSAR-X - Classification and Visualisation of Seasonal and Annual Changes. In: Proceedings of the European SAR Conference 2021, 29 March - 01 April 2021, Leipzig, Germany, 2021, pp. 537-542.
[4] A’Campo, W.; Bartsch, A.; Roth, A.; Wendleder, A.; Martin, V.S.; Durstewitz, L.; Lodi, R.; Wagner, J.; Hugelius, G. Arctic Tundra Land Cover Classification on the Beaufort Coast Using the Kennaugh Element Framework on Dual-Polarimetric TerraSAR-X Imagery. Remote Sens. 2021, 13, 4780. https://doi.org/10.3390/rs13234780
If we ask ourselves “what would be ideal for backscatter time-series analysis?”, a common answer might be: “a data cube covering my area of interest, allowing for trend-analysis over time”.
Time-series analysis of backscatter and change detection approaches were long limited by the track (relative orbit) and swath-width of the sensor: no automated standard existed for merging data acquired from multiple tracks. To our knowledge, no wide-area backscatter data cube product was available for large areas of interest. Sparse data coverage from early sensors, combined with a lack of methodology for automatically building a backscatter data cube, limited the construction of data cubes.
Although the need for terrain correction of SAR imagery has been clear since the early days, efforts were first concentrated on geometric terrain correction (GTC), i.e. placing the backscatter values in their appropriate map grid location. However, the radiometry of the calibrated backscatter values was until recently typically either left unchanged from the L1 product, using the original ellipsoid model tied to the slant range (e.g. SLC) or ground range (e.g. GRD) product, or adjusted based on a so-called “local incidence angle” modulation.
When generating a normalised radar cross section product (NRCS), the sigma0 backscatter convention (whereby a standard area on the ground is used as the reference area) was long dominant. The alternative backscatter convention gamma0 (where the standard area is expressed in the plane perpendicular to the local look direction) more directly reflects the area on the ground seen by the radar, and has gained more acceptance in recent years.
The paper [1] in 2011 demonstrated that the so-called “local incidence angle” conceptually fails to adequately model the SAR imaging process: indeed, it often introduces additional noise into local backscatter estimates. Use of the terrain-flattened gamma nought radiometrically terrain corrected (“RTC”) standard was shown to provide more robust backscatter estimates.
The Sentinel-1 constellation has heralded a new era of open data combined with global coverage that enables the construction of wide-area backscatter data cubes. New methodologies developed within the last 10 years enable (a) a consistent radiometric terrain correction, and (b) automated compositing of wide-area backscatter maps [2].
We summarise with a broad evolution of backscatter estimation:
• Sigma0 with ellipsoid-based radiometry
• Sigma0 with “local incidence angle” modulation
• Terrain-flattened Gamma0 with integration of area of DEM facets under Gamma0 convention
• Local Resolution Weighting for seamless wide-area backscatter composites (sketched below)
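A minimal sketch of the local-resolution-weighting idea referenced in the last item, assuming per-track terrain-flattened gamma nought images and matching per-pixel quality layers (e.g. local illuminated area) already resampled to a common map grid; the simple weighted average below is an illustration of the principle rather than the implementation described in [2].

    import numpy as np

    def lrw_composite(gamma0_tracks, weights):
        """Weighted composite of per-track gamma0 images (lists of 2-D arrays, NaN = no coverage)."""
        g = np.stack(gamma0_tracks)              # (n_tracks, rows, cols)
        w = np.stack(weights).astype(float)
        w[np.isnan(g)] = 0.0                     # ignore tracks with no valid acquisition here
        g = np.nan_to_num(g)
        w_sum = w.sum(axis=0)
        return (w * g).sum(axis=0) / np.where(w_sum > 0, w_sum, np.nan)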
We demonstrate applications of these wide-area backscatter composite products, with examples from forestry, tracking distinctive polarisation signatures from broadleaf vs. coniferous stands, and rapid detection of storm disturbance.
We also demonstrate applications from wet-snow mapping. In the VH-polarisation Sentinel-1 RGB composite image covering the Alps, backscatter is relatively low when wet snow is present. The highest peaks are only wet in summer (low blue, high red and green: yellow). Other colours in the mountains indicate the season when wet-snow dominated the local elevation. The RGB composite illustrates how arbitrary time-slices can be compared nearly seamlessly with each other over an extensive region, including multiple changes between tracks (relative orbits). Using our approach first described in a paper [2] currently in “early access”, auxiliary products accompanying each composite provide additional quality layers to help users gauge the local relative “trustworthiness” of backscatter estimates affected by (a) steep or moderate terrain, and (b) differing acquisition geometries and relative orbits.
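A minimal sketch of how three such composite time slices can be combined into an RGB image for inspection; the file names and dB stretch limits are illustrative assumptions.

    import numpy as np

    # Wide-area VH gamma0 composites (linear power) for the three date ranges in the caption
    # (file names are hypothetical placeholders).
    slices = [np.load(f) for f in ("composite_jan.npy", "composite_apr.npy", "composite_jun.npy")]

    def to_byte(gamma0, vmin_db=-25.0, vmax_db=0.0):
        """Convert linear gamma0 to dB and stretch to 0-255 for display (limits are illustrative)."""
        db = 10.0 * np.log10(np.maximum(gamma0, 1e-6))
        scaled = (np.clip(db, vmin_db, vmax_db) - vmin_db) / (vmax_db - vmin_db)
        return (scaled * 255).astype(np.uint8)

    rgb = np.dstack([to_byte(s) for s in slices])   # R = winter, G = spring, B = early summer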
We conclude by demonstrating how data from multiple satellite constellations can be integrated together to generate backscatter composites with a temporal coherency that is impossible to achieve using data from each constellation in isolation.
[1] Small, D. (2011). Flattening Gamma: Radiometric Terrain Correction for SAR Imagery. IEEE Transactions on Geoscience and Remote Sensing, 49(8), 3081–3093. doi: 10.1109/TGRS.2011.2120616
[2] Small, D., Rohner, C., Miranda, N., Rüetschi, M., & Schaepman, M. E. Wide-Area Analysis-Ready Radar Backscatter Composites. IEEE Transactions on Geoscience and Remote Sensing, 14p. doi: 10.1109/TGRS.2021.3055562
Alps backscatter composite: R=2021.01.01-12, G=2021.04.25-05.06, B=2021.06.24-07.05
Contains modified Copernicus Sentinel data (2021)
We have developed and make available a set of Sentinel-1 processing code and algorithms for the routine generation of InSAR time series in analysis-ready data (ARD) form. Here ARD refers to products in common map coordinates rather than the radar range-Doppler reference, with all phase compensation terms from variable viewing geometry and topography applied. Since the data are in ARD formats, end users require little to no radar processing expertise to be able to productively and easily incorporate InSAR analysis into their applications and research.
Our code package at present uses Sentinel-1 TOPS data to provide worldwide coverage and extensive temporal availability. Adapting it for other sensors requires the addition of SAR focusing code capable of supporting those sensors. Since Sentinel-1 provides such a comprehensive set of data at no cost to users, we have limited our implementation to that source for now.
Our approach is as follows. We use a backprojection algorithm to focus TOPS mode wide swath acquisitions directly to locations specified by a digital elevation model. The code defaults to using the Copernicus DEM for registering all products, but another DEM may be provided as needed. The single look complex images are automatically compensated for all propagation phases, including elevation, so that they align exactly and interferograms can be formed by simple cross multiplication. After phase unwrapping using the Snaphu algorithm, we select hundreds to thousands of reference pixels per scene to minimize atmospheric influence on the final time series, and we further reduce the tropospheric phases by regression against elevation. The final step is applying the SBAS algorithm to generate the time series.
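A minimal sketch of the interferogram-formation and elevation-regression steps described above, assuming two already aligned and phase-compensated single look complex images and a DEM on the same grid; the file names are assumptions, and the phase unwrapping performed by Snaphu is only indicated by a placeholder.

    import numpy as np

    # Backprojected, phase-compensated SLCs on the DEM grid (complex arrays), plus elevations.
    slc_ref = np.load("slc_date1.npy")
    slc_sec = np.load("slc_date2.npy")
    dem = np.load("copernicus_dem.npy")

    # Because the SLCs are already aligned and compensated, the interferogram is a
    # simple per-pixel cross multiplication.
    ifg = slc_ref * np.conj(slc_sec)

    # Placeholder: in the real chain the wrapped phase is unwrapped with Snaphu.
    unw = np.angle(ifg)

    # Reduce tropospheric signal by regressing phase against elevation and removing the trend.
    valid = np.isfinite(unw) & np.isfinite(dem)
    slope, intercept = np.polyfit(dem[valid], unw[valid], 1)
    unw_corrected = unw - (slope * dem + intercept)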
We find the package easy to use and reasonably fast compared to other approaches. Much of the efficiency is due to the use of the backprojection algorithm rather than range-Doppler, so that it is not necessary to oversample the azimuth phase history, with its consequent precision alignment needs. If systems contain GPU elements, these are also used for faster computations.
The combination of a simple interface and data products in friendly coordinates is a good demonstration of the possibilities of designing analysis-ready capabilities into a radar processing system. We hope to get significant feedback and improvements from the wider community that can help improve our ability to provide useful observations for many applications.
One of the consequences of the opening of the Landsat data archive as free and open data in 2008 was the possibility to exploit a long time series of national or continental scale high resolution image data. At the same time, computational resources became efficient and attainable, so that this possibility became a real option. Ultimately, cloud computing freed the user from the often insurmountable burden of downloading extremely large amounts of data, and made the required large computing resources accessible and affordable to everyone. However, all users interested in analyzing the data were faced with the need to prepare the data stack such that it fits their analysis, and the need for a cross-sensor harmonized, commonly pre-processed dataset. Activities in the US and Australia eventually led to the concept of Analysis Ready Data, applied e.g. to the Landsat archive (Dwyer et al. 2018), and eventually formalized by CEOS in the ARD PFSs.
ARD comprises the technical steps of data clipping, unusable-data masking, atmospheric correction, pixel alignment, and sensor alignment (Holmes 2018). Metadata at product level as well as at pixel level provide full traceability of the data processing applied (CEOS CARD4L PFS).
The benefit of such standardized preprocessing comes at the expense of increased uncertainty of the radiometric measurements and their geolocation due to the processing applied (uncertainty of the auxiliary data used, such as aerosols or the DEM, methodological errors, the law of error propagation). It also imposes a particular image grid, projection and tiling, as well as a definition of what is considered an unusable pixel. However, these are all product properties which depend on the type of analysis to be carried out. Here lies a weak point of the concept of ARD, namely that it has to assume a certain type of analysis in order to fulfil the ARD requirement to organise data in “a form that allows immediate analysis with a minimum of additional user effort”. For example, the analysis of the water-leaving reflectance has much more stringent uncertainty requirements than many land applications and is very sensitive to any reprojection, resampling and spectral interpolation. However, it is possible to mitigate this risk by generating ARD on demand.
The Copernicus Land Monitoring Service provides the Sentinel-2 Global Mosaic (S2GM) Service, which has the objective of supporting land monitoring and agriculture by providing information on land cover, land use and land use change, cultural heritage sites, ground motion, urban areas, inland water quantity and quality, forests, agriculture and other natural resources, biodiversity and the cryosphere. It shall provide Analysis Ready Data for these applications and the related users. A specific requirement for the S2GM service is to provide a “best pixel approach” in order to deliver consistent spectral information that is most representative of the state of the environment. S2GM is a service and delivers ARD on demand, with configuration of various parameters by the users and cloud-based processing triggered by the user. This has the advantage of providing flexibility to the user, but also the consequence that data products differ according to the users’ options. We will use the example of the S2GM service to demonstrate and discuss the general questions on ARD raised above.
A further step towards flexibility and adaptation to users’ needs for ARD is the xcube open-source Python package and toolkit (https://xcube.readthedocs.io/en/latest/index.html). It has been developed in a generic way to provide Earth observation data in an analysis-ready form to users. xcube achieves this by carefully converting EO data sources into self-contained data cubes that can be published in the cloud. Such a generic approach provides the tools which can be deployed in specific applications to address the challenges of generating ARD. We will use xcube as a second example to address the ARD challenges from a generic viewpoint, and discuss the challenges, possibilities, advantages and also the limitations.
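As a generic illustration of the self-contained, chunked cube idea (using plain xarray/zarr rather than the xcube API itself, so the names and values below are illustrative assumptions), a minimal sketch follows.

    import numpy as np
    import pandas as pd
    import xarray as xr

    # Illustrative reflectance stack (time, y, x) with coordinates attached so the
    # cube is self-describing.
    time = pd.date_range("2021-06-01", periods=10)
    y = np.linspace(60.0, 59.0, 100)
    x = np.linspace(10.0, 11.0, 100)
    data = np.random.rand(10, 100, 100).astype("float32")

    cube = xr.Dataset(
        {"reflectance": (("time", "y", "x"), data)},
        coords={"time": time, "y": y, "x": x},
        attrs={"title": "illustrative analysis-ready cube"},
    )

    # Chunking along time and space makes both time-series and map-style access efficient;
    # the resulting Zarr store can be published to cloud object storage.
    cube.chunk({"time": 5, "y": 50, "x": 50}).to_zarr("example_cube.zarr", mode="w")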
CEOS CARD4L PFS: https://ceos.org/ard/
Dwyer, J.L.; Roy, D.P.; Sauer, B.; Jenkerson, C.B.; Zhang, H.K.; Lymburner, L. Analysis Ready Data: Enabling Analysis of the Landsat Archive. Remote Sens. 2018, 10, 1363. https://doi.org/10.3390/rs10091363
Holmes, Chris, 2018: Analysis Ready Data Defined. Planet Stories https://medium.com/planet-stories/towards-on-demand-analysis-ready-data-f94d6eb226fc
S2GM Service: https://land.copernicus.eu/imagery-in-situ/global-image-mosaics/node/16
Aiming at the convergence between Earth observation (EO) Big Data and Artificial General Intelligence (AGI), where AGI is superset-of (with inheritance) computer vision (CV), this paper identifies an innovative, ambitious but realistic EO optical sensory image-derived semantics-enriched Analysis Ready Data (ARD) product-pair and process gold standard as a linchpin for the success of a new notion of Space Economy 4.0.
In recent years, the notion of EO optical sensory image-derived ARD, which includes quality layers to manage EO optical image uncertainty (vice-versa, veracity), such as Cloud and Cloud-shadow masks, has been promoted by relevant portions of the remote sensing (RS) meta-science community to enable expert and non-expert end-users of space technology to access radiometrically calibrated EO large image databases ready for use in quantitative analytics of scientific quality, without requiring laborious EO image pre-processing for geometric and radiometric enhancement, preliminary to EO image processing (analysis).
The concept of ARD has been strictly coupled with the notion of EO big (raster-based) data cube, proposed by the RS community as innovative midstream EO technology.
Unfortunately, a community-agreed definition of EO big data cube does not exist yet, although several recommendations and implementations have been made. A community-agreed definition of ARD, to be adopted as standard baseline in EO data cube implementations, does not exist either. As a consequence, in common practice, many EO (raster-based) data cube definitions and implementations do not require ARD and, vice versa, an ever-increasing ensemble of new (supposedly better) ARD definitions and/or ARD-specific software implementations is proposed by the RS community, independently of a standardized/harmonized definition of EO big data cube.
Hereafter, a comparison of four existing EO optical image-derived Level-2/ARD product definitions and of four ARD-specific software implementations is proposed (see Table 1 and Figure 1) for critical assessment at the Marr five levels of system understanding, namely: (i) Outcome and process requirements specification. (ii) Information/knowledge representation. (iii) System design (architecture). (iv) Algorithm. (v) Implementation. This original comparison reveals the following.
First, the ASI PRISMA data-derived Level-2 product definition and software implementation appear peculiar, sometimes looser (more relaxed), e.g., no Cloud-shadow quality layer detection is required, unlike the ESA Sentinel-2 Level-2 thematic co-product taxonomy, but sometimes stricter in requirements, e.g., more land cover (LC) classes are investigated at the PRISMA intermediate Level-1 semantic co-product, including LC class Forest and LC class Crop, than in the remaining three Level-2/ARD product definitions. Noteworthy, the US Geological Survey (USGS)-National Aeronautics and Space Administration (NASA) ARD definition is provided with two sensor series-specific software implementations, namely, the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software executable for Landsat-4/-5/-7 imagery and the Landsat Surface Reflectance Code (LaSRC) software executable for Landsat-8 imagery. The USGS-NASA ARD definition coincides with the ARD definition (without software implementation) proposed by the international Committee on Earth Observation Satellites (CEOS) in the ARD for Land Optical Surface Reflectance (CARD4L-OSR) initiative. Moreover, it is less restrictive/specialized than (it is superset-of) the ESA EO optical image-derived Level-2 product definition adopted by the Sentinel 2 (atmospheric, topographic and adjacency) Correction Prototype Processor (Sen2Cor), the sensor-specific Level-2 software implementation developed by ESA for Sentinel-2 imagery, run by ESA and/or distributed by ESA free-of-cost to be run on the user side.
Second, the comparison proposed in Table 1 and Figure 1 shows that none of these four ARD/Level-2 product definitions complies with the Cloud/Not-Cloud = Rest-of-the-world taxonomy proposed to ESA by the Copernicus data Quality Control (CQC) team for a semantic/ontological level of interoperability of EO sensory-data derived Level-1 and Level-2/ARD products.
Third, the comparison proposed in Table 1 and Figure 1 reveals that, among the four existing ARD/Level-2 product definitions, the most severe (constrained) and ambitious (informative) definition is the ESA Sentinel-2 data-specific Level-2 product-pair definition. The ESA Sentinel-2 imaging sensor-specific Level-2 output product-pair definition is peculiar because it is twofold. It consists of one output quantitative/numerical (sub-symbolic) variable co-product stacked (overlapped) with one output data-derived qualitative symbolic (categorical and semantic) variable co-product, referred to by ESA as the Scene Classification Map (SCM).
To overcome limitations of existing EO optical sensory image-derived Level-2/ARD product definitions and software implementations, an innovative semantics-enriched ARD product-pair and process gold standard is proposed to be community-agreed upon. Required to be systematically generated in operational mode at the space segment and/or midstream segment in a new notion of Space Economy 4.0, an innovative multi-sensor EO optical sensory image-derived semantics-enriched ARD co-product pair consists of:
I. An ARD numerical (sub-symbolic and raster-based) co-product, consisting of an EO optical image, either panchromatic (PAN), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS), radiometrically calibrated into a sequence of top-of-atmosphere (TOARF) values, surface reflectance (SURF 1-of-3 to SURF 3-of-3) values corrected from atmospheric, topographic and adjacency effects, and surface albedo values, corrected from bidirectional reflectance distribution function (BRDF) effects, in agreement with the intergovernmental Group on Earth Observations (GEO)-Committee on Earth Observation Satellites (CEOS) Quality Assurance Framework for Earth Observation (QA4EO) Calibration/Validation (Cal/Val) requirements.
This ARD numerical co-product is systematically overlapped (stacked) with:
II. An ARD symbolic (categorical, semantic and vector-based) co-product, referred to as SCM, whose thematic map legend (taxonomy, vocabulary) includes quality layers Cloud and Cloud-shadow, which improves and generalizes the existing well-known ESA EO Sentinel-2 imaging sensor-specific Level-2 SCM co-product.
Noteworthy, both SCM (referred to as land cover) and surface albedo (referred to as albedo) are included in the list of terrestrial Essential Climate Variables (ECVs) defined by the World Climate Organization (WCO), which complies with the intergovernmental GEO second implementation plan for years 2016-2025 of a new Global Earth Observation System of (component) Systems (GEOSS), regarded as expert EO data-derived information and knowledge system, in agreement with the well-known Data-Information-Knowledge-Wisdom (DIKW) hierarchical conceptualization where, typically, information is defined in terms of data, knowledge in terms of information and wisdom in terms of knowledge.
Required to be systematically generated in operational mode at the space segment and/or midstream segment in a new notion of Space Economy 4.0, the proposed innovative semantics-enriched ARD co-product pair definition overcomes the shortcoming of existing ARD definitions and software implementations, which do not include “standardised and informative end user products required by national agencies tasked with coordinating implementation of the United Nations Sustainable Development Goals (SDGs), starting from land cover and its change over time, that contribute to the mapping and reporting on 14 of the 17 SDGs”.
Given all of the above, in [1] and [2], first, an innovative multi-source single-date semantics-enriched EO optical image-derived ARD product-pair requirements specification is proposed. Next, ARD-related CV system solutions are investigated at the Marr five levels of processing system understanding, in compliance with the FAIR criteria for scholarly/scientific digital data and non-data (e.g., analytical pipelines) management and with the GEO-CEOS QA4EO Cal/Val requirements.
In a new notion of Space Economy 4.0, an innovative semantics-enriched ARD product-pair and process baseline, to be instantiated in operational mode at the space segment and/or midstream segment by both public and private EO big data providers, is regarded as necessary-but-not-sufficient “horizontal” (enabling) precondition, eligible for:
(1) Consideration by the RS meta-science community as new reference standard of a Copernicus Harmonized ARD (HARD) information product and process. In recent years, the Copernicus HARD initiative aimed at fostering interoperability and synergy across existing Sentinels data, future Copernicus missions, e.g. Sentinel Next Generation, Copernicus Hyperspectral Imaging Mission (CHIME), Copernicus Land Surface Temperature Monitoring (LSTM), etc., and the increasingly growing heterogeneous EO Big Data provided by the Copernicus Contributing Missions (CCMs).
(2) Transforming existing EO big (raster-based) data cubes at the midstream segment, including the European Commission (EC) Data and Information Access Services (DIAS) implementations, which are typically affected by the so-called data-rich information-poor (DRIP) syndrome, into a new generation of semantics-enabled EO big raster-based numerical data and vector-based categorical (symbolic, semi-symbolic or sub-symbolic) information cube management systems, capable of semantic content-based image storage/retrieval and semantics-enabled information/knowledge discovery at the midstream segment.
(3) Boosting the downstream segment in the development of an ever-increasing ensemble of “vertical” (deep and narrow, specialized) user-specific and domain-dependent semantics-enabled value-adding information products and services (VAPS), suitable for use by a potentially huge worldwide market of institutional and private end-users of space technology.
References
[1] Baraldi, A., Sapia, L. D., Tiede, D., Sudmann, M., Augustin, H. L. and Lang, S. (2021). Innovative Analysis Ready Data (ARD) product and process requirements, software system design, algorithms and implementation at the midstream as necessary-but-not-sufficient precondition of the downstream in a new notion of Space Economy 4.0 - Part 1: Problem Background in Artificial General Intelligence (AGI). Accepted for publication, Big Earth Data, 00, 30-40.
[2] Baraldi, A., Sapia, L. D., Tiede, D., Sudmann, M., Augustin, H. L. and Lang, S. (2021). Innovative Analysis Ready Data (ARD) product and process requirements, software system design, algorithms and implementation at the midstream as necessary-but-not-sufficient precondition of the downstream in a new notion of Space Economy 4.0 - Part 2: Software developments. Accepted for publication, Big Earth Data, 00, 30-40.
The ESA project “Fiducial Reference Measurement for Soil Moisture (FRM4SM)” (May 2021 – May 2023) aims to identify and create standards for independent, fully characterized, accurate and traceable in-situ soil moisture measurements with corresponding independent validation methods and uncertainty estimations towards a maximum Return On Investment (ROI) for a satellite mission. The ultimate goal is to deliver confidence in soil moisture data products for the whole duration of a satellite mission.
Ground reference data from the International Soil Moisture Network (ISMN: https://ismn.earth/en/, Dorigo et al., 2021: https://hess.copernicus.org/articles/25/5749/2021/) provide the in-situ basis for this study, by following, building on and improving the integrated standardized measurement protocols and quality techniques for in-situ soil moisture data. SI traceability, error propagation and data quality are thoroughly examined to better understand the uncertainty estimations supporting Earth observation (EO) satellite mission results.
Within the FRM4SM project, investigations of independent validation methods will focus on ESA's Soil Moisture and Ocean Salinity (SMOS: https://earth.esa.int/eogateway/missions/smos) mission, at the resolution of a passive spaceborne radiometer. Indeed, SMOS is characterized by a footprint of ~43 km on average, which implies that the surface observed by the instrument is very different from that sampled by local-scale in-situ measurements. This project will investigate the spatiotemporal scale mismatch between the SMOS resolution and the in-situ representativeness. Based on the global validation of SMOS against the ISMN, several aspects are studied here, such as the sensitivity of the validation to the in-situ configuration (probe depth, technologies, ...), to the geophysical characteristics of the SMOS footprint (land cover, soil characteristics, ...), and to the sampling strategy (in-situ driven vs. satellite driven).
The Quality Assurance for Soil Moisture (QA4SM: https://qa4sm.eu/) service is an easy-to-use interface for the comparison of satellite soil moisture data against land surface models and in-situ data (ISMN). This service aims to implement all FRM protocols created within the FRM4SM project, from ground measurement to validation methods.
Within this session, we want to introduce ESA's Fiducial Reference Measurements for Soil Moisture (FRM4SM) project, which works on the identification of, and guidelines for, SI-traceable, accurate, high-quality in-situ soil moisture data towards a fully traceable satellite validation service.
We present the results of the external evaluation of the thematic data products (TDP) of the ESA Cryo-TEMPO project. The Cryo-TEMPO study aims to deliver a new paradigm of simplified, harmonised, and agile CryoSat-2 products that are easily accessible to new communities of non-altimeter experts and end users. The study will generate five new TDP, covering sea ice, land ice, polar ocean, coastal ocean and inland water. Throughout the project, the products will be constantly evolved, and validated by a group of thematic users, thus ensuring optimal relevance and impact for the intended target communities. Here we present the results thematic users achieved using the first set of TDP in the first phase of the project. We also present the planned improvements that will be introduced to the set of TDPs in the second phase of Cryo-TEMPO.
Sea Ice:
There are two different use cases for the sea ice TDP: support for winter navigation and sea ice modelling, in particular the assimilation of the sea ice TDP with the help of an adjoint data assimilation system.
In the winter navigation part, estimates for the end of the navigation season were derived from the freeboard estimates in the TDP. In particular, using simple threshold values for Polar Class (PC) 5 and PC 3 vessels, we show that the TDP enables quantifying the lengthening of the navigation season when using a more powerful PC3 vessel instead of a PC5. The study pointed out a need to extend the temporal coverage to include summer. Also, it is clear that not all of the user needs can be addressed using freeboard measurements only; the TDP must rather be used as part of a larger variety of satellite-derived sea ice products.
An adjoint data assimilation system (NAOSIMDAS) has been successfully applied since 2015 for seasonal sea ice predictions in the international framework of the ARCUS SIO (https://www.arcus.org/sipn/sea-ice-outlook/background). Among other sea ice products, a L3 sea ice thickness (SIT) product derived from CryoSat-2 by AWI has been assimilated, which turned out to be the most important sea ice product for improving the seasonal sea ice predictions. Although the assimilation of SIT (in combination with a L3 snow depth (SND) product) about halved the root-mean-square error in the sea ice extent in September - even if the forecast is initialised at the end of May - a comparison of radar freeboard from the sea ice TDP does not show improvements in the simulated radar freeboard (note that, because radar freeboard is not a state variable of the sea ice model, it has to be calculated with the help of an observation operator). In the next phases of Cryo-TEMPO, L1 radar freeboard from the sea ice TDP will be assimilated in combination with L1 snow freeboard (from ICESat-2), and the benefit for seasonal prediction in comparison with the results from L3 SIT and SND assimilation will be evaluated.
Land Ice:
The focus of the land ice user evaluation is the application of observations of ice sheet surface elevation change in ice sheet modelling. Specifically, a Bayesian calibration of an ensemble of ice sheet model simulations was performed for the Amundsen Sea Sector in West Antarctica. The model ensemble, in which various model parameters are perturbed, produces a wide distribution of sea level contributions. By scoring each ensemble member on its ability to reproduce the observed surface elevation change derived from the land ice TDP, while also accounting for the observational uncertainty, the uncertainty in the sea level response can be reduced.
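A minimal sketch of such an ensemble-scoring step, assuming independent Gaussian observational errors per grid cell (the actual calibration will use the full uncertainty characterisation of the land ice TDP; the importance weighting below is only an illustration of the principle).

    import numpy as np

    def calibrate(ensemble_dhdt, obs_dhdt, obs_sigma, sea_level_contrib):
        """Weight ensemble members by their fit to observed elevation change.

        ensemble_dhdt: (n_members, n_cells) modelled surface elevation change
        obs_dhdt, obs_sigma: (n_cells,) observed change and its 1-sigma uncertainty
        sea_level_contrib: (n_members,) sea level contribution of each member
        """
        resid = ensemble_dhdt - obs_dhdt
        loglik = -0.5 * np.sum((resid / obs_sigma) ** 2, axis=1)   # Gaussian log-likelihood
        weights = np.exp(loglik - loglik.max())                    # avoid numerical underflow
        weights /= weights.sum()
        mean = np.sum(weights * sea_level_contrib)                 # calibrated mean response
        std = np.sqrt(np.sum(weights * (sea_level_contrib - mean) ** 2))
        return mean, std, weights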
Polar Oceans:
Analysis of the PO TDP shows a significant improvement in the spatial data coverage in the central Arctic (reduced polar gap) in comparison with the altimeter data provided by the Copernicus Marine Environment Monitoring Service (CMEMS) and other ESA altimeter products. Furthermore, the circulation features in the Arctic are better resolved in the new Cryo-TEMPO data. Due to the lack of in-situ data in the Arctic, for validation we use the TOPAZ ocean reanalysis, which is the Arctic component of the CMEMS. The seasonal and interannual variability of the sea level in the Beaufort Sea (BS) and Eurasian Basin (EB) is studied in detail. High/low pressure anomalies over the BS and EB are found to have a significant impact on the sea level variability of the two regions. The new altimeter products are also used to examine the variability in the spatial extent of the freshwater reservoir of the Arctic, the Beaufort Gyre (BG), during the time period 2011 to 2020.
Coastal Oceans:
The user evaluation of the CO TDP included comparison with in-situ observations: specifically, from tide gauges in the western part of the Mediterranean basin and from the SOCIB HF radar located in the Balearic Sea. Analysis of the sea level anomaly indicates a good consistency between the CO TDP and the tide gauge records. The correlations are in line with those previously reported for Sentinel-3A and Jason-3 (although characterised by a larger root mean square error). The comparison between satellite-based across-track geostrophic currents and the surface currents from the HF radar also shows good consistency. In this case, errors between the two are smaller than previously reported in the same region from an analogous comparison using SARAL/AltiKa altimeter observations.
Inland Water:
The analysis focuses on the evaluation of the water level products based on three retrackers (MLE4, OCOG, TFMRA) over water bodies such as rivers and lakes of different sizes and environments. Specifically, the TUG role is to provide the hydrological validation of the products through the evaluation of the capability of the TDP to fulfil user requirements and to respond adequately to the scientific questions related to 1) flood prediction and forecasting activities, 2) water management demand and supply and 3) climate analysis.
All passive microwave satellite sensors are characterized by their low spatial resolution, with footprints of several hundred square kilometers, and the Soil Moisture and Ocean Salinity (SMOS) satellite is one of them. Quantifying the accuracy of the derived geophysical parameters is challenging due to the difference in spatiotemporal resolution with respect to the in-situ measurements, which are assumed to be the reference. Indeed, the representativeness of an in-situ measurement is limited to a few square meters around the device, with hourly sampling, while the satellite only revisits a given location once every one to several days. Besides, ground measurements also have their own limitations and inaccuracies.
In this context, the ESA project Fiducial Reference Measurement for Soil Moisture (FRM4SM) is investigating, among other activities, the validation strategy of the SMOS soil moisture estimates as well as the assessment of their uncertainties. The objectives of the present study are twofold: 1) to evaluate the SMOS accuracy requirement of 0.04 m3/m3 over specified committed areas and 2) to characterize the uncertainties elsewhere as a function of the geophysical surface conditions.
The method proposed here to fulfill these objectives is to perform a global validation of SMOS and then to relate geophysical descriptors to the validation statistical scores. First, a global validation process is carried out using the International Soil Moisture Network (ISMN) database as reference. This global database gathers data from more than 2800 harmonized soil moisture in-situ stations and is supported by ESA and the Vienna University of Technology (TU Wien). Then, sensitivity analyses are performed to characterize the influence of the probe configuration and the geophysical characteristics of the SMOS footprint. Finally, maps are computed with several geophysical condition thresholds corresponding to specific ranges of uncertainty.
The validation chain used here is composed of three main steps: the masking/filtering of the two databases, the spatiotemporal collocation, and finally the computation of statistical scores to compare the two datasets (R, SDD, bias). In the first step, observations contaminated by Radio Frequency Interference (RFI), i.e. anthropogenic emissions, are filtered out. In the second step, the spatial collocation, the nearest SMOS pixel (DGG node) is attributed to each probe location; concerning the temporal collocation, for each SMOS value the nearest in-situ value is attributed (within a limit of 30 min). Finally, the third step compares the SMOS and in-situ time series through the statistical scores.
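A minimal sketch of the collocation and scoring steps for one station/pixel pair, assuming the SMOS and in-situ records are available as simple tables with a UTC time stamp and a soil moisture column (file and column names are assumptions; the RFI filtering of the first step is omitted).

    import numpy as np
    import pandas as pd

    # One SMOS grid node and its nearest in-situ probe (hypothetical CSV files with
    # columns 'time' and 'sm' in m3/m3).
    smos = pd.read_csv("smos_node.csv", parse_dates=["time"]).sort_values("time")
    insitu = pd.read_csv("ismn_probe.csv", parse_dates=["time"]).sort_values("time")

    # Temporal collocation: nearest in-situ value within 30 minutes of each SMOS overpass.
    pairs = pd.merge_asof(smos, insitu, on="time", direction="nearest",
                          tolerance=pd.Timedelta("30min"),
                          suffixes=("_smos", "_insitu")).dropna()

    # Statistical scores used in the validation chain.
    diff = pairs["sm_smos"] - pairs["sm_insitu"]
    bias = diff.mean()                                             # mean difference
    sdd = diff.std()                                               # standard deviation of differences
    r = np.corrcoef(pairs["sm_smos"], pairs["sm_insitu"])[0, 1]    # Pearson correlation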
The first validation benchmark of SMOS soil moisture considers the whole ISMN database, with more than 5500 probes and 1600 SMOS pixels. Overall, it shows a global agreement between SMOS and the whole ISMN soil moisture database (R=0.462, SDD=0.087 m3/m3 and bias=-0.069 m3/m3) and confirms the consistency of the method. However, these performances were computed without any assumption on the probe depth, technology, or location. Analysing the scores as a function of probe depth shows a worsening of the performance as depth increases. The correlation results show an improvement of more than 0.1 when considering only the probes within the first 10 cm. As a result, the rest of the analysis only considers the probes within the first 10 cm (45% of the whole validation results).
This presentation will show maps that are derived from our analysis to geographically represent a range of expected uncertainties considering specific surface conditions. To derive these maps, SMOS auxiliary databases were used. These auxiliary databases describe the landcover (IGBP classification) and soil texture (SoilGrid) in terms of vegetation, topography, water presence, sand and clay content of the soil, bulk density, etc. Globally, the scores show an improvement of the performance with a minimization of forest, topography, water, and ice in the footprint. Concerning the soil parameters, the scores improve when the footprint is sandier, has less clay, and has a high bulk-density. This strategy can be applied to other datasets, such as SMAP and AMSRE.
Earth Observation (EO) data provide consistent and spatially explicit measurements which have proven to be a highly valuable resource for understanding how plants function and contribute to global biogeochemical cycles when paired with ground-based measurements such as the energy, carbon, and water fluxes measured by eddy covariance towers. Networks of such ground-based measurements (e.g. FLUXNET, Pastorello et al. 2020), where data from hundreds of sites around the world are combined, provide temporal records of ground-based measurements across ecological gradients. However, effective analysis and utilization of paired ecophysiological and EO data requires a high-quality, homogeneous, and coherent dataset, which in the case of EO data is more than simple extraction of the pixels associated with the ground measurements. Here we demonstrate a framework to combine FLUXNET with EO data into ecophysiological data cubes (EDC), where a consistent data structure allows for the rapid and flexible data access needed to be truly Analysis Ready Data, with applications such as the training and evaluation of both machine learning and Earth system models as well as furthering ecophysiological understanding.
An example of the power of EDCs is FLUXCOM (Jung et al. 2020), where the consistent data structure allows for the training of data-driven machine learning models which are able to predict fluxes globally. However, the core of any statistical model based approach is less the underlying machine learning method than the quality of the dataset on which it is trained. The EDCs underlying FLUXCOM apply high standards of quality control and gap-filling procedures, both on the EO and the eddy covariance data. The data are then saved in chunked arrays, which allows for rapid access in both the spatial and temporal dimensions, a prerequisite for the high-frequency reading necessary for truly analysis ready data. The flexible design of the data structures allows for fine control of the spatial matching with ground measurements, where measurement footprints range from meters to kilometers in radius from the eddy covariance measurement tower, in order to maximize both data quality and quantity for every measurement site.
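A minimal sketch of such footprint-aware extraction from a chunked cube, assuming a Zarr store with a reflectance variable on latitude/longitude coordinates (store path, variable name, the fixed box half-width and the simple box average are illustrative assumptions standing in for the actual footprint handling).

    import xarray as xr

    # Open the chunked cube lazily; only the chunks touched below are actually read.
    cube = xr.open_zarr("edc_cube.zarr")

    # Illustrative tower location and box half-width in degrees.
    tower_lat, tower_lon, half_width = 50.95, 13.57, 0.02

    # Select a small window around the tower (assumes latitude stored in decreasing order)
    # and average it, yielding a time series matched to the eddy covariance record.
    window = cube["reflectance"].sel(
        lat=slice(tower_lat + half_width, tower_lat - half_width),
        lon=slice(tower_lon - half_width, tower_lon + half_width),
    )
    site_series = window.mean(dim=["lat", "lon"]).compute()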
While current EDCs have utilized mostly MODIS reflectance and land surface temperature, the next generation will benefit from the Sentinel missions, both for improved and higher spatial resolution optical and thermal data, including measurements in the red edge spectral region, and for new data streams such as solar-induced fluorescence which are physiologically informative. As part of the Sen4GPP project, we demonstrate the potential of Sentinel-based EDCs and how they can improve the next generation of EO-driven energy, carbon, and water fluxes. Going forward, these existing data frameworks can be applied to other ground-based measurement networks that have developed in recent years, such as the tree sap flow network SAPFLUXNET.
Jung, M. et al. (2020) ‘Scaling carbon fluxes from eddy covariance sites to globe: synthesis and evaluation of the FLUXCOM approach’, Biogeosciences, doi:10.5194/bg-17-1343-2020
Pastorello, G. et al. (2020) ‘The FLUXNET2015 dataset and the ONEFlux processing pipeline for eddy covariance data’, Scientific Data, 7, 225. doi:10.1038/s41597-020-0534-3
otbSARCalibration is an important module of the OTB framework that calibrates SAR images. However, the available calibration modes all use an ellipsoid as reference, which is not optimal for obtaining correct scene backscatter values. Some methods, like NORLIM, take a DEM as reference but only use the incidence angle as a multiplicative factor to obtain a correct radiometric distribution in the calibrated image. The optimal approach is to use image normalization to correct the natively calibrated images. This abstract describes an implementation of David Small’s publication “Flattening Gamma: Radiometric Terrain Correction for SAR Imagery” (2011) within the OTB framework. The resulting software is open source, is compatible with any kind of SAR image (GRD or SLC) and can use any kind of DEM (SRTM, Copernicus, etc.). The implemented method even goes further than the publication, since the DEM resolution can be lower than that of the SAR image. The implementation is a full processing chain written in Python that can be launched from any step. Some parts are inspired by DiapOTB, the OTB open-source software dedicated to differential interferometry applications. All algorithms use OTB’s multithreading and streaming capabilities.
First, a VRT is built by concatenating all the DEM tiles that intersect the input SAR image to be corrected. The VRT is resampled at least by a factor of 2 to compensate for the natural loss of resolution when a facet is used as the DEM primitive. Second, the correspondences between map geometry and range-Doppler geometry are computed using the GCPs included in the SAR product and the RPC projection model. Then, the Gamma Area image (Aɣ) is computed using one of two modes. The “normal mode” finds the areas of each output SAR image line by computing the intersection with each DEM facet; this implementation produces a dense output area image whatever the DEM resolution. The value of a pixel area is deduced by normalization with the facet’s number of pixels in the SAR image. The “alternate mode” projects each DEM facet centre directly onto the output image; this mode is optimal for high-resolution DEMs in terms of both computation and output image density. In that mode the shadows are pre-computed, whereas they are deduced on the fly in the normal mode. In all cases, the normal of the slant-range plane used for area computation is deduced by computing the mean Cartesian coordinate of the surface points seen by each pixel. The bilinear distribution of the pixel area is extended to the contour of the projected facet pixels. Every facet Gamma Area sharing the same pixel is accumulated by means of integration. Depending on the selected output image resolution, multilooking factors can be applied at the appropriate steps. The corrected Gamma Naught RTC image (ɣ0RTC) is then obtained by normalization of the native calibrated image (β0) with the Gamma Area image (Aɣ).
Finally, the produced images are resampled to the input SAR image resolution before being ortho-rectified to UTM coordinates in the Sentinel-2 geometry. This last operation opens the possibility to calibrate S2 images with the Gamma Naught RTC correction. For validation, ground-truth SNAP-based images from the ASF data center at https://search.asf.alaska.edu are used. Using Copernicus GLO-30 on SAR images over Congo, the Pyrénées, Utah or Michigan, the processing time ranges from 5 to 25 min on a 40-core server.
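As a minimal sketch of the final normalisation step described above (not the OTB-based implementation, which works on streamed tiles), the following numpy function divides the native calibrated image β0 by the simulated Gamma Area image Aɣ, masking pixels with negligible illuminated area such as shadow; the array names and the threshold are illustrative assumptions.

import numpy as np

def gamma0_rtc(beta0: np.ndarray, gamma_area: np.ndarray,
               min_area: float = 1e-6) -> np.ndarray:
    """Terrain-flattened gamma naught; pixels with negligible area set to NaN."""
    out = np.full_like(beta0, np.nan, dtype=np.float32)
    valid = gamma_area > min_area          # shadowed pixels have a tiny simulated area
    out[valid] = beta0[valid] / gamma_area[valid]
    return out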
In the short term, the correction will be integrated into the S1Tiling software to pre-process images before any subsequent scientific analysis.
Index Terms – Gamma Area Normalization, open source, DiapOTB, Gamma Naught RTC, S1Tiling
Earth Observation (EO) from space allows us to create global Soil Moisture (SM) records to understand the effect of changes in global water availability on the environment. Many different EO satellites and retrieval models (for example, ESA’s SMOS mission was specifically designed to measure SM over land) collect large amounts of data every day. Therefore, the need for rigorous, automated quality-assessment procedures stands out among the requirements of both developers and users of EO SM data. While validation standards have been agreed upon in several scientific publications, their application often varies between independent studies, for instance in terms of the chosen reference measurements, the validation metrics and the presentation of the results. Together with the complexity of processing large data volumes for global validation efforts, this calls for unified tools to perform this task and provide standardized quality assessments. The Quality Assurance for Soil Moisture (QA4SM, qa4sm.eu) platform bridges the gap between Analysis Ready Data (ARD) production and validation.
QA4SM is built on a powerful computing environment, providing a virtual space where users can freely perform validations of the SM data included in the database. The available data range from single satellite missions (e.g. SMOS, SMAP) and state-of-the-art multi-sensor products (e.g. the European Space Agency Climate Change Initiative, ESA CCI SM) to model or reanalysis data sets (e.g. ERA5). In addition, in-situ SM measurements provided by the International Soil Moisture Network (ISMN, ismn.earth) are included and regularly updated to allow global comparisons against up to five decades of continuous ground measurements. Every validation is personalized by the user through a simple interface to apply spatial and temporal constraints or to make use of advanced validation techniques, such as the characterization of a data set’s random error through Triple Collocation Analysis (TCA). All methods included in the core algorithm of QA4SM are based on the best practices and requirements agreed upon by the Global Climate Observing System and the Committee on Earth Observation Satellites. The outcome of each validation, including graphical outputs and validation metric scores, can be stored, further processed or included in scientific studies and reports thanks to the use of traceable Digital Object Identifiers (DOIs).
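To illustrate the Triple Collocation idea mentioned above, the following is a minimal covariance-based sketch of TCA error-variance estimation for three collocated, temporally matched soil moisture series; it is an illustrative implementation, not the QA4SM code, and assumes the standard TC error model with mutually independent errors.

import numpy as np

def tc_error_variances(x, y, z):
    """Error variances of three collocated series under the classical TC assumptions."""
    c = np.cov(np.vstack([x, y, z]))  # 3x3 sample covariance matrix
    var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return var_x, var_y, var_z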
Recently, QA4SM has become involved in the ESA-commissioned Fiducial Reference Measurements for Soil Moisture (FRM4SM) project. The goal is to verify and adapt newly designed FRM protocols and to make use of FRM-compliant reference data to better characterize errors in satellite SM measurements with QA4SM. Platform developments include product intercomparisons, deeper-level analysis of the results (e.g. using land cover and climate classifications) and more precise uncertainty estimation. However, in the interest of meeting the Analysis Ready (AR) objectives of operability and immediacy, the most anticipated capability will be the possibility for users to validate their own data sets. While ARD become the new normal in EO, QA4SM aims to become the reference AR tool for validation across the user and producer communities.
Here, we demonstrate how the QA4SM online validation platform operates, what new data and features were recently added and give an outlook on planned developments.
QA4SM was created with the support of the Austrian Space Application Program. Since 2021, the service development has also been supported by the European Space Agency under the FRM4SM project.
Over the last years, deep learning has become an important component of the Earth observation toolset, with the convolutional neural network being the most widely used deep learning model in Earth observation. The supervised optimisation of neural networks relies on large datasets, which are necessary to predict on complex data and to train models that are transferable in time and space. In contrast to the efficient processing of large data archives by trained neural networks stands their need for large training datasets, which are labour-intensive to create. Another drawback is that only those research questions can be investigated for which enough data are available to build datasets large enough to train a deep learning model. In order to solve the problem of labour-intensive data annotation and a potential lack of raw data, we have developed SyntEO, an approach to synthetically generate Earth observation data and the corresponding labels simultaneously. This approach specifically addresses the needs of Earth observation data and composes a remote sensing scene with a harmonised spatial and temporal order of nested entities.
SyntEO uses an ontology formulated by domain experts to make their knowledge explicit and machine-readable. Building on this ontology, an artificial data generator composes an abstract scene description, which is then used to generate the synthetic remote sensing scene by adding texture and to derive the corresponding label.
To give an intuitive introduction to SyntEO, we demonstrate the detection of offshore wind farms and their components using deep learning models that are trained only with synthetic data generated with the new approach.
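The following toy sketch (not the actual SyntEO generator) conveys the intuition: an abstract composition, here a regular grid of turbine positions, is rendered into a speckle-like amplitude image while the pixel labels are derived from the same composition; all sizes, intensities and the gamma-distributed clutter model are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
H = W = 256
scene = rng.gamma(shape=1.0, scale=0.05, size=(H, W))   # speckled sea-clutter background
labels = np.zeros((H, W), dtype=np.uint8)               # 0 = sea

# Abstract composition: turbine positions on a regular grid with small jitter.
for row in range(60, 200, 40):
    for col in range(60, 200, 40):
        r = row + rng.integers(-3, 4)
        c = col + rng.integers(-3, 4)
        scene[r - 1:r + 2, c - 1:c + 2] += 1.0           # bright point target
        labels[r - 1:r + 2, c - 1:c + 2] = 1             # 1 = turbine

Image and label are produced from the same scene description, which is the core idea of generating data and annotation simultaneously.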
The resulting deep learning models detect offshore wind farms as well as single offshore wind turbines in real-world remote sensing imagery. The underlying data are IW GRD acquisitions of the Sentinel-1 mission in VH polarisation, which lie within a distance of 200 km from the coastline towards the sea. The trained models are used to detect offshore wind farms and turbines along the entire global coastline at a quarterly frequency between 2016 and 2021. The results are validated by assessing their performance on a hand-labelled ground truth dataset which includes all offshore wind turbines in the North Sea Basin and the East China Sea.
Here we present a comprehensive database of atmospheric profiles and surface variables of relevance for Land Surface Temperature (LST) models using Thermal Infrared (TIR) observations. The database was built from the European Centre for Medium-Range Weather Forecasts (ECMWF) fifth-generation reanalysis (ERA5) dataset. We consider the use of reanalysis data a suitable strategy to build a calibration database for LST algorithms/models, since it combines large amounts of historical observations with the most advanced modelling and data assimilation systems. Moreover, reanalyses provide a large set of surface and profile variables that are consistent with each other and available at full spatial and temporal coverage. ERA5 includes atmospheric profiles on 137 levels, allowing a very detailed representation of the atmospheric conditions near the surface. This is of high importance for LST retrieval schemes based on TIR channels, since most of the TIR signal originates from the lower troposphere.
This calibration database is built by sampling atmospheric profiles of specific humidity and temperature from the ERA5 dataset, using a dissimilarity criterion developed by Chevallier et al. (2000) for the TIGR databases. Other ERA5 variables corresponding to the selected profiles that are relevant to the LST are also included in the database, namely profiles of ozone, 2-meter temperature (t2m), surface pressure, skin temperature (Tskin), total column water vapour (TCWV) and total cloud cover (TCC).
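As a hedged sketch of the sampling idea (the exact Chevallier et al. (2000) dissimilarity criterion is not reproduced here), the following greedy routine keeps a candidate profile only if it is sufficiently dissimilar from all previously selected profiles, using a generic RMS profile distance as a stand-in.

import numpy as np

def select_profiles(profiles: np.ndarray, threshold: float) -> list:
    """profiles: (n_profiles, n_levels) array; returns indices of retained profiles."""
    selected = [0]
    for i in range(1, len(profiles)):
        # RMS difference between candidate i and every already selected profile.
        d = np.sqrt(np.mean((profiles[selected] - profiles[i]) ** 2, axis=1))
        if d.min() > threshold:          # keep only sufficiently dissimilar profiles
            selected.append(i)
    return selected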
Despite the great advances in surface modelling in the last decades, modelled Tskin still has significant errors. In particular, several authors have pointed out a systematic underestimation of Tskin in reanalysis datasets (Johannsen et al., 2019; Trigo et al., 2015). Tskin estimates should therefore be used with care in the context of algorithm/model calibration, since the errors, in particular the systematic ones leading to an undersampling of the actual Tskin distribution, will propagate to the LST calibration process and could significantly reduce the quality of the algorithm/model. To reduce the impact of such errors on the database, we complement the ERA5 surface information with LST and emissivity estimates from satellites. To reduce the impact of sensor-specific biases and increase spatial and temporal coverage, we consider various LST products. Our strategy is to define an acceptable range of LST values given the atmospheric conditions, taking the Tskin estimates as a baseline. This process is not intended to provide the actual value of Tskin corresponding to each profile, but to provide a realistic range of Tskin for the given atmospheric conditions, while increasing the representativeness of the database. Similarly, for the emissivity we take realistic ranges of values based on satellite products obtained for each land cover type.
This work was carried out within the framework of the Satellite Application Facility on Land Surface Analysis (LSA-SAF) with the purpose of creating a training database for the development of LST retrieval algorithms for the next generation of satellites from the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT), the Metop Second Generation and the Meteosat Third Generation. We will also show some applications of the dataset to the development of LST retrieval algorithms/models in the context of the LSA-SAF.
In-situ data is essential for the training and validation of satellite-based crop type and cropland classification algorithms, as well as the validation of the derived maps. Over the last decade, researchers, donors, and governments have emphasized the necessity of FAIR data sharing, standardization, and collaborative re-use of in-situ data for crop monitoring.
Despite this, there is currently no global in-situ crop data repository for agricultural mapping in place. This is mainly due to the difficulty of discovering, gathering, managing, and harmonizing data from many different sources. Furthermore, ethical, legal, and consent-related limits on sharing are a prevalent problem for transnational research projects. Within the European Space Agency (ESA) funded WorldCereal project, we address these problems by building a community-based, open in-situ data repository at global extent. The following activities have been performed to achieve this goal: 1) discover data-sharing institutions across the world and compile an overview of existing reference data, including ground truth observations, classified maps and parcel registrations; 2) develop data curation and harmonization steps for these heterogeneous reference data; 3) assess their quality and fitness for use; 4) provide standardized access to a metadata catalogue and to the reference data itself; 5) build trust and long-term relationships with the data-sharing institutions.
Our repository contains data from different sources such as the GEOGLAM-JECAM sites (Group on Earth Observations Global Agricultural Monitoring Initiative, Joint Experiment for Crop Assessment and Monitoring), the International Institute for Applied Systems Analysis (IIASA) citizen science platforms (LACO-Wiki and GEO-Wiki), the Radiant Earth ML Hub, the Future Harvest (CGIAR) centers, the National Aeronautics and Space Administration Food Security and Agriculture Program (NASA Harvest), as well as individual project contributions. These data were collected, harmonized, and annotated, and include information from 2017 onwards. To support correct and wider use, we developed quality scores to assess the spatial, temporal and thematic quality of the data sets. These quality scores can be used, for example, to fine-tune the selection of reference data for training, or as additional feature data in the classification algorithms.
Although the repository potentially holds millions of data points (e.g. from parcel registrations and classified maps), around 170,000 observations have standardized metadata, of which a large share is available to the public. All public data are made available according to the data owner’s license, and owners are acknowledged by providing the proper citation. The WorldCereal reference data repository supports the calibration of image classification deep learning algorithms and the validation of Earth Observation products, such as global cropland extent and maize and wheat maps. We aim to expand the repository by engaging the sharing community through joint data descriptor manuscript publications, prime access to data for contributors, and a user-friendly upload service. The sustainability of this repository is being discussed within the GEOGLAM in-situ data working group.
One of the main limitations that any deep learning practitioner faces nowadays when working with AI algorithms for EO applications is the lack of (labelled) data. On the one hand, some data sources can be difficult to gather due to their availability, accessibility, complexity and/or high cost. On the other hand, even if data are easily accessible, the number of available annotations is usually small. Manually annotating satellite images is a very time-consuming and expensive task, especially when considering large areas with a diverse label ontology, usually requiring domain experts, additional data sources (which may pose their own challenges) or even in-situ data gathering campaigns. Once such datasets are built, however, new applications arise with the potential of unlocking EO data and new business opportunities.
In order to overcome this barrier and reduce the difficulty of creating quality datasets, user-friendly and efficient labelling tools are key. They should be flexible enough to enable new tasks that go beyond the typical use cases in computer vision (i.e. image classification, object detection and segmentation). Examples of such tasks include depth estimation, road extraction, 3D reconstruction of urban areas from satellite imagery, and many more. The main features of such a tool should include data exploration and access, dataset creation and continuous improvement with versioning, flexible annotation options, quality assurance and discoverability tools.
In this session we will present SCAN, our Satellite Collaborative ANnotation Tool. Developed by EarthPulse, and used internally to build the datasets to train the neural networks that power our products and services, SCAN is a flexible annotation tool with AI-assisted capabilities. This means that the tool can use AI models to suggest labels for new samples instead of annotating images from scratch, reducing the time required to annotate a scene. The tool unlocks the power of Active Learning to continuously improve datasets and models at the same time. The internal use of the tool consistently demonstrates improvements of 50-75% in the time and cost required to build a quality dataset for a desired task.
During the oral presentation, access for the first external users through the ESA Network of Resources (NoR) will be announced. An open version will be accessible for EO and AI professionals to test and start building their own datasets. They will be able to:
1. Access data locally or remotely (stored in cloud buckets).
2. Create and manage datasets.
3. Create new labels.
4. Annotate images with drawing tools.
5. Add additional data sources to enable easier labelling.
6. Version datasets and download them for training neural networks.
7. Experiment with AI assisted capabilities.
A preliminary look into the tool can be found at https://youtu.be/0uXUmZnlUgs and its integration with the rest of the EarthPulse ecosystem at https://youtu.be/lueRlaXy2tM.
Deep learning has led to disruptive changes in numerous fields, including Earth observation (Zhu et al. (2017); Ma et al. (2019)). However, a major issue with deep supervised learning is the need for large annotated training datasets. Such annotation is often done manually, leading to time-consuming tasks and preventing scalability. Furthermore, in the specific EO context, it should be conducted by an expert of the studied scene and/or of the data used (e.g. 3D point clouds, Synthetic Aperture Radar, hyperspectral imagery, etc.). On top of data complexity, the task itself may also be complex, especially when dealing with temporal data that requires the annotation to be done relative to each date. To alleviate the need for manually labelled training data, an appealing solution consists in relying on simulated data. In this study, we explore such a strategy in the context of a change detection task in 3D point clouds, a challenging task offering many applications in the monitoring of artificial and natural environments.
In order to assess the behaviour of a deep network trained in such a configuration with few labels on the real scene, we conduct the following experimental study. First, we train the network from scratch considering training sets of various sizes. We then compare the results with those provided by a network pre-trained on a simulated dataset and fine-tuned on the real data. The simulated data are generated by extending Urb3DCD, our 3D point cloud simulator for change detection over urban areas (de Gélis et al., 2021b), to include vegetation and mobile objects, leading to more realistic simulations. The real data are taken from the Actueel Hoogtebestand Nederland (AHN) dataset containing several LiDAR acquisitions over the Netherlands (Sande et al. (2010)). We use the mono-date labels available in the third and fourth versions of the AHN dataset to automatically extract a change annotation. The network architecture considered in our study is the Siamese KPConv network (de Gélis et al., 2021a), which was recently introduced to deal with change detection and characterization from raw 3D point clouds. It builds upon two recent frameworks. The first is a Siamese architecture, with two branches sharing the same encoder that extracts features from each input given separately to each branch; the original Siamese network was successfully applied to 2D change detection in EO (Daudt et al. (2018)). The second is the Kernel Point Convolution (KPConv), which adapts the convolution operation to 3D point clouds by selecting kernel points, i.e. points embedded in the specific neighbourhood of each convolution operation (Thomas et al. (2019)), achieving high-quality results for the semantic segmentation of 3D point clouds. In Siamese KPConv, data are given to the network in the form of vertically oriented cylinders. While in standard settings cylinders are randomly selected into the training set according to a weight set on the class distribution (since change detection classes are largely unbalanced), here we consider training from scratch with a fixed number of cylinders, from only 2 to 100. To assess the value of using simulated data, we fine-tune the network on the same selected cylinders as used for training from scratch. Note that the change classes of the simulated and real data are not exactly the same, and we specifically use the low-density LiDAR Urb3DCD sub-dataset.
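The sketch below (generic PyTorch, not the actual Siamese KPConv code) illustrates only the Siamese principle described above: both epochs pass through the same encoder with shared weights and the change class is predicted from the feature difference; a plain per-point MLP replaces the KPConv layers to keep the example self-contained.

import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    def __init__(self, in_dim=3, feat_dim=64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(      # one encoder, shared by both epochs
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, points_t1, points_t2):
        f1 = self.encoder(points_t1)       # features of epoch 1
        f2 = self.encoder(points_t2)       # features of epoch 2 (same weights)
        return self.head(f2 - f1)          # classify the feature difference

logits = SiameseChangeNet()(torch.randn(1024, 3), torch.randn(1024, 3))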
Experiments show that simulated data are indeed relevant to ease the training of a deep network. Fine-tuning a Siamese KPConv pre-trained on simulated data requires only half of the real training data to reach the same level of performance as a network trained from scratch. Adding an unsupervised domain adaptation capability would further allow us to avoid the need for real training data.
Daudt, R.C., Le Saux, B., Boulch, A., 2018. Fully convolutional siamese networks for change detection, in: 2018 25th IEEE International Conference on Image Processing (ICIP), IEEE. pp. 4063–4067.
de Gélis, I., Lefèvre, S., Corpetti, T., 2021a. 3D urban change detection with point cloud siamese networks. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 43, 879–886.
de Gélis, I., Lefèvre, S., Corpetti, T., 2021b. Change detection in urban point clouds: An experimental comparison with simulated 3d datasets. Remote Sensing 13, 2629.
Ma, L., Liu, Y., Zhang, X., Ye, Y., Yin, G., Johnson, B.A., 2019. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS journal of photogrammetry and remote sensing 152, 166–177.
Sande, C.V.D., Soudarissanane, S., Khoshelham, K., 2010. Assessment of relative accuracy of AHN-2 laser scanning data using planar features. Sensors 10, 8198–8214.
Thomas, H., Qi, C.R., Deschaud, J.E., Marcotegui, B., Goulette, F., Guibas, L.J., 2019. KPConv: Flexible and deformable convolution for point clouds, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6411–6420.
Zhu, X.X., Tuia, D., Mou, L., Xia, G.S., Zhang, L., Xu, F., Fraundorfer, F., 2017. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine 5, 8–36.
The current changing paradigm for space-based Earth Observations (EO), characterised by frequent and systematic satellite image acquisitions of the Earth’s surface and atmosphere, along with full, free, and open access to high-resolution data, has opened up an exciting new area for the exploration of satellite imagery to the benefit of many EO applications and all levels of society. In particular, the unique, global and systematic data acquisition of the Copernicus programme Sentinel satellites enables the build-up of long time series and large data archives, which, combined with the digital revolution driven by Big Data Analytics and Artificial Intelligence (AI), offer an unprecedented opportunity for stimulating widespread scientific research in many domains, including non-traditional fields.
In this context, the subject of unidentified aerial phenomena (UAP) has been of considerable interest throughout the world for over half a century, especially with the prospect that unexplained UAP events would lead to new science questions to explore. Governmental projects such as the French space agency unit GEIPAN-CNES, armed forces investigations like “Project Blue Book” by the United States Air Force or “Operation Saucer” by the Brazilian Air Force, and several citizens’ organisations and scientists, have collected, analysed and archived many UAP reports. However, despite having assembled a wealth of anecdotal accounts, such undertakings have relied on witness testimony for almost all of their data. It is obvious that a rational approach to UAP research requires better collection and analysis of new scientific data with the finest instruments, computers and analytic techniques in a controlled set up.
The report from the Office of the Director of National Intelligence (ODNI) to the US Congress in June 2021, entitled “Preliminary Assessment: Unidentified Aerial Phenomena”, underlined the scientific importance of the UAP phenomena. The ODNI analysis highlighted that the vast majority of the 144 documented UAP sightings by military pilots remained unexplained and probably represented physical objects given that 80 observations were registered across multiple sensors. Furthermore, some UAP appeared to demonstrate advanced technology since 21 events included unusual movement patterns or flight characteristics such as traveling at considerable speed without discernible means of propulsion. The report also recommended a continued study with improvements in data collection and analysis.
For this reason, the quickly evolving scientific and technological environment in the EO sector could represent a decisive opportunity for a scientific experiment to gather objective information on UAP. The fact that the National Aeronautics and Space Administration’s (NASA) newest administrator has publicly stated that the agency is officially joining the effort to better understand UAP is an example of how scientific and technological agencies have begun to address the UAP problem. Therefore, the 2022 Living Planet Symposium provides the opportunity to present and discuss such a potential topic of research, along with the necessary use of AI techniques to analyse large volumes of EO data sets in search of UAP-like phenomena. Rapid advances in modern machine learning algorithms, and in particular methods of computer vision and pattern detection, provide us for the first time with modern toolsets for regular monitoring of the Earth for UAP phenomena and for a measurement of their extent and characteristics. UAP have mostly been studied locally, but the EO data sets from space, with high spatial resolution and temporal frequency, now provide the unprecedented possibility to measure them on a global scale.
To set the stage, we will discuss a potential application, using an image product acquired by the Sentinel-2 (S-2) Multi-Spectral Instrument of the Copernicus Programme. We first give an overview of some unique characteristics exhibited by the UAP phenomenon, as underlined in the ODNI assessment, that could serve as criteria for searching for UAP in satellite imagery databases. Secondly, using an S-2 image composite, we show that the features of the S-2 sensor would theoretically provide information on some of these characteristics, enabling the estimation of the size, speed and altitude of a moving aerial object appearing in an image. Comparing an object’s attributes with UAP’s unique observables would enable the extraction of aerial anomalies from EO imagery databases. Finally, we describe some possible spatial and temporal methodologies aimed at facilitating the mining of EO satellite data, such as the development of a neural network that could recognize man-made objects (e.g. planes, helicopters, weather balloons), or directly combining citizens’ UAP observations with satellite image data, thereby making such a project a citizen science/professional collaboration. Other potential scientific benefits could be associated with such research. For example, such techniques could result in detecting and better characterising other known phenomena, including small cometary objects and the like impacting the upper atmosphere, and possibly improve the tracking and characterisation of space re-entries.
Machine learning algorithms have demonstrated impressive results for land cover mapping from hyperspectral data. Their generalization capabilities, however, rely in part on high-quality training datasets. While labelling remote sensing data is usually very expensive, Active Learning (AL) methods guide the annotation of the training dataset by querying the most informative samples. Starting with a small manually labelled dataset, new data points are iteratively queried from the unlabelled dataset in order to maximize some acquisition function. The choice of the acquisition function is what really distinguishes AL methods, and we can cast them into three main categories: uncertainty-based AL, representativeness-based AL and performance-based AL.
Here we review state-of-the-art active learning methods and bring them under the same framework. We benchmark the Breaking Tie [1], BALD [2], Batch-BALD [3], VAAL [4], Coreset [5], Hierarchical [6] and LAL [7] heuristics on various hyperspectral images. We pinpoint the limitations of those methods in a real use case and introduce a pre-processing routine to handle large datasets. The methods’ performance is assessed with respect to the usual accuracy metrics as well as complementary metrics that allow us to provide guidelines for choosing a relevant strategy in an operational context.
From our experiments, the representativeness method Coreset [5] and the inter-class uncertainty heuristic Breaking Tie [1] stand out from the other methods by yielding a quick increase in accuracy metrics. Nevertheless, the BALD [2] epistemic uncertainty heuristic as well as the representativeness paradigm introduced by [4] demonstrated interesting properties in a real-case scenario, and a combination of these methods could be considered.
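For illustration, a minimal sketch of the Breaking Tie heuristic [1] used in the benchmark: unlabelled samples are ranked by the margin between their two highest predicted class probabilities and the smallest margins are queried; this is a generic implementation, not the released toolbox.

import numpy as np

def breaking_tie_query(probs: np.ndarray, n_query: int) -> np.ndarray:
    """probs: (n_samples, n_classes) predicted probabilities; returns indices to label."""
    sorted_p = np.sort(probs, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]   # best minus second-best probability
    return np.argsort(margin)[:n_query]          # smallest margins = most ambiguous samples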
Finally, we release a toolbox to conduct AL experiments and to ease data annotation.
[1] Tong Luo, K. Kramer, S. Samson, A. Remsen, D. B. Goldgof, L. O. Hall, and T. Hopkins, “Active learning to recognize multiple types of plankton,” in Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004., vol. 3, 2004, pp. 478–481 Vol.3.
[2] N. Houlsby, F. Huszár, Z. Ghahramani, and M. Lengyel, “Bayesian active learning for classification and preference learning,” 2011.
[3] A. Kirsch, J. van Amersfoort, and Y. Gal, “Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning,” 2019.
[4] S. Sinha, S. Ebrahimi, and T. Darrell, “Variational adversarial active learning,” 2019.
[5] O. Sener and S. Savarese, “Active learning for convolutional neural networks: A core-set approach,” 2018.
[6] S. Dasgupta and D. Hsu, “Hierarchical sampling for active learning,” in Proceedings of the 25th international conference on Machine learning, 2008, pp. 208–215.
[7] K. Konyushkova, R. Sznitman, and P. Fua, “Learning active learning from data,” arXiv preprint arXiv:1703.03365, 2017.
The Copernicus Sentinel-2 mission operated by the European Space Agency (ESA) has provided comprehensive and continuous multi-spectral observations of the Earth’s entire land surface since mid-2015. Clouds and cloud shadows significantly decrease the usability of optical satellite data, especially in agricultural applications.
To efficiently use Sentinel-2 data for the various types of analyses, an accurate and reliable cloud mask is needed. However, the existing free and open cloud masks are not accurate enough to be used in fully automatic processing chains: areas covered with clouds and cloud shadows are often misclassified as clear.
Therefore, the aim of this project was to develop the most accurate free and open AI based cloud mask for Sentinel-2 starting from the Northern European terrestrial summer season conditions.
In recent years, image segmentation techniques have developed rapidly by exploiting the capabilities of neural networks. In this perspective, the KappaMask processor, using a U-Net architecture, was developed to generate a classification mask over northern latitudes with the following classes: clear, cloud shadow, semi-transparent cloud (thin clouds), cloud, invalid.
For training, a Sentinel-2 dataset covering Northern European terrestrial summer conditions was labelled with focus on April to October months. KappaMask provides a 10 m classification mask for Sentinel-2 Level-2A (L2A) and Level-1C (L1C) products.
The model was compared with Sen2Cor, Fmask, MAJA and S2cloudless on the test dataset. It outperformed all of them, yielding a dice coefficient of 80% for KappaMask L2A and 76% for KappaMask L1C. On the same test dataset, Sen2Cor, MAJA and Fmask reached dice coefficients of 59%, 40% and 61%, respectively. The closest machine learning open-source cloud classification mask, S2cloudless, achieved a 63% dice coefficient while providing only cloud and clear classes; KappaMask L2A, with a more complex classification scheme, outperformed S2cloudless by 17 percentage points.
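For reference, a minimal sketch of how a per-class dice coefficient such as those reported above can be computed from a predicted and a reference classification mask; this illustrates the metric itself, not the exact KappaMask evaluation code.

import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, cls: int) -> float:
    """Dice score for one class, given integer-labelled prediction and reference masks."""
    p = pred == cls
    r = ref == cls
    intersection = np.logical_and(p, r).sum()
    denom = p.sum() + r.sum()
    return 2.0 * intersection / denom if denom > 0 else float("nan")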
The KappaMask architecture and accuracy assessment results are presented in the research article [1]. Moreover, the labelled dataset, which consists of 4403 labelled 512 x 512 pixel subscenes from 155 S2 L1C products with 10 m resolution distributed over the Northern European terrestrial area [2], is openly accessible on the Zenodo platform.
As the next step, KappaMask will be extended to global coverage and support for all seasons. To achieve this goal, we are going to further develop the model and extend our existing open-source labelled dataset “Sentinel-2 KappaZeta Cloud and Cloud Shadow Masks” on the Zenodo platform. The updated model architecture, reference data set and global accuracy assessment results will be presented.
References
[1] Domnich, M.; Sünter, I.; Trofimov, H.; Wold, O.; Harun, F.; Kostiukhin, A.; Järveoja, M.; Veske, M.; Tamm, T.; Voormansik, K.; Olesk, A.; Boccia, V.; Longepe, N.; Cadau, E.G. KappaMask: AI-Based Cloudmask Processor for Sentinel-2. Remote Sens. 2021, 13, 4100.
https://doi.org/10.3390/rs13204100
[2] “Sentinel-2 KappaZeta Cloud and Cloud Shadow Masks”
https://zenodo.org/record/5095024#.YSTOSo4zaUk
Since the emergence of deep learning and its constant use by the remote sensing community, one of the major problems encountered has been the low availability of datasets, whether for semantic segmentation, scene classification or object detection applications. Modern deep learning models require a large amount of data to achieve good generalization performance. Moreover, the objects contained in remote sensing data are much more difficult to extract and interpret than in classical photographs, and datasets derived from multi-source satellite imagery are still rare despite the interest of researchers in data fusion, especially of optical and radar imagery. In this context, the Sentinel-GE dataset has been produced, both for scene classification and semantic segmentation. Sentinel-GE consists of pairs of Sentinel-2 and Sentinel-1 patches covering the Grand-Est region with a spatial resolution of 10 meters. Sentinel-2 satellite data are downloaded from the Theia/Muscate database (https://www.theia-land.fr/) and 10 spectral bands are available for each sensor. Sentinel-1 satellite data are downloaded and preprocessed using the s1tiling processing chain developed by CNES. The retained products are acquired in Ground Range Detected (GRD) and Interferometric Wide (IW) swath mode with dual VV+VH polarization. Each patch is 256 x 256 pixels, and the reference data, BDOCSGE2©GeoGrandEst, produced by the Grand-Est region in France, are resampled and reworked to be consistent with the satellite spatial resolution. BDOCSGE2©GeoGrandEst is produced by visual interpretation of aerial photography and is organised into five class levels for urban areas and four class levels for natural areas. In total, this represents 53 LULC classes over the region. The roads in this first reference dataset are not consistent at 10 m spatial resolution; therefore, in order to maximize consistency at 10 m spatial resolution, a second vector database (BDTOPO©IGN) was used for the extraction of main roads (primary and secondary roads).
The reference data in vector format were pre-processed in 5 steps to provide a final result with 14 land use classes, 5 classes for urban areas and 9 classes for natural areas, for the entire Grand Est region at 10 meters spatial resolution. Once the reference data had been pre-processed, the satellite data and the rasterized reference data were sliced into patches to form multi-temporal optical/radar patch pairs. The ten Sentinel-2 bands and the two Sentinel-1 bands (VV and VH) are each stacked to form an optical patch and a radar patch of the same footprint. The last step removes the overlapping patches between tiles to ensure the spatial independence of each patch, so that users can separate their training, validation and test datasets for their machine learning models.
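A minimal numpy sketch of this slicing step (not the production chain) is given below; it assumes the bands have already been stacked and the reference data rasterised onto the same grid.

import numpy as np

def slice_patches(stack: np.ndarray, reference: np.ndarray, size: int = 256):
    """stack: (bands, H, W) band-stacked raster; reference: (H, W) rasterised LULC labels."""
    _, h, w = stack.shape
    patches = []
    for row in range(0, h - size + 1, size):        # non-overlapping tiling
        for col in range(0, w - size + 1, size):
            img = stack[:, row:row + size, col:col + size]
            lab = reference[row:row + size, col:col + size]
            patches.append((img, lab))
    return patches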
In addition to the Sentinel-1/Sentinel-2 patch pairs and the reference data patches, a JSON file is produced for each patch and contains several pieces of information, such as the projection (WKT), the land use/land cover (LULC) classes present in the patch for scene classification, and the names of the corresponding Sentinel-1 and Sentinel-2 patches.
Several Convolutional Neural Network (CNN) architectures are currently being trained to evaluate and test the Sentinel-GE dataset for semantic segmentation and scene classification. Furthermore, other semantic segmentation networks will be applied to evaluate the contribution of multi-temporal and multi-source imagery in two thematic applications: (1) urban areas, where the multi-temporal dimension is less significant but the combination of the two data sources is important and improves the classification results, and (2) grasslands, where the temporal dimension is at first sight more significant. This research is part of a PhD thesis supported by the French funded project ANR TIMES ‘High-performance processing techniques for mapping and monitoring environmental changes from massive, heterogeneous and high frequency data times series’ (ANR-17-CE23-0015) and by the French TOSCA project AIM-CEE (CNES, 2019-2022).
In the short term, the dataset will be freely accessible and downloadable by the research community through a single link on our dissemination platform (A2S - https://a2s-earthobservation.eu) and the urban Scientific Expertise Centre of THEIA.
ExtremeEarth is a European H2020 project; it aims at developing analytics techniques and technologies that combine Copernicus satellite data with information and knowledge extraction, and exploiting them on ESA’s Food Security and Polar Thematic Exploitation Platforms. The current publication focuses on the Polar case for which a large training dataset has been generated and demonstrates the use of different machine learning/deep learning techniques (e.g., Compression-based pattern recognition, Cascaded learning for semantic labelling, Explainable AI for SAR sea-ice content discovery, Physics-aware deep hybrid architecture).
The solution proposed in the project is an active learning approach that represents a simple way to generate semantically annotated datasets from given Sentinel-1/Sentinel-2 images. Active Learning is a form of supervised machine learning in which the learning algorithm is able to interactively query some (human) information source to obtain the desired image classification outputs at new data points. The key idea behind active learning is that a machine learning algorithm (e.g., a Support Vector Machine) can achieve greater accuracy with fewer training labels if it is allowed to choose the data from which it learns. In this case, relevance feedback is included; this supports users in searching for additional images of interest in a large repository. Further, new image content classes that do not yet exist can be defined by expert users based on their specific knowledge; however, different users can give different meanings to the new classes. Consequently, the number of semantic classes that can be extracted with the proposed active learning method is not fixed (as it is for many current state-of-the-art classification methods); instead, the classes are defined interactively by the users. In our case, we apply our active learning scheme to sequences of pre-selected satellite images. The aim of this active learning approach is to obtain high classification accuracies (between 85% and 90%, depending on the given class) with very few training examples.
The landscape of the Arctic is rapidly changing, posing a potential disruption to current maritime markets. Since the late twentieth century, the Arctic has been warming at twice the rate of the global average, which has caused a rapid melting of the Arctic sea ice and made Arctic marine routes that historically were covered by sea ice navigable for part of the year. These routes could trigger a new paradigm in the global shipping industry that sees traditional routes via the major canals (e.g. the Suez Canal, the Panama Canal) replaced with shorter routes through Arctic waters. The Northwest Passage is the most direct shipping route between the Atlantic and Pacific Oceans, and in 2017 a Russian tanker sailed through the Arctic Ocean for the first time without the assistance of icebreakers. The increased activity in the Arctic calls for improved monitoring of these waters for both ships and icebergs in order to:
- Decrease illegal and unregulated fishing
- Increase maritime safety
- Defend vulnerable marine habitats
- Empower sovereignty of arctic nations
Larger ships must identify themselves through ship transponder systems such as the Automatic Identification System (AIS). However, ship transmissions can experience a multitude of errors. In high-traffic areas, signals are frequently lost in data collisions. At higher latitudes, where AIS receivers are especially sparse, the messages experience temporal gaps and can be days old. The signals may also be turned off, by accident or deliberately. Dark ships are non-cooperative vessels that do not transmit AIS signals. These ships pose a risk for marine traffic safety and may be involved in criminal activities such as piracy, smuggling, Illegal, Unreported and Unregulated (IUU) fishing, oil spills, trespassing, etc. Recently, nearly 100 warships were found to have faked their own AIS signals by spoofing.
Satellite imagery allows ships at sea to be detected as a complement to AIS signal transmission. The Sentinel-1 satellites provide freely available Synthetic Aperture Radar (SAR) imagery with resolutions down to 22 m in one of two dual polarizations, HH+HV or VV+VH. Their revisit time in the Arctic is almost daily, and SAR can see through clouds, day and night. Arctic waters contain not only ships but also abundant icebergs. Correct discrimination is vital for identifying ships, including dark ships, in the Arctic while reducing false alarms from icebergs. However, the Arctic regions contain few ships and are predominantly recorded in the HH+HV polarizations, while areas with high marine traffic are recorded in the VV+VH polarizations. These polarizations are fundamentally different, and as such VV+VH ship datasets cannot be used to discriminate ships from icebergs.
We apply a ship detection algorithm based on the continuous wavelet transform to 200 HH+HV polarized Sentinel-1 SAR images. AIS signals were interpolated to the time of the image acquisition, providing true positive labels for the detected ships. The detection algorithm was then tuned to increase the number of true positive ships detected. Applying the tuned detector to SAR images of areas with high marine traffic and of Arctic areas with many icebergs allowed us to create a large database of ship and iceberg images. The database contains more than twenty thousand images of ships and icebergs made explicitly for machine learning. It is not only large in terms of the number of ships but also the first of its kind to include icebergs.
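As an illustration of the AIS matching step (not the operational implementation), the sketch below linearly interpolates a single vessel track to the SAR acquisition time; the column names are assumptions, and real processing additionally needs track splitting, outlier filtering and geodetic interpolation.

import numpy as np
import pandas as pd

def ais_position_at(track: pd.DataFrame, image_time: pd.Timestamp):
    """track: columns 'time', 'lat', 'lon' for a single vessel (one MMSI)."""
    t = track.sort_values("time")
    seconds = (t["time"] - image_time).dt.total_seconds().to_numpy()
    lat = np.interp(0.0, seconds, t["lat"].to_numpy())   # position at the image time
    lon = np.interp(0.0, seconds, t["lon"].to_numpy())
    return lat, lon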
The database of ships and icebergs was used to train several deep neural network models to solve the binary classification problem of discriminating between images of ships and icebergs. The best performing model achieved a validation accuracy above 90% and was used to predict on a test dataset constructed from the true positive ships sailing in Arctic, iceberg-infested waters. The resulting test accuracy of ~78% was slightly lower than the validation accuracy achieved when training the network. The drop in accuracy could be attributed to images containing both ships and icebergs, providing a basis for going beyond binary classification of ships and icebergs.
Field boundaries are at the core of many agricultural applications and are a key enabler for the operational monitoring of agricultural production in support of food security. Recent scientific progress on deep learning methods has highlighted their capacity to extract field boundaries from satellite and aerial images, with a clear improvement over object-based image analysis (e.g. multiresolution segmentation) and conventional filters (e.g. Sobel filters). However, these methods need labels to be trained, and so far no benchmark dataset exists to easily perform this comparison. The absence of such benchmark data further impedes proper comparison with existing methods. Besides, there is no consensus on which evaluation metrics should be reported (both at the pixel and field levels). As a result, it is currently impossible to compare and benchmark new and existing methods.
To fill these gaps, we propose AI4Boundaries, an AI-ready dataset (i.e. labels and images) for field boundary detection to facilitate model development and comparison, with three specific datasets:
1) a 10-m single-date Sentinel-2 composite for large-scale, near real-time application such as crop mapping,
2) 10-m Sentinel-2 monthly composites for large-scale retrospective analyses, and
3) a 1-m orthophoto dataset for regional-scale analyses such as the automatic extraction of Geospatial Aid Application (GSAA).
All labels have been sourced from GSAA data that have been made openly available.
Public parcel delineation data were first obtained for several countries/regions across Europe: Austria, Catalonia, France, Luxembourg, the Netherlands, Slovenia, and Sweden, for the year 2019. After drawing a regular grid of 4 by 4 km cells in the ETRS89-extended LAEA Europe projection, a stratified random sample was drawn based on the perimeter/area ratio and the area covered by parcels, thus taking into account the diversity of the agricultural landscapes. The resulting AI4Boundaries dataset consists of 7,831 samples of 256 by 256 pixels for the three specific datasets, along with the corresponding ground-truth parcel delineation.
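A hedged sketch of this stratification step is shown below: cells are binned by parcel area fraction and perimeter/area ratio and a fixed number of cells is drawn per stratum; the column names, quartile bins and sample sizes are illustrative assumptions, not the AI4Boundaries configuration.

import pandas as pd

def stratified_sample(cells: pd.DataFrame, n_per_stratum: int = 10,
                      seed: int = 42) -> pd.DataFrame:
    """cells: one row per 4x4 km grid cell, with 'parcel_area_frac' and 'pa_ratio' columns."""
    cells = cells.copy()
    cells["area_bin"] = pd.qcut(cells["parcel_area_frac"], 4, duplicates="drop")
    cells["shape_bin"] = pd.qcut(cells["pa_ratio"], 4, duplicates="drop")
    # Assumes every stratum contains at least n_per_stratum cells.
    return (cells.groupby(["area_bin", "shape_bin"], observed=True)
                 .sample(n=n_per_stratum, random_state=seed))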
Besides providing this open dataset to foster computer vision developments for parcel delineation methods, we discuss the perspectives and limitations of the dataset for various types of applications in the agricultural domain.
LabelCooker: an efficient and scalable approach to AI training labels generation from geospatial vector data
Over the last decade, the amount of available Earth observation data has grown massively. We are experiencing an abundance of satellite imagery resources that can unlock valuable insights into our planet. In order to take advantage of this flow of Earth observation data, many remote sensing applications use advanced AI techniques. However, this promising technology comes at a cost. Training reliable AI models often requires massive computing power and a large collection of high-quality labels. For many supervised learning applications, the satellite imagery needs to be paired with geospatial labels: bounding boxes for object detection, raster masks for image segmentation and sometimes both for semantic segmentation. Using open vector databases like OpenStreetMap or IGN BD Topo can be a good solution for satellite imagery labeling. But even with these databases, generating high-quality labels can still be a challenge for researchers and data scientists. For a given area of interest, different label sources can co-exist, and it becomes difficult to choose the right source for an application or an image. Moreover, these databases can be outdated or incorrect. Since the images used for AI training are rarely the ones used for database creation, a perfect match between the objects visible in the image and the objects in the databases cannot be guaranteed. A destroyed building can be absent from the satellite image and still be indexed on OpenStreetMap.
To tackle this problem and many more, we introduce LabelCooker, a cloud-based approach to geospatial training label generation that gives the ability to select the best labels from different data sources effortlessly and at scale. The creation of this software has been motivated by the AI4GEO project, which aims at producing large-scale, very-high-resolution land cover maps and 3D reconstruction models over urban areas. Deep learning models have been extensively used by the AI4GEO team to extract semantic information from remote sensing data. Therefore, there was a critical need to gather the existing massive ground truth datasets, denoted GT in this abstract, spread over the regions of interest. In addition to providing a single endpoint to access those GT effectively, LabelCooker proposes a unifying hierarchical semantic nomenclature, useful merging strategies and geometric corrections to optimally fit the semantic information to the given user remote sensing image.
LabelCooker is a simple yet powerful tool based on PostgreSQL and Python. It is composed of an SQL database where collected vector data can be stored, harmonized and made available through several APIs. For each instance of LabelCooker, a hierarchical semantic nomenclature is established; based on this reference, different data sources can be compared, analyzed and merged.
Discrepancies in training data can degrade the performance of deep learning models. LabelCooker contains several tools for the analysis, comparison and correction of the available datasets and raster sources.
Registration functionalities help the user correct possible geometric offsets between the remote sensing image and the GT. Specifically for building GT, an iterative algorithm based on a Line Segment Detector tries to find the best geometric transformation to apply to the GT polygon. The ground truth polygon is shifted to better fit the building visible in the image. Another, more generic approach relies on foreground/background segmentation using the GrabCut algorithm. The foreground segmentations extracted from the remote sensing image are then used to correct the GT polygons by maximizing the overlap between each foreground segmentation and the corresponding vector polygon.
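The following sketch illustrates the GrabCut-based refinement idea with plain OpenCV (not the LabelCooker implementation): the algorithm is initialised with the bounding box of a GT polygon and returns a foreground mask that can then be vectorised and compared with the original polygon.

import cv2
import numpy as np

def grabcut_foreground(chip_bgr: np.ndarray, rect: tuple) -> np.ndarray:
    """chip_bgr: 8-bit BGR image chip; rect: (x, y, w, h) bounding box of the GT polygon."""
    mask = np.zeros(chip_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(chip_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Keep pixels marked as sure or probable foreground as the refined mask.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)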
Another aspect of training label generation is combining different data sources. For the areas where multiple data sources are available in the LabelCooker database, the user can choose one source over another or create a better version by using one of the merging strategies at hand. Some of these data fusion methods can even take an auxiliary raster image (NDVI, DSM, ...) to guide the merging process. The most relevant strategy depends as much on the data scientist’s prior assumptions about the quality of the different components as on the data that are available. For example, we can simply complete a reference database like IGN BD Topo with a more up-to-date collaborative data source like OpenStreetMap; this is the “completion” method. More fusion methods are available and can easily be added to the software.
In order to help the data scientist explore the quality of the data, LabelCooker incorporates statistical methods both for single-dataset exploration and for comparisons between multiple datasets. These computations are performed in the cloud and parallelised, making them scalable. Different export formats such as raster masks, Microsoft COCO and many other common vector file formats are supported.
LabelCooker is designed to be integrated in data science workflows, whether it is to evaluate predictions produced by new machine learning algorithms, or to feed AI models with high-quality data coming from merged datasets. In addition to simplifying the use of multiple data sources, the goal of LabelCooker is to promote collaborative work, bring together data science teams and push forward AI within the geospatial community.
Cloud masking is a critical step in the pre-processing of optical images, for land, water and atmosphere applications. Traditional cloud masking has been done using explicit radiometric tests exploiting physical features such as brightness, whiteness, temperature or specific absorption lines [Ackerman et al., 2010], or spatial features (in/homogeneity) [Eumetsat, 2015], and more or less sophisticated combinations of these. Machine learning (ML) has also long been used to train classifiers for predicting the presence of clouds [Camps-Valls et al., 2004]. In recent years, ML methods, in combination with the now available computational power and easy access to large EO datasets on platforms such as the DIASes, Google Earth Engine or Sentinel Hub, have improved ML-based cloud masking significantly [e.g. Zupanc, 2019].
In 2020/2021, under the umbrella of CEOS, ESA and NASA conducted the Cloud Masking Intercomparison eXercise (CMIX) [CEOS 2019]. One of the major results of this exercise is that “shortcomings and limitations in used reference datasets had been identified”, because each reference dataset only covers a certain aspect of cloud masking. In the context of CMIX, the reference datasets refer to validation datasets. However, this is equally relevant for the construction of training datasets: How is a cloud defined? How are training data collected? Which ranges of other environmental parameters are covered? Furthermore, CMIX has shown that publicly available datasets are often used for training and validation at the same time, or are designed for training and then used for validation, and vice versa.
The requirements for ML training data go even further and differ in some respects from the requirements for validation data. For example, an ML algorithm may have the requirement to retrieve different cloud classes (cirrus, optically thick clouds, optically thin clouds, ...) with the same uncertainty, which implies an equal distribution of these classes in the training dataset, while validation may require representativeness of the natural frequency distribution.
Ultimately, any EO algorithm is an inversion of the radiative transfer (RT) equation, by whatever means. In RT, the environmental conditions (such as surface albedo, surface height, cloud, and aerosol optical properties) uniquely determine the radiance field. The inversion, however, can have multiple (valid) solutions; it is ambiguous. ML techniques cannot cope with this fact without "help": they would select, more or less randomly, one of the valid solutions. Thus, it is important either to restrict the training dataset and avoid ambiguities, or to include constraints in the ML procedure that add additional knowledge to the training process.
In this presentation we will (a) elaborate on the importance of training data for the success of the ML method and (b) give examples of restrictions, ambiguities, and possible ways to overcome this issue. We will discuss examples of manually selected training data used in our PixBox methodology and conclude with a set of general requirements to be respected when collecting a training dataset for ML-based cloud masking.
Ackerman, Frey, Strabala, Liu, Gumley, Baum, and Menzel, 2010: Discriminating Clear-Sky from Cloud with MODIS - Algorithm Theoretical Basis Document.
Eumetsat, 2015: MTG-FCI: ATBD for Cloud Mask and Cloud Analysis Product. https://www.eumetsat.int/media/37764
Camps-Valls, Gómez-Chova, Calpe-Maravilla, J. D. Martín-Guerrero, E. Soria-Olivas, L. Alonso-Chordá, José Moreno, 2004: "Robust Support Vector Method for Hyperspectral Data Classification and Knowledge Discovery". IEEE Transactions on Geoscience and Remote Sensing, Vol.42, Issue 7, pp. 1530-1542.
CEOS, 2019: CEOS-WGCV ACIX II CMIX Atmospheric Correction Inter-comparison Exercise Cloud Masking Inter-comparison Exercise; https://earth.esa.int/eogateway/events/ceos-wgcv-acix-ii-cmix-atmospheric-correction-inter-comparison-exercise-cloud-masking-inter-comparison-exercise-2nd-workshop?text=cmix
Zupanc, 2019: Improving Cloud Detection with Machine Learning. https://medium.com/sentinel-hub/improving-cloud-detection-with-machine-learning-c09dc5d7cf13
Deep learning methods, specifically Convolutional Neural Networks (CNNs), provide unprecedented opportunities for remote sensing-based vegetation assessments. A series of studies has shown that CNNs accurately predict plant species and communities in high-resolution remote sensing data, in particular in data at the centimetre scale acquired with unmanned aerial vehicles (UAVs). However, such tasks require ample training data to generate transferable CNN models. Reference data are commonly generated via geocoded in-situ observations or labeling of remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. At the same time, our knowledge about the appearance of plant species is constantly growing, namely in the form of millions of freely accessible plant photographs with associated species names on the web. A prominent example in this regard is the iNaturalist database, which motivates countless citizens to record photographs of the world's flora and fauna and annotate the species. Such crowd-sourced plant photographs are expected to be very heterogeneous (e.g., in terms of quality and acquisition settings) and often show a different perspective compared to the typical bird's-eye perspective of remote sensing data. Still, crowd-sourced plant photographs could be a valuable source to overcome the challenge of limited training data and reduce the effort of field data collection and data labeling. Here, we explore the potential of this data treasure for weakly supervised learning in the remote sensing context. We investigate, firstly, whether we can use crowd-sourced photographs to train CNN models and map plant species in high-resolution remote sensing imagery. Secondly, we test whether the predictive performance of such a weakly supervised learning approach can be increased by pre-selecting photographs that share a more similar perspective with the remote sensing data. To this end, we used two case studies in which we test our proposed approach with multiple RGB orthoimages acquired from UAVs with spatial resolutions ranging from 0.3 to 119 centimetres. The first case study aims to map Fallopia japonica (F. japonica), which is known as an invasive species in Central Europe. The second case study aims to map Portulacaria afra (P. afra), which is a key species in the context of countering desertification and ecosystem restoration in Africa. For training the CNN models, we queried the iNaturalist database for photographs of the target species and the surrounding species expected in the areas of each case study. We trained CNN models with a ResNet-50 backbone. For applying these models, trained on the crowd-sourced data, to the remote sensing imagery, we used a sliding-window approach with 10% overlap. The individual sliding-window-based predictions were spatially aggregated in order to create a high-resolution classification map. We tested this approach with and without filtering photographs prior to training by their estimated acquisition properties. In view of the heterogeneity of crowd-sourced photographs, we test filtering out photographs taken from distances or with angles that may not facilitate the CNN-based detection of the target species in the remote sensing imagery. To estimate the acquisition properties, we visually labeled the acquisition distance and angle of 4500 photographs and trained CNN regression models to predict the angle and distance for all of our training photographs.
Based on these predictions, we tested filtering the training data with different thresholds for acquisition distance and angle. The results of the two case studies demonstrate that CNN models trained with crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by photograph acquisition properties increased the predictive performance. Based on our test results, we filtered the training data by distance, excluding photographs taken from less than 0.5 m as well as photographs taken from more than 50 m before training the model. In the F. japonica case study, the precision of the base models ranged from 0.75 to 0.88, recall from 0.9 to 0.92, and F1 scores from 0.81 to 0.88. In the P. afra case study, the precision of the base models ranged from 0.24 to 0.71, recall from 0.29 to 0.91, and F1 scores from 0.5 to 0.82. Additionally, lower accuracy (e.g., F1 scores below 0.5) was observed for UAV orthoimages of lower quality in terms of illumination conditions, suggesting that orthoimage quality is critical for the applicability of this approach. The higher accuracy for the F. japonica case study compared to the P. afra case study may have resulted from the fact that the respective UAV image resolution was considerably higher (average ground sampling distance of 0.3 and 0.9 cm, respectively). Another explanation is that the morphological canopy properties of F. japonica show a higher contrast to those of the surrounding species than was observed for the P. afra case study. Our study demonstrates the potential of crowd-sourced training data and CNNs for mapping plant species in high-resolution remote sensing imagery. Thereby, the remote sensing image resolution and quality as well as the contrast of the target species to the surrounding species appear to be critical factors for the success of this approach. Overall, this study demonstrates that freely available knowledge, in the form of plant photographs and species annotations in open databases, can effectively mitigate a common bottleneck for vegetation assessments. The study also provides an example of how the ever-increasing availability of crowd-sourced and big data can be harnessed effectively for remote sensing applications.
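The sliding-window aggregation described above could look like the following minimal sketch, which averages window-level species probabilities into a per-pixel map; the 10% overlap matches the abstract, while `predict_fn` is an illustrative placeholder for the trained ResNet-50 classifier:

import numpy as np

def sliding_window_map(image, window, predict_fn, overlap=0.1):
    """Aggregate window-level CNN scores into a per-pixel probability map.

    `predict_fn` is assumed to return one target-species probability for a
    window crop; names and shapes are illustrative."""
    h, w, _ = image.shape
    step = max(1, int(window * (1.0 - overlap)))   # roughly 10% overlap between windows
    score_sum = np.zeros((h, w))
    count = np.zeros((h, w))
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            p = predict_fn(image[y:y + window, x:x + window])
            score_sum[y:y + window, x:x + window] += p
            count[y:y + window, x:x + window] += 1
    return score_sum / np.maximum(count, 1)        # spatially aggregated probability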
Hyperspectral sensors are becoming central to many fields of research in Earth Observation. In particular, the launch and ongoing success of the PRISMA mission has given the remote sensing community access to imagery with a nominal resolution of 30 m/pixel, and a spectral resolution of better than 12 nm between 400--2500 nm.
Machine learning approaches are well-suited for applications involving hyperspectral imagery, because of the high dimensionality of the input space. As ever, for supervised machine learning approaches to be successful, there is an urgent need for labelled datasets with which to train and test models. Indeed, despite advances in unsupervised, self-supervised and semi-supervised learning, there is still a requirement for labelled data to validate the algorithms.
This work documents an ongoing effort to produce a large, openly available, labelled dataset with PRISMA. The dataset will be released as a collection of PRISMA scenes, with their associated labelled cloud masks. Annotations are made using Intelligently Reinforced Image Segmentation (IRIS) [1]. IRIS is an open-source, semi-automated annotation tool that we have developed specifically for segmentation of remote sensing data. The tool is configurable, allowing users to easily visualise any kind of rasterised data in a number of ways (see Figure). IRIS uses a Random Forest to quickly learn how to annotate an image from a few pixels labelled by the user. This can be iteratively improved with further annotations, until the user is satisfied with the mask. Having been previously used to create a Sentinel-2 cloud mask catalogue [2], we now apply it to PRISMA data.
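A minimal sketch of the interactive Random-Forest step described above is given below: it trains on the pixels the user has labelled so far and proposes a mask for the whole scene, which the user then corrects before the next iteration. Function and variable names are illustrative and not IRIS's internal API:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def propose_mask(bands: np.ndarray, sparse_labels: np.ndarray) -> np.ndarray:
    """One iteration of the annotate-and-refine loop.

    bands: (h, w, n_bands) raster; sparse_labels: (h, w) with 0 for
    unlabelled pixels and class ids > 0 where the user has clicked."""
    h, w, b = bands.shape
    X = bands.reshape(-1, b)
    y = sparse_labels.ravel()
    labelled = y > 0
    rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    rf.fit(X[labelled], y[labelled])          # learn from the user's sparse clicks
    return rf.predict(X).reshape(h, w)        # propose labels for every pixel

# The user corrects a few pixels of the proposal, the corrections are added
# to sparse_labels, and propose_mask is called again until the mask is accepted.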
To add flexibility for users of the dataset, seven classes are included: land, water, snow/ice, thin cloud, thick cloud, cloud shadow, and terrain shadow. The two shadow classes are effectively spectrally indistinguishable, and are therefore annotated in IRIS as one class. However, by using post-processing based on geometrical constraints and Digital Terrain Models, we can recover which shadow pixels are caused by cloud and which are caused by terrain.
Cloud and cloud shadow masking is an inherently ambiguous problem, given that clouds can be arbitrarily thin and their edges poorly defined. By employing a multi-annotator approach, we are able to provide a measure of per-pixel inter-annotator agreement, which can be used as a proxy for the label confidence at each pixel.
This dataset will be the first openly available dataset of cloud masks for PRISMA, and it opens the door for researchers to easily train, test and evaluate cloud mask models for PRISMA. In addition, given that many multispectral instruments can be simulated by combining different hyperspectral bands, this is a valuable resource for many other sensors with bands in the visible, near infrared, and shortwave infrared.
In the future, this dataset can be further extended with more annotations, adding both more images and more labels per image. The methods used are problem-agnostic, and the same approach can therefore easily be repurposed for other segmentation tasks on PRISMA, or on any raster input data. We believe this model of dataset creation could provide a useful template for other researchers, and we invite collaboration to expand this dataset, to create others, and to extend IRIS's capabilities.
[1] https://github.com/ESA-PhiLab/iris
[2] Francis, Alistair, Mrziglod, John, Sidiropoulos, Panagiotis, & Muller, Jan-Peter. (2020). Sentinel-2 Cloud Mask Catalogue [Data set]. Zenodo. https://doi.org/10.5281/zenodo.4172871
Very large, high-dimensional data are common in machine learning applications in EO, and they impose new challenges on data-driven and data-intensive algorithms.
This work presents a systematic and comprehensive approach to optimally handling classification and regression tasks with very large, high-dimensional data. The proposed approach is based on smart sampling techniques that minimize the number of samples to be generated: an iterative procedure creates new sample sets until the input and output spaces of the function to be approximated are optimally covered.
The smart sampling techniques can be easily used in practical applications and scale well to extremely large data. The technique is already being used successfully in several operational and research retrieval algorithms for the Sentinel missions.
AI-based image processing systems such as Artificial Neural Networks (ANNs) have recently proved their value in the field of computer vision and, in most domains, outperformed traditional systems. In particular, Convolutional Neural Networks (CNNs) have become a ubiquitous tool in today's computer vision landscape. Their ability to learn complex and unknown mapping functions between raw input data and output values makes them powerful tools for problems where hand-engineered features and descriptors, and their relation to the task, are not obvious. Though historically developed mostly for consumer applications on Earth, CNNs are now commonly used for Earth observation, but the prohibitive cost of acquiring enough training data prevents their full potential from being harnessed effectively. Thus, being able to efficiently produce large amounts of annotated data has become a significant challenge in the community.
In this work, we present a semi-automated labelling method. It relies on a model continuously retrained on a given context through a closed loop of inferences and human corrections. Although prior efforts were already dedicated to annotation systems interweaving intermediate segmentation models and manual steps to ease the burden of drawing complex polygons with near pixel-perfect precision, few explicitly studied whether annotator/model feedback patterns could improve both the quality and the efficiency of the labelling process. By leveraging a complete inference framework, including domain-driven post-processing and vectorization, in a web application, the annotation system is able to provide relevant label suggestions, which are in turn corrected by a human oracle and sent back to the segmentation model for fine-tuning. After a few iterations, the model is effectively specialized and can be used to generate labels on the entire tile. Our assumption is that the resulting temporary model is overfitted but provides very accurate labels on a small portion of the image. This task is performed on a set of small tiles of the image in order to obtain a training dataset that is then used to train the production model. The production model is then used to infer predictions on the whole image, with very good results.
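The closed loop of inferences and corrections described above can be sketched, in a deliberately schematic way, as the function below; `infer`, `correct` and `fine_tune` are hypothetical placeholders standing for the segmentation model's prediction step, the manual correction in the web application, and the model update, respectively:

def annotate_tile(tile, model, infer, correct, fine_tune, n_rounds=3):
    """Human-in-the-loop labelling sketch for one small tile.

    infer(model, tile)          -> suggested labels for the tile
    correct(suggestion)         -> labels fixed by the human oracle
    fine_tune(model, tile, lbl) -> updated (deliberately overfitted) model"""
    labels = None
    for _ in range(n_rounds):
        suggestion = infer(model, tile)          # model proposes labels
        labels = correct(suggestion)             # human oracle fixes the errors
        model = fine_tune(model, tile, labels)   # model specialises on this tile
    return model, labels

# Repeating annotate_tile over a set of small tiles yields the training
# dataset from which the production model is then trained.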
Our proposed system is evaluated in the context of building LoD0 segmentation and LandUse maps on Pleiades imagery. In particular, we assess the ability to efficiently generate precise training datasets compared with fully manual labelling, as well as the quality of the final products (LoD0 and LandUse map) inferred by models trained on both datasets (semi-automatic and manual). Current results and possible paths of improvement are also presented to inspire future work on the subject.
Wildfires are causing more and more damage across the globe. As deep learning advances in remote sensing, large-scale satellite image datasets are of critical importance to promote the development of wildfire monitoring methodologies and applications. Both SAR and optical sensors have unique advantages in monitoring wildfires: SAR signals can penetrate clouds and smoke, while optical data provide rich spectral information on burned areas. Therefore, the objective of this study is to establish a multi-source Sentinel-1/2 wildfire dataset that includes Sentinel-1 and Sentinel-2 pre/post-fire images and to conduct a preliminary evaluation for wildfire detection using deep learning.
Google Earth Engine [1] was used as the main source of satellite image data, since it provides ready-to-use Sentinel-1 GRD and Sentinel-2 TOA data, while the 2017-2019 Canada wildfire perimeter polygons were rasterized as ground truth masks. Specifically, for a given wildfire event, we obtained the fire period information, such as start date and end date, from the official wildfire database and derived the region of interest (ROI). For each Sentinel-1 orbit, the images acquired in the same period of the year before the fire were averaged as the pre-fire image, while the average of all images acquired within two months after the end date was taken as the post-fire image. As for Sentinel-2, in order to obtain cloud-free images, we first estimated the cloud rate of all tiles intersecting the ROI and retained the tiles with an ROI cloud rate of less than 10%. The median image of the retained tiles acquired in the same period of the year before the fire year was then taken as the pre-fire image, and the median of those acquired within two months after the fire ended as the post-fire image. Additionally, a visual inspection was conducted to exclude poor samples by converting pre/post images of both Sentinel-1 and Sentinel-2 into visually perceivable PNG format. The 2017 and 2018 wildfire data were used as training/validation sets, while the 2019 wildfires were used as the testing set.
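The Sentinel-2 compositing step could be sketched with the Earth Engine Python API roughly as below; the ROI and dates are placeholders rather than a real fire record, and the per-tile cloud-rate filtering described above is omitted for brevity:

import ee
ee.Initialize()

# Illustrative fire event: ROI and dates are hypothetical placeholders
roi = ee.Geometry.Rectangle([-121.0, 52.0, -120.0, 53.0])
s2 = ee.ImageCollection('COPERNICUS/S2').filterBounds(roi)   # Level-1C TOA collection

# Pre-fire: median composite over the same period one year before the fire
pre_fire = s2.filterDate('2017-07-01', '2017-08-15').median().clip(roi)

# Post-fire: median composite within two months after the fire end date
post_fire = s2.filterDate('2018-08-15', '2018-10-15').median().clip(roi)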
After creating this multi-source dataset, we conducted a preliminary evaluation and comparison of several classical deep learning-based change detection architectures, such as vanilla UNet with early fusion (UNet-EF), Siamese UNet with feature concatenation (SiamUNet-conc), Siamese UNet with differenced features (SiamUNet-diff), and UNet with a ResNet-18 encoder (Res18-UNet-EF) [2]. No weight sharing was applied to the two Siamese architectures. Based on pre/post Sentinel-2 optical images, all four architectures achieve an IoU score of about 93%, with no significant difference between them. As for Sentinel-1, Res18-UNet-EF achieved the highest IoU score (88.8%) while SiamUNet-diff obtained the lowest (83.4%); UNet-EF and SiamUNet-conc reached 86.4% and 85.9%, respectively. There is still a significant gap between the Sentinel-1 and Sentinel-2 based results, and further investigation will be conducted to reduce it.
In the field of Earth Observation (EO), Change Detection (CD) is receiving more and more interest within the scientific community. CD allows the assessment of changes that have occurred at ground level and, for this reason, it is applied to different real-world tasks, such as identification of urban changes and soil consumption, natural and anthropogenic landscape morphological evolution, agriculture and forestry management, natural disaster management, and so on [1].
Due to the rapid technological development of EO sensors, it is possible to obtain ever more detail at ground level and, consequently, to retrieve increasingly precise change maps, useful for the aforementioned tasks.
In this context, Deep Learning (DL) has in the last few years been progressively considered as a replacement for traditional methods for semantic segmentation, object detection and change detection. Thus, the development of DL architectures able to automatically assess land use changes from images of different time frames has assumed a fundamental role. Several scientific studies have led to the development of algorithms capable of responding to these needs; however, these algorithms are focused on the generation of two-dimensional change maps and are therefore only able to assess changes in land cover or land use. Various DL architectures have been used for this purpose, such as CDNet [2] and Siamese networks [3].
This is due to the fact that the creation of CD maps in two dimensions (i.e., 2D CD) has the advantage of not needing any information about the elevation of the area of interest. In fact, even if LiDAR data can be used as a basis for the elevation labels of the dataset, such data are not always freely available.
Our work proposes a further step forward, creating and sharing a dataset in which the elevation of the changes is also provided, thereby basing the CD no longer only on changes in land use and land cover, but also on the identification of areas in which a change in elevation has occurred. The intention is to develop an algorithm capable of generating three-dimensional CD maps using a pair of aerial optical images as input.
To achieve these goals, a dedicated dataset containing 493 pairs of optical images of the urban area of Valladolid, Spain, with the corresponding 2D and 3D CD maps, was created (using the products freely available at [4]) with the final aim of making it available for supervised DL training. The study area includes the historical and urban centre of the city and the commercial areas in the surroundings, excluding agricultural areas where there are no significant elevation changes. To build the dataset, we started from 493 pairs of 800x800 aerial orthophotos with a Ground Sample Distance (GSD) of 0.25 m, covering the same area and collected in two different years (2010 and 2017), and the respective DSMs of 200x200 pixels with a GSD of 1 m, created from LAS files obtained from LiDAR flights in 2010 and 2017. For each tile, two CD maps have been produced, starting from the difference between the two DSMs. The first is a two-dimensional map showing the areas where a change in elevation has occurred.
In particular, four classes have been identified: construction of buildings; demolition of buildings; increase in ground height; decrease in ground height. These 2D CD maps have a resolution of 800x800 pixels, with a GSD of 0.25 m. The second CD map is three-dimensional, with a resolution of 200x200 pixels, in which each pixel contains the elevation difference registered between 2010 and 2017. After the first automatic phase of tile cropping and generation of the elevation difference map, a manual step was performed on each tile: only areas showing a real change in elevation, verified by comparison with the orthophotos, were retained, while the elevation values of the pixels where no changes were found were set to zero.
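The automatic part of this labelling could be sketched as a simple DSM-differencing step, shown below under illustrative assumptions (a hypothetical 2 m change threshold and an externally provided building mask to separate building from ground classes); in the actual dataset the class assignment is verified manually against the orthophotos:

import numpy as np

def change_maps(dsm_2010, dsm_2017, building_mask, thr=2.0):
    """Illustrative 2D/3D change maps from a DSM difference (1 m GSD).

    building_mask marks pixels belonging to buildings in either epoch;
    thr is a hypothetical minimum elevation change in metres."""
    dz = dsm_2017 - dsm_2010
    cls = np.zeros(dz.shape, dtype=np.uint8)          # 0 = no change
    cls[(dz >  thr) &  building_mask] = 1             # construction of buildings
    cls[(dz < -thr) &  building_mask] = 2             # demolition of buildings
    cls[(dz >  thr) & ~building_mask] = 3             # increase in ground height
    cls[(dz < -thr) & ~building_mask] = 4             # decrease in ground height
    dz = np.where(cls == 0, 0.0, dz)                  # 3D map: zero where no change
    return cls, dz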
Two kinds of algorithms can be applied to this dataset. On the one hand, already developed 2D CD algorithms, such as [2-3], have been used to provide baseline benchmark metrics with respect to other methods and datasets in the literature. Furthermore, a 3D CD algorithm is currently being developed. This will make it possible to obtain three-dimensional CD maps automatically from two optical images, without relying on elevation data. It opens up the possibility of using CD applications in fields that have not been approached so far, such as the monitoring of urban development, glacier melting, or other natural phenomena.
For these reasons, our work can be part of an important innovation in the field of Artificial Intelligence (AI) applied to Remote Sensing (RS) data. It is intended to release the free and open dataset as soon as possible.
[1] Huang, X.; Zhang, L.P.; Zhu, T.T. Building Change Detection From Multitemporal High-Resolution Remotely Sensed Images Based on a Morphological Building Index. IEEE JSTARS 2014, 7, 105-115.
[2] Alcantarilla, P.F., Stent, S., Ros, G. et al. Street-view change detection with deconvolutional networks. Auton Robot 42, 1301–1322 (2018). https://doi.org/10.1007/s10514-018-9734-5
[3] R. Caye Daudt, B. Le Saux and A. Boulch, "Fully Convolutional Siamese Networks for Change Detection," 2018 25th IEEE International Conference on Image Processing (ICIP), 2018, pp. 4063-4067, doi: 10.1109/ICIP.2018.8451652
[4] Organismo Autónomo Centro Nacional de Información Geográfica (CNIG). Digital Elevation Models and Maps in image format. Available online at: http://centrodedescargas.cnig.es/CentroDescargas/buscadorCatalogo.do?codFamilia=LIDAR#
Reliable training data is a fundamental requirement for artificial intelligence (AI). Up to now, the data preparation for AI approaches has required extensive human effort. In the TreeSatAI project, we are exploring numerous alternative sources for training data which could serve as reliable input data for AI in forest applications. The mining of high-quality samples, rather than collecting high numbers of lower-quality training data, is a central bottleneck for AI systems, especially in the field of satellite-based Earth observation (EO).
Currently, the acquisition of high-quality training data is a largely unsolved problem for AI applications, often limiting their applicability to larger spatial domains. Growing volumes of free remote sensing data from different space missions, as well as open-source environmental geodata, are becoming available. The boom of social media platforms such as Flickr and OpenStreetMap opens up new possibilities to attain textual and visual information on the environment. Mobile apps for image recognition of plant species, such as pl@ntNet and Flora Incognita, have attracted significant attention as providers of potential geo-tagged training samples. Nonetheless, ground truth data from forest inventories by public authorities and long-term monitoring projects such as LUCAS and BExIS are most likely still the gold standard for training data sources in EO forest applications.
The overall aim of the TreeSatAI project is the development of AI methods for the monitoring of forests and woody features on a local, regional, and European scale. Based on freely available geodata from different sources (e.g., remote sensing, administration, and social media), prototypes are being developed for the deep learning-based extraction and classification of tree species and tree stand features. These prototypes deal with real cases from the monitoring of managed forests, nature conservation and infrastructures. The development is conducted by three enterprises and three research institutes. We will present the first results with different training data sets, AI approaches and scales.
The German Research Center for Artificial Intelligence (DFKI) investigated a multi-temporal deep learning approach to map forests with annotations from the Copernicus Land Monitoring Service and Landsat satellite imagery in the Analysis Ready Data (ARD) format provided by the Global Land Analysis and Discovery team (GLAD). Vision Impulse developed deep learning networks to monitor forests on different spatial scales. They present a forest monitoring service based on very-high-resolution and multi-spectral drone imagery, which allows the status of forests to be quantified by extracting tree species, tree diameters, tree heights, and the vitality of individual trees. Together with Technische Universität Berlin, the start-up also created a dataset based on Sentinel-1 and Sentinel-2 images and forest inventory data from the German federal state of Lower Saxony. These approaches allow private forest owners and public authorities to benefit from improved identification of individual trees and extraction of tree health parameters on multiple scales. LUP Potsdam will show the value of AI methods for the identification of multiple tree species classes on high-resolution aerial imagery, which is available to most German federal state forest and nature conservation agencies. By using highly precise ground truth observations from their own field campaigns in the Free State of Saxony, they investigate novel methods for training data derivation from the multispectral aerial imagery. The developed deep learning prototypes are applied to different spatial domains and also compared with results from standard machine learning models such as Random Forest and Support Vector Machine. LiveEO works on the improvement of their existing deep learning architectures and training sampling strategies for tree and tree species detection by using additional datasets such as Baumcloud and federal forest data of Brandenburg and Mecklenburg-Western Pomerania. The developed models are then applied to different regions of interest in the field of infrastructure monitoring.
By providing methods for coping with noisy and non-comprehensive training data in the forest sector and developing multi-source/multi-sensor feature extraction methods, the Remote Sensing Image Analysis Group (RSiM) of the Technische Universität Berlin supports the companies in their fields of research. The Geoinformation in Environmental Planning Lab investigated important spectral and temporal features of the Sentinel missions for forest stand types in Northern Germany, as well as training data quality aspects, to explain strengths and weaknesses in tree species mapping with EO data.
A special focus in the presentation will be on the usability of forest inventories as input data for deep learning. By extracting circa 40,000 image samples distributed among 17 forest stand types in the federal state of Lower Saxony, a novel benchmark archive of Sentinel-1 and Sentinel-2 time series and 4-band, 20-cm aerial imagery was created. This joint benchmark dataset and the developed prototypes clearly demonstrate, on many scales, that the multi-temporal, multi-spectral, and multi-sensor components of EO data largely improve pre-existing deep learning feature extraction methods for tree species classification and forest monitoring.
The importance of soil spectroscopy is reflected in the increasing number of extensive soil spectral libraries (SSLs) generated worldwide. SSLs are important big-data archives used with machine learning algorithms to estimate soil attributes. They are utilized mainly to perform proxy estimations of soil properties by modeling the interaction between soil chromophores (i.e., minerals, soil organic matter, water) and their spectral response. The proximal models based on SSLs can be applied to remote sensing (RS) data at high and medium spectral resolutions. Nevertheless, as different spectral measurement protocols are applied when constructing SSLs, it is necessary to examine harmonization techniques to merge data between different SSLs. As SSLs vary due to systematic and non-systematic measurement effects, recent studies suggested using an internal soil standard (ISS) to align SSLs from different origins. The ISS method was found to be applicable in several studies and enabled the rectification of laboratory measurements to a common reference ("mother") spectrometer. An ISS sample from South West Australia (LB-Lucky Bay) was found to work well as an ISS sample and has been distributed among many laboratories. Unfortunately, existing SSLs containing thousands of samples have already been measured without the ISS strategy and thus cannot enter the SSL rectification process. In this work, we postulate that a spectral transfer function (TF) can be developed between existing SSLs if a small representative subset is re-measured with and without the ISS protocol. Under the WORLDSOILS project*, the effectiveness of the proposed method was demonstrated by merging two existing large SSLs: LUCAS (19036 samples) and Brazil (29363 samples). To verify whether the TF improves the spectral assessment of soil attributes after harmonizing different protocols, spectral-based models were developed for estimating soil organic carbon (SOC). The results showed a high spectral similarity between the ISS and the ISS-TF spectral observations. Furthermore, after merging the SSLs harmonized through the ISS-TFs, the spectral-based assessment of SOC improved considerably.
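One very simple form such a transfer function could take is a per-band linear correction fitted on the re-measured subset; the sketch below uses a plain least-squares fit per wavelength and is an illustrative assumption, not the exact TF formulation used in the study:

import numpy as np

def fit_transfer_function(subset_raw, subset_iss):
    """Fit per-band gain/offset from a small subset measured both without
    (subset_raw) and with (subset_iss) the ISS protocol.
    Arrays are (n_samples, n_bands)."""
    n_bands = subset_raw.shape[1]
    gain = np.empty(n_bands)
    offset = np.empty(n_bands)
    for b in range(n_bands):
        gain[b], offset[b] = np.polyfit(subset_raw[:, b], subset_iss[:, b], 1)
    return gain, offset

def apply_transfer_function(library, gain, offset):
    """Rectify a whole legacy SSL (n_samples, n_bands) towards the ISS-aligned reference."""
    return library * gain + offset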
Eddies are circular rotating water masses, usually generated near the large ocean currents, e.g., the Gulf Stream. Monitoring eddies and gaining knowledge of eddy statistics over large regions are important for fisheries, marine biology studies, and the testing of ocean models.
At mesoscale, eddies are observed in radar altimetry, and methods have been developed to identify, track and classify them in gridded maps of sea surface height derived from multi-mission datasets. However, this procedure has drawbacks, since much information is lost in the gridded maps: the spatial and temporal resolution of the original altimetry data inevitably degrades during the gridding process. Moreover, identifying eddies has so far been a post-analysis process on the gridded dataset, which is not suitable for near-real-time applications or forecasts. In the EDDY project at the University of Bonn, we aim to develop methods for identifying eddies directly from along-track altimetry data via machine (deep) learning approaches.
At the early stage of the project, we started with gridded altimetry maps to set up and test the machine learning algorithm. The gridded datasets are not limited to multi-mission gridded maps from AVISO, but also include a high-resolution (~6 km) ocean modeling simulation dataset (e.g., FESOM, the Finite Element Sea ice Ocean Model). Later, the gridded maps are sampled along the real altimetry ground tracks to obtain single-track altimetry data. Reference data, serving as the training set for machine learning, will be produced with an open-source geometry-based approach (e.g., py-eddy-tracker, Mason et al., 2014) with additional constraints such as the Okubo-Weiss parameter and Sea Surface Temperature (SST) profile signatures.
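For reference, the Okubo-Weiss constraint can be computed from a gridded SSH field as in the sketch below (constant Coriolis parameter and grid spacing are simplifying assumptions for illustration):

import numpy as np

def okubo_weiss(ssh, dx, dy, f=1e-4, g=9.81):
    """Okubo-Weiss parameter W = s_n^2 + s_s^2 - omega^2 from gridded SSH (m).

    dx, dy are grid spacings in metres; f and g are the Coriolis and gravity
    parameters (illustrative constant-f approximation). W < 0 marks
    rotation-dominated regions, i.e. candidate eddy cores."""
    # Geostrophic velocities from SSH gradients
    deta_dy, deta_dx = np.gradient(ssh, dy, dx)
    u = -(g / f) * deta_dy
    v = (g / f) * deta_dx
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_n = du_dx - dv_dy        # normal strain
    s_s = dv_dx + du_dy        # shear strain
    omega = dv_dx - du_dy      # relative vorticity
    return s_n**2 + s_s**2 - omega**2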
In this presentation, we introduce the EDDY project and show results from the machine learning approach based on gridded datasets for the Gulf Stream area for 2017, as well as first results of single-track eddy identification in the region.
Soil moisture exerts an important control on water and energy exchange between the land and the atmosphere. Reliable large-scale soil moisture information can inform water resources management, agricultural practices, and drought and flood forecasting. Studies have shown that assimilation of remotely sensed soil moisture into land surface models (LSMs) can improve soil moisture estimation; however, evapotranspiration (ET) estimates are hardly improved by soil moisture assimilation. Most LSMs have an over-simplified representation of groundwater dynamics. In this study, we assimilated soil moisture data from SMAP into the standalone CLM model and the coupled land surface-subsurface model CLM-ParFlow, both components of the Terrestrial Systems Modeling Platform (TSMP), and quantified the added value of assimilating remotely sensed soil moisture data for predicting soil moisture content and ET. CLM-ParFlow uses the Richards equation to simulate variably saturated three-dimensional flow in the subsurface and a two-dimensional kinematic wave approximation for overland flow and river routing. We compared the performance of the two models, which gives an indication of the importance of including groundwater dynamics when predicting soil moisture content and ET. SMAP soil moisture data are assimilated with the Ensemble Kalman Filter (EnKF) on a daily basis, in some simulation scenarios updating both soil moisture and hydraulic parameters. The experiment is conducted for a region (150 km x 150 km) in Western Germany, with a horizontal grid resolution of 500 m, for the period from March 2018 to November 2018. This provides an opportunity to compare simulated soil moisture contents with in-situ measurements from CRNS (Cosmic Ray Neutron Sensors), and simulated ET with ET observed at EC (Eddy Covariance) stations. It is found that there is no systematic bias between soil moisture simulated by CLM-ParFlow and soil moisture derived from SMAP observations. DA was able to further improve the characterization of soil moisture contents and improved ET estimation under drought conditions.
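For readers unfamiliar with the EnKF analysis step, a generic perturbed-observation update is sketched below; this is standard textbook form, not the exact TSMP implementation, and shapes/names are illustrative:

import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H):
    """Stochastic EnKF analysis for one assimilation time (e.g. daily SMAP).

    ensemble: (n_members, n_state); obs, obs_err_var: (n_obs,);
    H: (n_obs, n_state) observation operator."""
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)                    # state anomalies
    Y = X @ H.T                                             # anomalies in observation space
    P_yy = Y.T @ Y / (n_members - 1) + np.diag(obs_err_var) # innovation covariance
    P_xy = X.T @ Y / (n_members - 1)                        # state-observation covariance
    K = P_xy @ np.linalg.inv(P_yy)                          # Kalman gain
    rng = np.random.default_rng(0)
    analysis = np.empty_like(ensemble)
    for i, member in enumerate(ensemble):
        perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var))
        analysis[i] = member + K @ (perturbed_obs - H @ member)
    return analysis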
Successful weather forecasts start from accurate estimates of the current state of the Earth system. Such estimates are obtained by combining model information with Earth-system observations via data assimilation. Cloud‐affected satellite radiance observations have been at the forefront of recent advances in data assimilation at ECMWF, particularly in the microwave where they provide information at a resolution of tens of kilometres, mainly on the distribution of rain and liquid cloud.
Cloudy radiances in the visible contain a wealth of information on clouds, but have never been assimilated in global numerical weather prediction models. They are available at much higher horizontal resolution than microwave imagery, and are sensitive to the full depth of clouds in the atmosphere. While all-sky radiances at microwave frequencies are routinely included in the operational analysis, assimilating radiances in the visible part of the spectrum continues to pose many challenges. The reason lies in the fact that solar radiation originates from a single point in the sky, rather than from diffuse emission as in the case of microwave and IR radiances. This makes visible radiances much more sensitive to the details of the scattering phase function, and significantly adds to the computational cost of “observation operators”, i.e. the radiative transfer model used to convert model profiles of atmospheric properties into radiances for comparing with the observed values.
We are currently implementing the MFASIS (Method for FAst Satellite Image Synthesis) observation operator at ECMWF for monitoring cloudy reflectances. MFASIS, developed at LMU Munich, is a computationally-efficient model that allows the accurate simulation of cloud-affected visible satellite images, and is suited for operational applications in ECMWF's Integrated Forecasting System (IFS). MFASIS is based on a reflectance look-up table, and the state of the atmosphere is described by only a small number of parameters: the total optical depths and effective radii of water and ice clouds. The reflectance data are derived from the Ocean and Land Colour Instrument (OLCI) 665 nm radiances. OLCI is aboard two satellites, Sentinel-3A and Sentinel-3B, which orbit 140° out of phase with each other. The combined OLCI-A and OLCI-B instruments give good global coverage, with a short revisit time. The data are available within 3 hours of sensing, making them suitable for operational implementation. Initially we are focusing on data over the oceans, due to the added complexities of accounting for surface reflectance over land.
The assimilation of visible satellite data will be extremely important for cloud reanalysis applications, and will open the door to unprecedented exploitation of past, present and future visible cloudy radiance observations. We will present the latest results of our work, showing how the IFS cloud prediction compares to the OLCI reflectances, both before and after the assimilation of other (i.e. non-OLCI) data. These first guess and analysis departures contain information regarding the performance of the model with respect to the observations, and can also reveal potential biases in the observations themselves.
Artificial Intelligence (AI) is an explosively growing field of computer technology which is expected to transform many aspects of our society in a profound way. AI techniques are used to analyse large amounts of unstructured and heterogeneous data and to discover and exploit complex and intricate relations among these data, without recourse to an explicit analytical treatment of those relations. Such techniques are indispensable for making sense of the rapidly increasing data deluge and for responding to the challenging new demands in Weather Forecast (WF), Climate Monitoring (CM) and Decadal Prediction (DP). The use of AI techniques can lead simultaneously to: (1) a reduction of human development effort, (2) a more efficient use of computing resources and (3) an increased forecast quality. To realise this potential, a new generation of scientists combining atmospheric science domain knowledge and state-of-the-art AI skills needs to be trained. AI should become a cornerstone of future weather and climate observation and modelling systems.
Practical weather forecasting relies on the assimilation of a large number of observations of various types into Numerical Weather Prediction (NWP) models. Since the end of the 20th century, a breakthrough in the assimilation of global satellite data has resulted in the convergence of Northern and Southern Hemisphere prediction skill, demonstrating that satellite observations have become the dominant observation source. The current state-of-the-art data assimilation method is so-called 4D-Var data assimilation. For each observation type, a so-called observation operator is implemented, relating the NWP model's internal parameters to that specific observation type. In the variational data assimilation process, the internal parameters are tuned in order to reproduce the available observations. For the high-density satellite remote sensing observations, some form of data thinning is needed. The current horizontal resolution of the European Centre for Medium-Range Weather Forecasts (ECMWF) deterministic forecast is 9 km. A probabilistic ensemble forecast is obtained by running the forecast model multiple times with slightly different initial conditions. To reduce the computational cost, the resolution is reduced to 18 km for the ECMWF ensemble forecast.
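For reference, 4D-Var minimises a cost function of the standard form below (generic notation, not specific to any one centre's implementation), where x_b is the background state, B and R_i the background and observation error covariances, H_i the observation operators, and M the forecast model propagating the initial state to each observation time:

J(\mathbf{x}_0) = \frac{1}{2}\,(\mathbf{x}_0 - \mathbf{x}_b)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x}_0 - \mathbf{x}_b) + \frac{1}{2} \sum_{i=0}^{N} \left[\mathbf{y}_i - \mathcal{H}_i\big(\mathcal{M}_{0\to i}(\mathbf{x}_0)\big)\right]^{\mathrm{T}} \mathbf{R}_i^{-1} \left[\mathbf{y}_i - \mathcal{H}_i\big(\mathcal{M}_{0\to i}(\mathbf{x}_0)\big)\right]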
In general, it is considered that a higher spatial resolution used in NWP models results in a higher realism of the produced weather forecasts, in particular for precipitation forecasts. Convection-permitting spatial resolutions of the order of 1 km are used for high-resolution NWP models over limited areas. Ideally, they would also be used in global NWP models. However, this is considered to be at the limit of what is feasible following the traditional Central Processing Unit (CPU)-based computing approach, and therefore, alternative computing architectures using Graphical Processing Units (GPUs) have been investigated. For global climate models as well, a convection-permitting spatial resolution is considered highly desirable.
Medium-range weather forecasts have skill for a forecast range of the order of two weeks; long-range or seasonal forecasts have skill for a range of the order of three to six months. There is a growing interest in developing so-called subseasonal-to-seasonal forecasts to fill the gap between the medium-range and seasonal forecasts.
The standard use of climate models is to predict the expected long-term climate change, e.g., the temperature increase and associated regional climate patterns expected by the end of the century under various greenhouse gas emission scenarios. There is a growing demand and a high societal relevance for so-called climate prediction, which uses initialised climate models to give detailed predictions of the regional climate for the years and decades to come. Within the so-called Digital Europe Program (DEP), the Destination Earth (DestinE) initiative aims to develop Digital Twins (DTs) (a digital twin is a virtual representation acting as the real-time digital counterpart of a physical object or process) of the Earth to study both extreme weather events and climate change.
Observations – with a dominant role for satellite observations - form the basis of weather forecasting and climate monitoring. Typical steps in the processing of weather and climate observations are information retrieval, quality control, bias correction, and data assimilation and/or data fusion. Using the state-of-the-art 4D-Var approach, data assimilation into NWP models is costly both in terms of human development effort—in order to construct specific observation operators for each observation type—and in terms of computing power—in order to solve online the data assimilation cost function minimisation for every assimilation time step.
In [Sønderby et al, 2020], a DL-based, purely data-driven nowcasting of precipitation using sequences of satellite and radar images as input was presented. The DL network consists of an input spatial encoder, a middle temporal encoder based on ConvLSTMs, and an output spatial aggregator. The network directly produces the probability distribution of the precipitation, i.e., a probabilistic precipitation nowcast, with forecast ranges of up to 8 h. The performance of the DL nowcasting, compared to reference radar observations, is superior to that of a convection-permitting NWP model with radar data assimilation. It is noteworthy that the DL nowcasting network thus outperforms the convection-permitting NWP model on its own ground, namely short-term precipitation forecast skill.
[Sønderby et al, 2020] Kaae Sønderby, C.; Espeholt, L.; Heek, J.; Dehghani, M.; Oliver, A.; Salimans, T.; Agrawal, S.; Hickey, J.; Kalchbrenner, N. MetNet: A Neural Weather Model for Precipitation Forecasting. arXiv 2020, arXiv:2003.12140
Global Navigation Satellite System (GNSS) is mostly known for autonomous global positioning, navigation, and timing services. However, it is also a key tool for many areas of science such as quantification of global-scale geodynamical phenomena or studies on the atmosphere. GNSS is well suitable for these analyses since it offers good global coverage, high temporal resolution, and high accuracy, which is essential for many investigations.
Over the past years, modern smartphones have been developed that facilitate the collection of raw GNSS measurements. These affordable smart devices are utilized by billions of people, making crowdsourcing of GNSS observations possible. This presents a great opportunity to investigate the use of smartphone-based GNSS data for different subjects such as weather forecasting or atmosphere studies at large. By combining the observations acquired by high-quality GNSS stations with a dynamic network of GNSS-capable Internet-of-Things (IoT) devices, the sample of observations can be significantly increased and an unprecedented spatio-temporal resolution can be achieved.
The ESA-funded project CAMALIOT (Application of Machine Learning Technology for GNSS IoT data fusion) aims to collect large amounts of GNSS observations by developing an Android application and conducting a dedicated crowdsourcing campaign. One science use case is to utilize the IoT GNSS data to investigate their feasibility for augmenting the modelling and prediction of tropospheric parameters such as zenith wet delay (ZWD). ZWD is strongly related to the water vapour content in the atmosphere, which in turn drives weather systems and climate change to a great extent. Therefore, its quantification is essential for a multitude of applications, ranging from assimilation into weather forecasting models to research on understanding the atmospheric water cycle. The CAMALIOT project aims to utilize GNSS observations of various origins and quality to derive a global model that can predict ZWD in space and time for such applications.
Due to the strong connection between ZWD and atmospheric parameters, in particular water vapour, the model prediction can be enhanced by using Earth observation data to achieve higher accuracy. To properly combine GNSS observations with meteorological data, Machine Learning (ML) techniques are used to generate the predicting model.
In this study, we will present the ML framework developed for the atmospheric science use case of the CAMALIOT project and discuss how IoT and crowdsourcing data could be incorporated for scientific analysis. Preliminary investigations related to the prediction of ZWD, utilizing 3000 GNSS stations distributed over Europe, have been conducted and their results are presented. The performance of various ML methods, such as linear regression, Random Forest, and Multilayer Perceptron, is compared. Furthermore, different feature combinations, as well as training sample sizes, are investigated. The preliminary results reveal that linear methods are not able to adequately fit the observations; instead, Random Forest provides the highest model accuracy.
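A comparison of this kind could be set up with scikit-learn roughly as below; the feature matrix and ZWD targets here are random placeholders standing in for the real station features (e.g. position, height, meteorological predictors), so only the structure of the comparison is illustrated:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Placeholder predictors and ZWD targets (hypothetical, for illustration only)
X = np.random.rand(5000, 6)
zwd = np.random.rand(5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, zwd, test_size=0.2, random_state=0)
models = [('linear regression', LinearRegression()),
          ('random forest', RandomForestRegressor(n_estimators=200, random_state=0)),
          ('MLP', MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0))]
for name, model in models:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f'{name}: RMSE = {rmse:.3f}')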
Radiative transfer in the atmosphere, which describes the evolution of radiation emitted by the Sun, the Earth's surface, clouds, and greenhouse gases, is an essential component of climate and weather modeling. In climate models, the transfer of radiation is approximated by parameterizations. Theoretically, however, with sufficient computing power, the electromagnetic radiation equations could be solved, but in practice this is out of reach. The current operational radiative transfer solver in the Icosahedral Nonhydrostatic Weather and Climate Model (ICON) is ecRad, which, developed at ECMWF, is one of the most advanced available radiative transfer parameterizations. It considers surface optics, gas optics, aerosol optics and cloud optics [1]. It is an accurate radiation parameterization but remains computationally expensive. Therefore, the radiation solver is usually not invoked at every time step and only runs on a reduced spatial grid, which can affect prediction accuracy, or only in a 1D setting without 3D transfer.
In this project, we are trying to develop an ecRad solver improved by machine learning to speed up the computation without loss of accuracy. Machine learning-based parameterizations would in general make it possible to fully replace existing sub-grid-scale parameterizations once trained from data. However, such parameterizations do not necessarily preserve essential physical quantities, which can lead to instabilities, model drift or unphysical behavior, as observed in [2] and [3].
Our research focuses on two methods: random forests and physics-informed neural networks. Random forests are ensembles of decision trees that possess energy conservation properties [4]. In contrast, neural networks do not inherently preserve energy. We address this issue in two ways. First, by constraining the neural network and adding energy conservation to its loss function. Second, we continue to call ecRad at constant, though significantly reduced, time intervals and on a reduced spatial grid, thereby using it as a regulator while reducing computational costs. The underlying idea is to avoid unphysical climate drifts and to support the generalization capabilities of the machine learning method.
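One simple way to encode such a soft energy constraint in the loss is sketched below: in addition to the flux error, it penalises any mismatch in the column-integrated flux divergence (the energy absorbed by the column). This is an illustrative formulation under assumed tensor shapes, not necessarily the exact constraint used in the project:

import torch

def radiation_loss(pred_flux, true_flux, lam=0.1):
    """Energy-aware loss for a flux-predicting emulator.

    pred_flux, true_flux: tensors of shape (batch, levels) with net fluxes
    at layer interfaces; lam weights the conservation penalty."""
    mse = torch.mean((pred_flux - true_flux) ** 2)
    # Energy absorbed by the column = net flux at top minus net flux at bottom
    pred_abs = pred_flux[:, 0] - pred_flux[:, -1]
    true_abs = true_flux[:, 0] - true_flux[:, -1]
    conservation = torch.mean((pred_abs - true_abs) ** 2)
    return mse + lam * conservation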
Our first numerical experiments on an aqua-planet simulation are promising. For this idealized simulation, both random forests and neural networks predict radiative fluxes with high accuracy. We hope to obtain a valuable outcome when considering more complex datasets with seasonality and realistic topography. Our final goal is to run a full ICON simulation with a machine learning-enhanced ecRad parameterization, though the online performance remains an open question. There, energy conservation is expected to play a central role in model stability.
[1] Hogan, R. J., & Bozzo, A. (2018). A flexible and efficient radiation scheme for the ECMWF model. Journal of Advances in Modeling Earth Systems, 10, 1990-2008.
[2] Brenowitz, N. D., & Bretherton, C. S. (2018). Prognostic validation of a neural network unified physics parameterization. Geophysical Research Letters, 17, 6289–6298
[3] Brenowitz, N. D., and Bretherton, C. S. (2019), Spatially Extended Tests of a Neural Network Parametrization Trained by Coarse‐Graining, J. Adv. Model. Earth Syst., 11, 2728– 2744
[4] Yuval, J., and O’Gorman, P.A (2020). Stable machine-learning parameterization of subgrid processes for climate modeling at a range of resolutions. Nat Commun 11, 3295 (2020)
Modeling and understanding the Earth system using data-driven algorithms has become an active field of science and of Earth sciences in particular. Modeling the complex interactions among variables in both space and time is a constant and challenging endeavour. Mechanistic models derived from first principles explain variable interaction and evolution. When deriving such models is not possible, complex or uncertain, machine learning models can be an alternative [1]. Currently, Earth observation provides almost continuous space and time sampling of the Earth system, which has been used to monitor our planet with advanced, semi-automatic algorithms able to classify and detect objects and changes [2], and to retrieve relevant biogeophysical parameters of interest [3].
In this work we explore the use of machine learning techniques to characterize the complex dynamics of the Earth system. Systems are typically modeled with a set of ODEs over stochastic variables. Many methods are now available to learn ODEs from data: equation-free modeling, empirical dynamic modeling, modeling of emergent behavior, and automated inference of dynamics are some examples. Here we explore the use of spectral decompositions of kernel transfer operators. In the study of the behavior of nonlinear complex systems, transfer operators such as the Perron-Frobenius operator or the Koopman operator allow the detection of the system's time scales and the identification of metastable states in dynamic systems [4]. Transfer operators have been generalized to Reproducing Kernel Hilbert Spaces (RKHS) with the so-called kernel transfer operators, whose construction is data-driven [5].
We tackle the characterization of the terrestrial biosphere dynamics. For this, we summarize the state of an ecosystem using 12 variables collected and curated in the Earth System Data Cube (ESDC) describing the biosphere: black-sky albedo, evaporation, evaporative stress, fAPAR, gross primary productivity (GPP), latent energy (LE), net ecosystem exchange (NEE), root-zone soil moisture, sensible heat, surface soil moisture, terrestrial ecosystem respiration (TER), and white-sky albedo [6]. The data share a common spatiotemporal grid (0.25° in space and 8 days in time, over the decade 2001-2011). The spatio-temporal data cubes were used to derive eigendecompositions of kernel transfer operators, which allow us to estimate properties of the Earth system's dynamics, to carry out a qualitative study of this system, and to characterize its behavior and scales. The derived decompositions also enable interaction with other machine learning methodologies, such as explainable or causal models, or with optimal control and reinforcement learning acting on the decomposition instead of on the data or a complicated model.
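The core computation can be sketched as a kernel EDMD-style eigendecomposition built from snapshot pairs of the 12-variable state; the sketch below follows the generic kernel-embedding construction (cf. [5]) with a Gaussian kernel and should be read as an illustration rather than the exact estimator used in the study:

import numpy as np

def kernel_transfer_modes(X, Y, sigma=1.0, reg=1e-8, n_modes=5):
    """Estimate transfer-operator eigenpairs from snapshot pairs.

    X, Y: (n_samples, n_features) with y_t = x_{t+1}; sigma is the Gaussian
    kernel bandwidth, reg a regularisation constant."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    G_xx = gram(X, X)                                        # Gram matrix of inputs
    G_xy = gram(X, Y)                                        # cross Gram matrix with outputs
    K_hat = np.linalg.solve(G_xx + reg * np.eye(len(X)), G_xy)
    eigvals, eigvecs = np.linalg.eig(K_hat)
    order = np.argsort(-np.abs(eigvals))[:n_modes]           # slowest/dominant modes first
    return eigvals[order], eigvecs[:, order]                 # time scales from |eigvals|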
References
[1] M. Reichstein, G. Camps-Valls, B. Stevens, J. Denzler, N. Carvalhais, M. Jung, and Prabhat, "Deep learning and process understanding for data-driven Earth system science," Nature, vol. 566, pp. 195–204, Feb 2019.
[2] G. Camps-Valls, D. Tuia, L. Bruzzone, and J.A. Benediktsson, “Advances in hyperspectral image classification: Earth monitoring with statistical learning methods,” IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 45–54, 2014.
[3] G. Camps-Valls, J. Verrelst, J. Muñoz-Marí, V. Laparra, F. Mateo-Jimenez, and J. Gomez-Dans, "A survey on gaussian processes for earth observation data analysis: A comprehensive investigation," IEEE Geoscience and Remote Sensing Magazine, no. 6, June 2016.
[4] Schmid, P.J. (2010). Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5-28.
[5] Klus, S., Schuster, I. & Muandet, K. (2020). Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces. Journal of Nonlinear Science 30, 283–315.
[6] G. Kraemer, G. Camps-Valls, M. Reichstein, J. Smits, and M. D. Mahecha, “Summarizing the state of the terrestrial biosphere in few dimensions,” Biogeosciences, 2020.
Change detection is essential to many areas of remote sensing and is used for tasks ranging from monitoring climate change to detecting natural disasters such as deforestation, flooding and fires. Combined with volumes of high-resolution satellite time series data, which are now becoming freely available to researchers around the world, change detection algorithms make it possible to extract accurate temporal and spatial information about disturbances across the Earth's surface and to evaluate the impact of global warming.
While a new generation of neural-network-based methods is being developed, with some promising results, the bulk of the well-established approaches in this area are grounded in linear-regression-based time series analysis. These methods have roots in econometrics, and most of them were developed back when the resolution of satellite imagery was much lower. Since then, both the resolution and the length of time series data have increased considerably. Processing such amounts of data with the existing implementations would require specialized hardware and huge amounts of computational resources.
In this work, we accelerate two fundamentally different time series analysis methods using general-purpose computing on graphics processing units (GPGPU), which has been established as an effective approach to massively-parallel scientific programming. The resulting parallel implementations are up to 3-4 orders of magnitude faster than the existing solutions. Tasks that would have taken weeks or even months to execute with the existing implementations, now can be computed in hours or days.
The first method that we cover is the seasonal and trend decomposition using LOESS (STL), which can be used to extract the seasonal and gradual changes. STL is an important tool for many types of time series analyses, but becomes computationally expensive when applied to large datasets. To our knowledge, this work presents the first GPU implementation for the STL decomposition and demonstrates its effectiveness with a systematic experimental evaluation.
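For reference, the sketch below runs a standard CPU implementation of STL (statsmodels) on a synthetic 8-daily NDVI-like series; the GPU implementation presented in this work is not reproduced here, and the series, period and parameters are illustrative assumptions.

```python
# CPU reference for STL (seasonal-trend decomposition using LOESS) on a toy series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(42)
t = np.arange(46 * 10)                                   # ten years of 8-day composites
ndvi = (0.5 + 0.3 * np.sin(2 * np.pi * t / 46)           # seasonal cycle
        + 0.01 * t / 460                                 # slow gradual change
        + 0.05 * rng.standard_normal(t.size))            # noise

series = pd.Series(ndvi, index=pd.date_range("2001-01-01", periods=t.size, freq="8D"))
result = STL(series, period=46, robust=True).fit()       # robust LOESS fitting
trend, seasonal, residual = result.trend, result.seasonal, result.resid
print(trend.iloc[-1] - trend.iloc[0])                    # gradual change over the decade
```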
In the second part, we show how efficient GPU parallelization can be applied to the detection of abrupt changes caused by disturbances. Gieseke et al. 2020 showed that efficient parallelization techniques make it possible to detect single breakpoints on commodity GPU hardware and achieved considerable speedups. Multiple breakpoint detection is a much more complicated and compute-intensive task. We present a GPU implementation of BFAST Lite, a modern lightweight approach to change detection.
Our implementations of both methods are available as open-source Python packages that support both CUDA and OpenCL.
References:
F. Gieseke, S. Rosca, T. Henriksen, J. Verbesselt, and C. E. Oancea, Massively parallel change detection for satellite time series data with missing values, in ICDE, 2020, pp. 385–396.
R. B. Cleveland, W. S. Cleveland, J. E. McRae, and I. Terpenning, STL: A seasonal-trend decomposition procedure based on loess, J. Off. Stat., 6 (1990), pp. 3–73.
D. Masiliūnas, N. E. Tsendbazar, M. Herold, and J. Verbesselt, BFAST Lite: A Lightweight Break Detection Method for Time Series Analysis. Remote Sensing. 2021; 13(16):3308.
River discharge is an essential variable of interest because it quantifies the amount of surface freshwater available to life forms and human activities. However, it also remains a complicated variable to observe and measure, especially from space using satellites. Indirect approaches infer discharge from models combined with river-related observations such as water surface elevations and/or widths. Our work focuses here on one-dimensional studies employing inverse methods to infer river discharge. The issue is that those inverse approaches generally require a first estimate or first-guess value of the discharge as initialization, and their performance greatly depends on the quality of such first guesses. Our study builds on the pioneering work of [Beck et al. 2015], where global gridded maps of streamflow characteristics of small-to-medium-sized natural catchments were derived from neural network ensemble models. The objective is to train models able to estimate the river discharge distribution at once, as a set of daily flow percentiles, at global scale. For the present work, we first used up-to-date datasets of climate, topography, land cover, geology, and soil variables, and supplemented them with a set of river geomorphology variables. All these descriptive features and parameters are used as input variables for the discharge distribution models we seek to build. The wider initial training dataset has been fully used to train neural-network-ensemble-based models over both small-to-medium-sized and large catchments, and over both natural and anthropogenic-influenced catchments. Overall, our results show that we were able to train satisfactory models for all kinds of catchments and to obtain the full river flow distribution at once. Finally, to better fit our one-dimensional applications, the resulting product was reprojected onto a global river centerlines map. To assess its potential contribution, the produced discharge distributions were first evaluated against the GRADES discharge vector product through direct comparison and when used as first guess in a discharge-inferring inverse application.
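The following minimal sketch illustrates, on invented data and sizes, the general idea of a neural-network-ensemble regression of daily flow percentiles from catchment descriptors; it is not the configuration trained in this study.

```python
# Sketch of a bootstrap ensemble of MLPs regressing flow percentiles from catchment features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_catchments, n_features, n_percentiles = 500, 20, 17      # sizes are assumptions
X = rng.standard_normal((n_catchments, n_features))        # climate/topography/soil/geomorphology
Y = np.cumsum(rng.random((n_catchments, n_percentiles)), axis=1)  # monotone flow percentiles

ensemble = []
for seed in range(10):                                      # bootstrap ensemble of MLPs
    idx = rng.integers(0, n_catchments, n_catchments)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 32),
                                       max_iter=2000, random_state=seed))
    ensemble.append(model.fit(X[idx], Y[idx]))

pred = np.mean([m.predict(X) for m in ensemble], axis=0)    # ensemble-mean percentile curves
print(pred.shape)                                           # (n_catchments, n_percentiles)
```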
References
Beck, H. E., de Roo, A., & van Dijk, A. I. J. M. (2015). Global Maps of Streamflow Characteristics Based on Observations from Several Thousand Catchments, Journal of Hydrometeorology, 16(4), 1478-1501, https://doi.org/10.1175/JHM-D-14-0155.1.
Soil moisture (θ) is an essential parameter in irrigation management, transport of pollutants and estimation of energy, heat and water balances. In climate systems, θ drives plant transpiration and photosynthesis and impacts land-atmosphere interactions. There are two common approaches to soil moisture measurement: in-situ probes and satellite sensors. The in-situ measurements are point measurements with limited spatial representativeness, but they provide real-time, up-to-date information. However, the major limiting factors for the widespread deployment of in-situ networks are the cost and complexity of deploying a network that covers a large geographical area. Satellite-based soil moisture observations generally cover large geographical areas at global scale but provide very coarse resolution, which is often not suitable for more accurate estimations.
In this paper, we present an AI-driven framework that is capable of ingesting multi-modal data including in-situ sensor measurements, aerial surveys including drone footage, and satellite observations. Using our platform, we can produce high-resolution soil moisture estimates for a large geographical area. Our proposed multi-layer multi-modal framework can combine satellite, drone and in-situ sensor observations to train neural networks which provide more accurate soil moisture estimation over a large geographical area. The two major contributions made in this paper are: (i) the spatial variability and temporal stability of surface soil moisture are captured with a multilayer perceptron (MLP) and a long short-term memory (LSTM) network, respectively; the two neural networks are merged into the same model and trained end to end; (ii) a framework using low-cost soil moisture sensors for real-time, accurate soil moisture estimation over a large region is presented.
The complex interaction between various environmental variables such as soil texture and structure, topographic features, land cover and land use, etc., makes the surface soil moisture vary at field scales, but a certain stability of the soil moisture can be observed across a range of temporal scales. Rather than isolating and measuring those factors in the presence of the temporal stability (TS) of soil moisture, we propose using the cell state vector of an LSTM to capture the TS implicitly. The spatial variability of the soil moisture is modelled with a multilayer perceptron. The microwave observations and optical measurements from satellites are fused in the model and downscaled for a high-resolution product. Different from many downscaling methods which use a single in-situ value as the reference data in each training step, our MLP takes the cell state vector from the LSTM as the target. This makes the model more robust to representativeness errors in the data. It also helps relax the constraints on temporal collocation among multiple remote sensing products. Taking the cell state vector as a bridge, the LSTM and MLP can be concatenated and the framework can be trained end to end.
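A schematic PyTorch sketch of one possible coupling of the two networks is given below; layer sizes, input layout and the loss combination are assumptions for illustration only, not the exact architecture described above.

```python
# Schematic coupling of an LSTM (temporal stability via its cell state) with an MLP.
import torch
import torch.nn as nn

class SoilMoistureNet(nn.Module):
    def __init__(self, n_temporal_feats=4, n_spatial_feats=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_temporal_feats, hidden, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(n_spatial_feats, 64), nn.ReLU(),
                                 nn.Linear(64, hidden))        # regresses the cell state
        self.head = nn.Linear(hidden, 1)                       # soil moisture estimate

    def forward(self, temporal_seq, spatial_feats):
        _, (h, c) = self.lstm(temporal_seq)                    # c: (layers, batch, hidden)
        cell_state = c[-1]                                      # temporal-stability summary
        spatial_code = self.mlp(spatial_feats)                  # high-resolution predictors
        theta = self.head(spatial_code)                         # downscaled soil moisture
        return theta.squeeze(-1), cell_state, spatial_code

model = SoilMoistureNet()
seq = torch.randn(16, 30, 4)        # batch of 30-step coarse satellite time series (toy data)
feats = torch.randn(16, 8)          # collocated high-resolution features (toy data)
theta, c, z = model(seq, feats)
# End-to-end loss: fit in-situ theta and tie the MLP code to the LSTM cell state
loss = nn.functional.mse_loss(theta, torch.rand(16)) + nn.functional.mse_loss(z, c)
loss.backward()
```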
A low-cost sensor network consisting of up to 10 capacitance-based sensors and 1 time-domain reflectometry (TDR) sensor was used to collect the in-situ θ data at a grassland site (Johnstown Castle, Ireland). A portion of the in-situ θ was used as the seed input into the framework while the associated spaceborne data were retrieved and ingested in real time. The framework generates the cell state vector from the spaceborne data internally and continuously outputs the soil moisture estimates. The estimated values are then validated against the in-situ measurements in terms of root mean squared difference.
Acknowledgement: This research was carried out as part of the Terrain AI project, funded by Microsoft Ireland and SFI (Science Foundation Ireland) under Grant number [SFI 20/SPP/3705].
The models participating in the Coupled Model Intercomparison Project Phase 6 (CMIP6) deliver insights into the evolution of the Earth's climate. According to a recent study of the CMIP6 ensemble mean (Tebaldi, Debeire, Eyring et al., 2020), global precipitation changes follow the magnitude of the warming. However, Earth system models exhibit a large spread in simulated precipitation projections over land.
In this study, we present a causal discovery approach that can potentially constrain the precipitation change over land globally. This approach performs a process-oriented model evaluation similar to Nowack et al. (2020).
Our method consists of two main steps:
First, we reduce the spatial dimensions of the preprocessed reanalysis data and the CMIP6 ensemble data. This is performed by learning a PCA-varimax transformation on the reanalysis dataset. We then apply the same transformation to the CMIP6 ensemble data. This step reduces the number of spatial dimensions to a smaller set of components. The components have the property of being spatially confined and can be easily interpreted. We understand them as the main modes of variability.
Then, in a second step, we estimate the time-lagged dependency structure between the modes of variability, which is done by applying the PCMCI causal discovery algorithm (Runge et al., 2019). The algorithm outputs a causal graph for each dataset. Each node of the causal graph represents a mode of variability and each edge indicates the presence of a time-lagged causal dependency between the modes.
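A minimal sketch of the two steps on toy data is shown below, using a standard varimax rotation of PCA loadings and the PCMCI implementation in the tigramite package (Runge et al., 2019). All sizes and thresholds are illustrative, and the ParCorr import path differs between tigramite versions.

```python
# Varimax-rotated PCA components followed by PCMCI causal discovery on toy data.
import numpy as np
from sklearn.decomposition import PCA
import tigramite.data_processing as pp
from tigramite.pcmci import PCMCI
try:
    from tigramite.independence_tests import ParCorr            # tigramite 4.x
except ImportError:
    from tigramite.independence_tests.parcorr import ParCorr    # tigramite >= 5

def varimax(loadings, gamma=1.0, n_iter=50, tol=1e-6):
    """Classic varimax rotation of a loadings matrix (variables x components)."""
    p, k = loadings.shape
    R, d = np.eye(k), 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.diag(L.T @ L))))
        R, d_old, d = u @ vt, d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

# Toy "reanalysis" field: 600 time steps x 500 grid cells
field = np.random.randn(600, 500)
pca = PCA(n_components=10).fit(field)
rotated = varimax(pca.components_.T)             # spatially confined loadings
components = field @ rotated                     # modes-of-variability time series

# Time-lagged causal discovery between the modes
dataframe = pp.DataFrame(components, var_names=[f"mode{i}" for i in range(10)])
pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci.run_pcmci(tau_max=5, pc_alpha=0.05)
print(results["p_matrix"].shape)                 # (n_modes, n_modes, tau_max + 1)
```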
Finally, we evaluate the ability of CMIP6 models to represent atmospheric dynamical interactions by comparing causal networks learned from their simulations to those derived from the reanalysis dataset. We indeed find a relationship between the ability of the CMIP models to represent dynamical interactions and projected precipitation changes over land. In addition, we test the robustness of the relationship for different reanalysis datasets and for different time resolutions.
References:
Nowack, P., Runge, J., Eyring, V. et al. Causal networks for climate model evaluation and constrained projections. Nat Commun 11, 1415. https://doi.org/10.1038/s41467-020-15195-y, 2020.
Runge, J., P. Nowack, M. Kretschmer, S. Flaxman, D. Sejdinovic, Detecting and quantifying causal associations in large nonlinear time series datasets. Sci. Adv. 5, eaau4996, 2019.
Runge, J, Causal Network Reconstruction from Time Series: From Theoretical Assumptions to Practical Estimation. Chaos: An Interdisciplinary Journal of Nonlinear Science 28 (7): 075310. https://aip.scitation.org/doi/10.1063/1.5025050, 2018.
Tebaldi, C., Debeire, K., Eyring, V., Fischer, E., Fyfe, J., Friedlingstein, P., Knutti, R., Lowe, J., O'Neill, B., Sanderson, B., van Vuuren, D., Riahi, K., Meinshausen, M., Nicholls, Z., Hurtt, G., Kriegler, E., Lamarque, J.-F., Meehl, G., Moss, R., Bauer, S. E., Boucher, O., Brovkin, V., Golaz, J.-C., Gualdi, S., Guo, H., John, J. G., Kharin, S., Koshiro, T., Ma, L., Olivié, D., Panickal, S., Qiao, F., Rosenbloom, N., Schupfner, M., Seferian, R., Song, Z., Steger, C., Sellar, A., Swart, N., Tachiiri, K., Tatebe, H., Voldoire, A., Volodin, E., Wyser, K., Xin, X., Xinyao, R., Yang, S., Yu, Y., and Ziehn, T.: Climate model projections from the Scenario Model Intercomparison Project (ScenarioMIP) of CMIP6, Earth Syst. Dynam. Discuss. [preprint], https://doi.org/10.5194/esd-2020-68, in review, 2020.
Weather and climate are well-known exemplars of chaotic systems exhibiting extreme sensitivity to initial conditions. Initial condition errors are subject to exponential growth on average, but the rate and character of such growth are highly state-dependent. In an ideal setting where the degree of predictability of the system is known in real time, it may be possible and beneficial to take adaptive measures. For instance, a local decrease in predictability may be counteracted by increasing the time or space resolution of the model computation, or the ensemble size in the context of ensemble-based data assimilation or probabilistic forecasting.
Local Lyapunov exponents (LLEs) describe growth rates along a finite-time section of a system trajectory. This makes the LLEs the ideal quantities to measure the local degree of predictability, yet a main bottleneck for their real-time use in operational scenarios is the huge computational cost. Calculating LLEs involves computing a long trajectory of the system, propagating perturbations with the tangent linear model, and repeatedly orthogonalising them. We investigate if machine learning (ML) methods can estimate the LLEs based only on information from the system’s solution, thus avoiding the need to evolve perturbations via the tangent linear model. We test the ability of four algorithms (regression tree, multilayer perceptron, convolutional neural network and long short-term memory network) to perform this task in two prototypical low dimensional chaotic dynamical systems. Our results suggest that the accuracy of the ML predictions is highly dependent upon the nature of the distribution of the LLE values in phase space: large prediction errors occur in regions of the attractor where the LLE values are highly non-smooth. In line with classical dynamical systems studies, the neutral LLE is more difficult to predict. We show that a comparatively simple regression tree can achieve performance that is similar to sophisticated neural networks, and that the success of ML strategies for exploiting the temporal structure of data depends on the system dynamics.
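The sketch below illustrates the simplest configuration tested, a regression tree mapping the system state to LLE values; the states and LLE labels here are synthetic stand-ins, since computing real labels requires the tangent linear model and repeated orthogonalisation, which are not reproduced.

```python
# Regression tree predicting (stand-in) local Lyapunov exponents from the system state.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
states = rng.uniform(-20, 20, size=(5000, 3))            # Lorenz-63-like (x, y, z) samples
lles = np.stack([np.tanh(states[:, 0] / 10.0),            # smooth stand-ins for the three LLEs;
                 0.1 * np.sin(states[:, 1] / 5.0),        # real labels would come from
                 -np.abs(states[:, 2]) / 20.0], axis=1)    # tangent-linear + QR computations

X_tr, X_te, y_tr, y_te = train_test_split(states, lles, test_size=0.2, random_state=0)
tree = DecisionTreeRegressor(max_depth=12).fit(X_tr, y_tr)
print("R^2 on held-out states:", tree.score(X_te, y_te))
```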
Earth observation at high spatial and temporal resolution is now possible with the availability of both the Copernicus Sentinels and Landsat sensors. Monitoring the terrestrial biosphere at meter scales is of paramount relevance for detecting changes and anomalies as well as for retrieving biophysical parameters with unprecedented accuracy. Exploiting the temporal information is key for assessing the status and health of vegetation covers, for deriving phenology models of crops and for monitoring forests. Nevertheless, optical remote sensing data are typically hampered by the presence of clouds, shadows and sensor distortions, which impede effective analysis of the Earth's surface through time [Shen 2015].
In this context, Earth surface gap filling and forecasting can help in improving existing datasets and deriving useful products for monitoring ecosystems [Goward 2001], as well as to estimate impacts of catastrophes and extreme events, whose number and severity have significantly increased in recent years due to climate change. Reliable and explainable forecasts could be directly translated into improved decision-making by governmental bodies, as well as the avoidance of potential human casualties and biodiversity losses.
Gap filling has traditionally been tackled with approaches ranging from simple thresholding, rank-based (e.g., mean, median, extreme) filters and polynomial fitting [Kandasamy 2013] to more advanced Kalman filtering [Moreno-Martinez, 2020], Gaussian processes [Svendsen 2021] and (recurrent) neural networks [Pelletier 2019]. Yet, it is customary to approach the problem in a pixel-wise manner. Recently, Earth surface forecasting has been addressed as a guided video prediction task, for which several deep learning models have shown promising results [Requena-Mesa, 2021]. However, the quality of the spatio-temporal series used as input to these models, acquired by satellites and weather stations, is often low. In practice, these sequences present accidental noise caused by clouds, shadows or passing airplanes, as well as signal instability or noise originating from the acquiring sensors. The traditional approach to cope with this noise involves the usage of quality masks, which serve to discard the corrupted regions during training and for evaluation purposes. This reduction of the effective data for learning implies a perceptual uncertainty, i.e., a limitation of the knowledge of the real state of the world provided by noisy observations, which compromises the accuracy of the predictions.
In this work, we alternatively introduce a deep convolutional neural network for spatio-temporal remote sensing data interpolation and gap filling. The method is purely unsupervised, learns entirely from noisy data and can cope with different noise sources and distortions.
Our approach is evaluated both qualitatively and quantitatively in the interpolation of Sentinel-2 [Requena-Mesa, 2021] and alternatively Landsat [Moreno-Martinez, 2020] time series. Quantitative and qualitative analyses of results are conducted, not only analyzing the visual coherence of the resulting denoised RGB+NIR channels, but also computing vegetation indices from them, which describe the ecosystem state and evolution. We evaluate both the standard Normalized Difference Vegetation Index (NDVI) time series and the kernel NDVI (kNDVI), which highly correlates with vegetation photosynthetic activity and consistently improves accuracy in monitoring key parameters, such as leaf area index, gross primary productivity, and sun-induced chlorophyll fluorescence [Camps-Valls, 2021b].
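As a worked example of the indices used for evaluation, the snippet below computes NDVI and kNDVI from red and NIR channels; following the common choice sigma = 0.5(NIR + red), the kNDVI reduces to tanh(NDVI^2) [Camps-Valls, 2021b]. Array shapes are illustrative.

```python
# NDVI and kernel NDVI (kNDVI) from denoised red and NIR reflectances.
import numpy as np

nir = np.random.rand(256, 256).astype(np.float32)    # toy denoised NIR reflectance
red = np.random.rand(256, 256).astype(np.float32)    # toy denoised red reflectance

ndvi = (nir - red) / (nir + red + 1e-8)
sigma = 0.5 * (nir + red)                             # common per-pixel length scale
kndvi = np.tanh(((nir - red) / (2.0 * sigma + 1e-8)) ** 2)   # equals tanh(ndvi**2)
```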
References
Goward, S. N., Masek, J. G., Williams, D. L., Irons, J. R., & Thompson, R. J. (2001). The Landsat 7 mission: Terrestrial research and applications for the 21st century. Remote Sensing of Environment, 78(1-2), 3-12.
Kandasamy, S., Baret, F., Verger, A., Neveux, P., & Weiss, M. (2013). A comparison of methods for smoothing and gap filling time series of remote sensing observations–application to MODIS LAI products. Biogeosciences, 10(6), 4055-4071.
Pelletier, C., Webb, G. I., & Petitjean, F. (2019). Temporal convolutional neural network for the classification of satellite image time series. Remote Sensing, 11(5), 523.
Requena-Mesa, C., Benson, V., Reichstein, M., Runge, J., & Denzler, J. (2021). EarthNet2021: A large-scale dataset and challenge for Earth surface forecasting as a guided video prediction task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1132-1142).
Shen, H., Li, X., Cheng, Q., Zeng, C., Yang, G., Li, H., & Zhang, L. (2015). Missing information reconstruction of remote sensing data: A technical review. IEEE Geoscience and Remote Sensing Magazine, 3(3), 61-85.
Svendsen, D. H., Piles, M., Muñoz-Marí, J., Luengo, D., Martino, L., & Camps-Valls, G. (2021). Integrating domain knowledge in data-driven earth observation with process convolutions. IEEE Transactions on Geoscience and Remote Sensing.
Gross primary productivity (GPP) is the largest carbon influx in terrestrial ecosystems and a key component for diagnosing the global carbon cycle. Light use efficiency (LUE) models are typical approaches for estimating GPP, defining GPP as the product of absorbed photosynthetically active radiation (APAR) and a maximum light use efficiency (εmax) scaled by sensitivity functions of environmental factors. However, the environmental factors and the functional forms of these sensitivity functions differ across LUE models, indicating the need to assess the likelihood of different environmental factors, sensitivity functions and model structures. In these models, parameters controlling the sensitivity of GPP to environmental factors are usually prescribed according to biomes or plant functional types, and/or calibrated using observed GPP. Using biome-specific parameterization nevertheless neglects the within-biome variability of vegetation properties, challenging the model parameterizations in regions without observational data. Therefore, the response of GPP to these environmental factors under global climate change, as estimated by the respective sensitivity functions, remains highly uncertain on the global scale. Additionally, these models have been developed mostly assuming either a big-leaf or a two-leaf canopy approximation. Given the uncertainties resulting from the selection of drivers, functional forms, canopy approximation approaches and parameterization, we explore the information contained in satellite-based and in-situ Earth observation datasets to answer the following questions:
1) which environmental factors are the dominant drivers of the variability of GPP? And what functional form can represent the response of GPP to these environmental factors?
2) under which conditions two-leaf model structures provide better GPP estimates compared to more common big-leaf approaches?
3) can the calibrated parameters be upscaled to the global scale?
First, we collected from the literature sensitivity functions of temperature (fT), vapor pressure deficit including the effect of CO2 (fVPD), soil water supply (fW), APAR (fL) and diffuse radiation fraction (represented by CI; the function is named fCI). To test the null hypothesis that GPP is not sensitive to these environmental factors, we additionally included functions representing no sensitivity to temperature (T), vapour pressure deficit (VPD), soil water supply (W), CI, CO2 or light saturation. The results showed that overall T, VPD, W, light saturation, diffuse fraction and CO2 are all dominant drivers of GPP, but with different contributions across biomes and climate types. We find that none of the previously published model structures performed as well as the best big-leaf model (BLbest) selected. At daily, weekly and monthly scales, the best model's median NSE across sites was 0.73, 0.79 and 0.82, respectively, but poorer at the annual scale (NSE = 0.23), emphasizing the common limitation of current models in describing the interannual variability of GPP.
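The sketch below writes out an illustrative big-leaf LUE model of the kind compared here, with example candidate sensitivity functions; the specific functional forms, parameter values and the multiplicative combination are assumptions for the sketch, not the forms selected by this analysis (the diffuse-fraction term fCI is omitted).

```python
# Illustrative big-leaf LUE model: GPP = eps_max * APAR * product of sensitivity functions.
import numpy as np

def f_T(T, T_opt=20.0, T_width=12.0):
    return np.exp(-((T - T_opt) / T_width) ** 2)     # peaked temperature response (assumed form)

def f_VPD(vpd, k=0.8):
    return np.exp(-k * vpd)                          # decay with vapour pressure deficit (assumed)

def f_W(w):
    return np.clip(w, 0.0, 1.0)                      # relative soil water supply (assumed)

def f_L(apar, half_sat=300.0):
    return 1.0 / (1.0 + apar / half_sat)             # light-saturation scaler (assumed)

def gpp_big_leaf(apar, T, vpd, w, eps_max=1.8):
    """GPP as eps_max * APAR scaled by environmental sensitivity functions."""
    return eps_max * apar * f_T(T) * f_VPD(vpd) * f_W(w) * f_L(apar)

print(gpp_big_leaf(apar=250.0, T=18.0, vpd=1.0, w=0.7))
```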
We further approached hypothesis testing via changes in model structure, given that LUE models can be divided into big-leaf and two-leaf models according to whether APAR and εmax are differentiated between sunlit and shaded leaves in a vegetation canopy. To assess the difference between big-leaf and two-leaf LUE models, we selected the best two-leaf model (TLbest) according to the same method as for the big-leaf models. On the global scale, BLbest and TLbest were not significantly different (p > 0.05 according to the Kolmogorov-Smirnov test between the site-level NSE). However, two-leaf approaches were more robust in explaining GPP dynamics when the leaf area index was high (> 1.5 m2/m2) under dry and hot weather conditions (T > 18°C and VPD > 1.2 kPa). Our study emphasizes the importance of within-canopy GPP dynamics for diagnosing carbon assimilation under global warming and drying trends.
Lastly, we test the hypothesis that one or multiple calibrated parameters can be predicted from vegetation, climate and soil properties that affect the sensitivity of GPP. First, we prescribed the parameters using the per-biome mean and median and the global mean and median of the calibrated parameters. The results showed poor GPP estimates (NSE < 0 at more than 42% of sites) even though the functional form and model structure were the best, highlighting limitations of regionalization approaches based on plant functional types. Second, we collect vegetation index and long-term (30-year) climate data to predict the parameters using random forest and neural network methods. The predicted parameters will be compared with globally optimized parameters based on the Earth System Data Cube and sun-induced fluorescence. These results can help us understand the underlying controls on parameter variability and predict their spatial patterns.
In general, our study, from assessing environmental sensitivity functions and canopy structures of LUE models to upscaling LUE model parameters based on Earth observation data, highlights the importance of reducing model uncertainties arising from the choice of drivers, functional forms, and parameterization approaches.
Soil moisture (SM) is a key variable which controls the exchange of water, energy and carbon fluxes between the land surface and the atmosphere. Therefore, accurate characterization of the spatial distribution and temporal variations of SM is critical for many regional-scale applications, including meteorology, hydrology, flood forecasting, drought monitoring, agriculture and climate change impact studies. Many global estimates of surface SM are provided by satellite sensors, but at coarse spatial resolutions (coarser than 25 km), which are not very suitable for regional hydrologic and agricultural applications. Here we use the parallel data assimilation framework (PDAF) to assimilate coarse-resolution satellite-derived soil moisture data into the Community Land Model (CLM3.5). Using this framework, we assimilate the surface SM data from the European Space Agency Climate Change Initiative (ESA-CCI) into CLM3.5 with an Ensemble Kalman Filter (EnKF), producing a 16-year (2000–2015), high-resolution, spatially and temporally consistent surface soil moisture reanalysis dataset (3 km, daily) over Europe. Given the large data volumes produced by these ensemble-based simulations, the parallel Helmholtz Analytics Toolkit (HeAT) is used to accelerate data analysis and post-processing of the ensemble-based model outputs. We validate our results with daily time series of observed surface SM data from 112 in-situ stations across Europe. This comparison shows that the assimilated SM captures the daily, seasonal and annual variations in soil moisture fairly well, with RMSE ranging from 0.04 to 0.06 m³m⁻³ and overall correlation above 0.50 for most stations. In this presentation, we describe the validation of this newly created surface SM reanalysis against in-situ observations, global satellite and reanalysis products, and present benchmark results showing the computational efficiency of the workflow on high-performance computing infrastructure. The dataset presented here provides long-term daily surface SM at a high spatiotemporal resolution and will be beneficial for many hydrological applications from regional to continental scales.
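For illustration, the following minimal sketch shows a stochastic EnKF analysis step nudging a fine-grid soil moisture ensemble towards a single coarse observation; the real system uses PDAF coupled to CLM3.5, and the ensemble size, error values and averaging observation operator here are assumptions.

```python
# Stochastic EnKF analysis step for a single coarse soil moisture observation.
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_state = 32, 100                  # ensemble members, fine-grid soil moisture cells
X = 0.25 + 0.05 * rng.standard_normal((n_state, n_ens))   # prior ensemble (m3/m3)

H = np.full((1, n_state), 1.0 / n_state)  # obs operator: coarse pixel = mean of fine cells
y_obs, r_obs = np.array([0.30]), 0.04**2  # satellite-like observation and its error variance

X_mean = X.mean(axis=1, keepdims=True)
A = X - X_mean                            # ensemble anomalies
P_ht = A @ (H @ A).T / (n_ens - 1)        # cross covariance P H^T
S = H @ P_ht + r_obs                      # innovation covariance (scalar observation)
K = P_ht / S                              # Kalman gain
perturbed = y_obs + np.sqrt(r_obs) * rng.standard_normal(n_ens)
X_a = X + K @ (perturbed[None, :] - H @ X)   # analysis ensemble
print(X.mean(), X_a.mean())               # the analysis mean moves towards the observation
```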
Satellite altimetry (SA) provides spatial and temporal sea level measurements with respect to an Earth-fixed geocentric reference frame. There are, however, some challenges in using the data to retrieve the spatio-temporal distribution of sea level. One of the major constraints is the lack of measured data due to the satellite revisit time (cycles). This can be overcome by extending SA data temporally (resampling) to find a continuous trend of sea level. To do this, multiple time series analysis models have been used, in which the periodic terms and linear trends of sea level variations are fitted and the data gaps of SA are resampled. The sea level rise trend is significant at the locations of the studied tide gauges (TGs) along the Baltic Sea coast.
In this study, 24 years of SA data are resampled to retrieve a more precise sea level trend in the Baltic Sea. The SA data are compiled from eight SA missions, including ERS-2, Envisat, SARAL, Jason-1, Jason-2, Jason-3, Sentinel-3A and Sentinel-3B, within a particular radius near each tide gauge. The SA data are resampled by machine learning and time series methods, including Autoregression (AR), Moving Average (MA), Autoregressive Moving Average (ARMA), Autoregressive Integrated Moving Average (ARIMA), Seasonal Autoregressive Integrated Moving Average (SARIMA) and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models, to fill the gaps, and validated by comparison with tide gauge (TG) observations. The sea level trend at each tide gauge was computed from satellite altimetry and tide gauges over the period 1995 to 2019. This methodology also requires the use of high-resolution geoid models to maximize the opportunities for deriving realistic sea level from SA data. Careful spatial data selection and outlier removal using data screening are also prerequisites.
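A minimal sketch of the gap-filling idea with one of the listed models (a seasonal ARIMA fitted in state-space form, which handles missing values natively) is given below; the synthetic series, model order and monthly grid are illustrative assumptions.

```python
# SARIMA gap filling of a synthetic sea level anomaly series near a tide gauge.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
t = pd.date_range("1995-01-01", "2019-12-01", freq="MS")
sla = (2.0 * (np.arange(t.size) / 12.0)                    # mm: linear trend
       + 30 * np.sin(2 * np.pi * np.arange(t.size) / 12)   # annual cycle
       + 10 * rng.standard_normal(t.size))                 # noise
series = pd.Series(sla, index=t)
gap_idx = rng.choice(np.arange(12, t.size), 60, replace=False)
series.iloc[gap_idx] = np.nan                              # altimetry revisit gaps

model = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
result = model.fit(disp=False)                             # Kalman filter handles missing values
filled = series.fillna(result.predict(start=series.index[0], end=series.index[-1]))
trend_mm_per_yr = np.polyfit(np.arange(t.size) / 12.0, filled.values, 1)[0]
print(f"estimated trend: {trend_mm_per_yr:.2f} mm/yr")
```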
Grassland management in intensive agricultural environments is increasingly constrained by environmental regulations aiming for more sustainable production. Real-time prediction of grassland productivity and timely monitoring of grasslands are essential in order to make these agricultural activities more sustainable and to ensure food security. Finding the best time for mowing is a function of biomass productivity and quality. Agroecosystem models have proven to be suitable tools for simulating grass growth under the influence of soil, climate and management. However, due to factors such as heterogeneity in plant driving variables, lack of information on soil and management data, and grass species, as well as uncertainty in the agroecosystem models themselves, predicting grassland productivity using these models has remained a challenge. Remote sensing (RS) has the potential to provide timely and frequent observations of the land surface at a range of spatial scales, which can be useful for measuring the productivity of grasslands. However, the approach suffers from limitations such as revisit frequency, cloud cover, the limited set of variables measured, and the lack of capability for future prediction. Therefore, it is desirable to combine crop growth models and RS observations using data assimilation techniques. In this study, we exploited the best properties of RS data and combined them with the predictive and explanatory abilities of crop growth models. We implemented a particle filtering algorithm to assimilate LAI (Leaf Area Index) values derived from Sentinel-2 into an agroecosystem model at four case studies covering different parts of Germany. We estimated the initial uncertainty ranges of model parameters from a multi-objective uncertainty-based calibration algorithm. Additionally, we compared the suitability of different spatial resolutions in reducing the uncertainty in estimated parameters and improving the performance of grassland models for predicting grassland productivity at different times within growing seasons. We found that reducing the spatial resolution of LAI from the original 10 m to over 100 m decreases the accuracy of the grassland productivity estimates substantially. However, the computational time also decreases, which is an advantage for data assimilation at national scales. We conclude that it is important to consider reasonable spatiotemporal scales and to find trade-offs between accuracy and efficiency depending on the scale of the area under study.
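The following minimal sketch shows a particle-filter weighting and resampling step for assimilating a single Sentinel-2 LAI observation into an ensemble of modelled states; the toy prior, observation error and particle count are assumptions, not the configuration of this study.

```python
# Particle filter update for one LAI observation: weighting and systematic resampling.
import numpy as np

rng = np.random.default_rng(7)
n_particles = 200
lai_particles = rng.uniform(1.0, 6.0, n_particles)      # prior ensemble of modelled LAI (toy)
lai_obs, obs_sigma = 3.2, 0.4                           # Sentinel-2-derived LAI and its error (assumed)

# Importance weights from a Gaussian observation likelihood
weights = np.exp(-0.5 * ((lai_particles - lai_obs) / obs_sigma) ** 2)
weights /= weights.sum()

# Systematic resampling to avoid weight degeneracy
positions = (rng.random() + np.arange(n_particles)) / n_particles
resampled = lai_particles[np.searchsorted(np.cumsum(weights), positions)]
print(lai_particles.mean(), resampled.mean())           # posterior mean moves towards the observation
```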
Clouds play a key role in weather and climate but are quite challenging to simulate with global climate models as the relevant physics include non-linear processes on scales covering several orders of magnitude in both the temporal and spatial dimensions. The numerical representation of clouds in global climate models therefore requires a high degree of parameterization, which makes a careful evaluation a prerequisite not only for assessing the skill in reproducing observed climate but also for building confidence in projections of future climate change. Current methods to achieve this usually involve the comparison of multiple large-scale physical properties in the model output to observational data. Here, we introduce a two-stage data-driven machine learning framework for process-oriented evaluation of clouds in climate models based directly on widely known cloud types. The first step relies on CloudSat satellite data to assign cloud labels in line with cloud types defined by the World Meteorological Organization (WMO) to MODIS pixels using deep neural networks. Since the method is supervised and trained on labels provided by CloudSat, the predicted cloud types remain objective and do not require a posteriori labeling. The second step consists of a regression algorithm that predicts fractional cloud types from retrieved cloud physical variables. This step aims to ensure that the method can be used with any data set providing physical variables comparable to MODIS. In particular, we use a Random Forest regression that acts as a transfer model to evaluate the spatially relatively coarse output of climate models and allows the use of varying input features. As a proof of concept, the method is applied to coarse grained ESA Cloud CCI data. The predicted cloud type distributions are physically consistent and show the expected features of the different cloud types. This demonstrates how advanced observational products can be used with this method to obtain cloud type distributions from coarse data, allowing for a process-based evaluation of clouds in climate models.
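A minimal sketch of the second, regression stage is given below: a multi-output Random Forest mapping grid-cell cloud physical variables to fractional cloud-type occurrence. The feature set, the number of cloud types and the synthetic data are illustrative assumptions.

```python
# Multi-output Random Forest regression of fractional cloud types from cloud physical variables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n_cells, n_features, n_cloud_types = 2000, 6, 8      # grid cells, physical variables, WMO types (assumed)
X = rng.random((n_cells, n_features))                # grid-cell mean cloud physical variables (toy)
Y = rng.dirichlet(np.ones(n_cloud_types), n_cells)   # fractional occurrences summing to one per cell

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, Y)
fractions = rf.predict(X[:5])
print(fractions.sum(axis=1))                         # approximately one per grid cell
```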
Ocean dynamics are essential for the functioning of the Earth system, with an important role in climate regulation and the maintenance of global biodiversity. Several processes (regional and global) can drive storage and transport of heat, carbon, nutrients, and marine organisms and are crucial for providing many of the ecosystem goods and services that enable life on Earth. Ocean dynamics are regulated by processes interacting and operating over wide ranges of spatial and temporal scales, and inherently involve both horizontal and vertical dimensions, making them exceedingly difficult to monitor and to understand fully. Ocean observing systems, including remote sensing and in-situ platforms, have been developed to enable the description of these intricate processes and to develop the knowledge required for the sustainable management of ocean resources.
Over the last decades, the collection of high-quality vertical ocean profiles has increased considerably after the introduction of the Argo program. Argo floats acquire accurate hydrographic profiles of ocean variables, such as temperature and salinity, down to a depth of 1000-2000 m in different regions of the global ocean. The spatio-temporal distribution of these in-situ measurements is extremely sparse when compared to satellite products. On the other hand, while satellite data have impressive spatio-temporal coverage, they provide information only about surface properties. Methods to utilize both sparse in-situ data and remotely sensed satellite data into reliable 3D estimates of the ocean state represent a powerful tool to estimate and map ocean dynamics, to provide information about the state of the marine ecosystem and to inform about long term climate changes in different regions.
In this work, we present new methods for reconstructing salinity, temperature, and steric height down to a depth of 1500 m using satellite-based surface measurements and in-situ hydrographic profiles. Previous works in this field have analyzed different machine learning and deep learning methods for the solution of the problem in specific regions, and more classical statistical techniques for global reconstructions. In this paper, we propose a reconstruction technique based on a supervised probabilistic encoder-decoder neural network. The model combines several types of satellite- and auxiliary data which can be acquired globally, resulting in near-global reconstruction capabilities. This encoder-decoder structure allows the relations between input-output to be represented in a sparse feature space in an attempt to understand what generates the output and to estimate and explain underlying structures. In this way, the output data are estimated not from the input data directly, but from low dimensional hidden representations of them.
The novel feature of our approach lies not in the utilization of historical data for deep learning, but rather in the use of a neural network architecture that may allow us to model the underlying physical processes, not directly captured in the input data, when describing the vertical hydrographic profiles. Moreover, the output of the model will not be deterministic as in previous studies but will instead be probabilistic with regard to the location and shape parameters of the reconstructed profiles, thus adding uncertainty estimates to the model outputs. Employing both the uncertainties of our model and the residuals of the known data, we intend to investigate the possibility of detecting anomalous events in the ocean state.
Compressive sensing (CS) techniques have been widely used in enhancing SAR Earth observation applications such as SAR imaging. We have oriented our research toward two directions. For the first one, a CS-based processor inspired by the classic Back-Projection (BP) algorithm, called CS-BP-2D [1],[3], has been developed. It is able to generate SAR images by exploiting the natural sparsity of various scenes for both monostatic and bistatic acquisition scenarios. The second one explored the context of a ground-based receiver of opportunity (COBIS) for the Sentinel-1 transmitter, for which we have designed a CS framework for resolution enhancement from sparse multi-aperture data [2],[4].
The main features of CS-BP-2D include the derivation of a sparsifying dictionary, i.e., the BP operator, for a pre-defined grid (typically a ground imaging grid) and the implementation of a non-linear filter based on the simulated SAR system point spread function. The greedy solver OMP was employed with empirically determined stop criteria. It is worth pointing out that the CS-BP-2D framework has a flexible design regarding its accommodation of large scenes and raw/range-compressed data. CS-BP-2D enhances the SAR image readability and, since the recovered image stores only the magnitude, the phase, and the position of the reconstructed SAR image, it reduces the storage burden. We provide in [1] and [3] an extensive set of results on both simulated and real-world data. For the latter, Sentinel-1 raw data acquired in TOPSAR mode (IW) over Bucharest, Romania, were employed.
The experimental setup for the second research direction consisted of COBIS, a ground-based receiver equipped with 4 receiving channels, one oriented toward Sentinel-1 (synchronization channel) and the other three oriented to different places in Bucharest (image channels). The operational scanning mode of Sentinel-1 in the neighborhood of the COBIS location (University „Politehnica” of Bucharest) is TOPSAR (IW). As shown in [2] and [4], not only does the transmitter antenna main lobe corresponding to the scanned swath generate scene reflections for the receiver, but so do side-lobes and side-sections of the main lobe, when the transmitter beam „looks” toward the other two swaths. The illuminated scene might benefit from the enlarged Doppler domain diversity. Since the available data has many gaps over the multi-aperture time interval, a naive integration would increase the resolution at the cost of introducing unwanted point-spread-function side-lobes. To cope with this problem, we introduce a processing flow containing the following steps: azimuth antenna compensation using the synchronization channel data, azimuth resampling, linear range-cell migration correction through the Keystone Transform (KT), azimuth re-ramping, CS reconstruction via a chirp sparsifying basis, azimuth de-ramping and inverse KT. After the azimuth profiles are reconstructed, the SAR image is formed by applying the BP algorithm. The CS reconstruction quality of different classes of CS solvers is provided, together with a discussion of their limitations (signal-to-noise ratio, parameter tuning). As results, we provide SAR images for Mill Lake Dam and Mill Lake Peninsula in Bucharest, Romania.
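To make the sparse-recovery step concrete, the sketch below runs a greedy OMP reconstruction of a sparse scene through a random sensing dictionary; the actual BP-derived dictionary and the empirically determined stop criteria of CS-BP-2D [1],[3] are not reproduced, and all sizes are illustrative.

```python
# Greedy OMP recovery of a sparse scene from compressed measurements (toy dictionary).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_meas, n_grid, n_targets = 128, 512, 8
Phi = rng.standard_normal((n_meas, n_grid)) / np.sqrt(n_meas)   # sensing/dictionary matrix (random stand-in)

scene = np.zeros(n_grid)                                 # sparse reflectivity on the imaging grid
scene[rng.choice(n_grid, n_targets, replace=False)] = rng.uniform(1.0, 3.0, n_targets)
y = Phi @ scene + 0.01 * rng.standard_normal(n_meas)     # noisy measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_targets, fit_intercept=False).fit(Phi, y)
recovered = omp.coef_
print(np.flatnonzero(np.abs(recovered) > 0.5))           # indices of detected scatterers
```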
[1] A. Focsa, A. Anghel, M. Datcu and S.-A. Toma, "Mixed Compressive Sensing Back-Projection for SAR Focusing on Geocoded Grid," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 4298-4309, 2021, doi: 10.1109/JSTARS.2021.3072208.
[2] A. Focsa, A. Anghel and M. Datcu, "A Compressive-Sensing Approach for Opportunistic Bistatic SAR Imaging Enhancement by Harnessing Sparse Multiaperture Data," IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3071861.
[3] A. Focşa, A. Anghel, Ş.-A. Toma and M. Datcu, "Synthetic Aperture Radar Focusing Based on Back-Projection and Compressive Sensing," IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, 2020, pp. 2376-2379, doi: 10.1109/IGARSS39084.2020.9323775.
[4] A. Focsa, M. Datcu and A. Anghel, "Compressed sensing-based multi-aperture focusing of spaceborne transmitter/stationary receiver bistatic SAR data," 2020 IEEE Radar Conference (RadarConf20), 2020, pp. 1-4, doi: 10.1109/RadarConf2043947.2020.9266567.
1. INTRODUCTION
Analysis Ready Data (ARD) and data cubes optimize data access efficiency and flexibility towards user needs. There is a need to improve the efficiency of the most computationally demanding (pre-)processing steps (e.g., atmospheric correction and orthorectification for multispectral images, or SAR focusing). It is also important to support algorithm diversity and/or application to different sensors, and to facilitate large reprocessing/reformatting campaigns, e.g., to test refined algorithms, customize auxiliary files and apply varying data formats. ARD production is not just a one-off exercise; it may need to be repeated several times to serve varying needs and improved algorithms. An EO bulk-processing framework is essential to support the above and shall respond to the following objectives:
• Flexibility of plugging new algorithms
• Performance and speed
• Scalability and elasticity for cost efficiency
• Data transfer efficiency
• Standard APIs based federation
• Portability across different HW infrastructures
Advances in ICT technology offer a unique possibility to achieve the above goals, especially leveraging graphical processing unit (GPU) and cloud computing resources offered as Infrastructure-as-a-Service (IaaS) for parallel processing.
This activity is being run as an ESA TDE project, which kicked off in May 2021. The activity aims to design, prototype and demonstrate an as-close-to-operational-as-possible modular bulk processing framework for EO payload data processing, based on a parallel computing approach. A consolidation of users, use cases and requirements, the development of a cloud-based implementation for those use cases and, finally, the validation of the use cases are being undertaken. The use cases to be implemented on the platform within the scope of this activity are existing concepts developed by expert organizations in the consortium, sarmap and University Politehnica of Bucharest. The environment exploits the two most important technological advances for data-intensive scenarios: GPU-based processing and cloud-based scalability.
2. BULK PROCESSING ARCHITECTURE
GPUs have become increasingly popular for general purpose computing due to their processing efficiency through parallelization capabilities. The variety of existing tools, libraries, and implementations suitable for EO data processing has made the utilization of GPUs for novel EO software increasingly reasonable.
The Compute Unified Device Architecture (CUDA) is a parallel computing platform and API model created and developed by Nvidia Corporation. It is a mix of software and hardware components, where Nvidia CUDA-enabled GPUs allow running parallel code via the CUDA software development kit (SDK) application programming interfaces (APIs). It provides a large number of libraries optimized for GPUs, including a wide range of mathematical functions that can be effectively leveraged for EO data processing, significantly reducing the development effort.
Nowadays, many cloud providers offer computing nodes running on Nvidia GPUs. An alternative to CUDA is the Open Computing Language (OpenCL), a framework that enables code execution on heterogeneous platforms such as CPUs, GPUs, FPGAs, and DSPs. It has a powerful API and a higher abstraction level to support different systems. Given the overwhelming usage of CUDA, especially in ML applications, OpenCL is somewhat inferior to CUDA when code targets GPUs only. OpenCL also has more limited availability and a less mature set of corollary libraries and programming instruments compared to CUDA.
Many EO big data processing technologies have been developed in recent years based on Platform-as-a-Service (PaaS), a layer facilitating the integration of new algorithms. Container and container orchestration technology based on Docker and Kubernetes has boosted this. It allows creating cloud-native applications that fully exploit the advantages of cloud computing, including encapsulation of micro-services, autoscaling and resilience.
The proposed architecture combines EOPaaS, the Big Data processing framework developed by CGI that implements “horizontal scalability”, e.g., distributing the processing over multiple nodes, with the “vertical scalability” supported by the GPU, e.g., intensifying the processing on a single GPU node.
3. CANDIDATE USE CASES
Two use cases have been identified in the analysis phase of this activity to test the proposed architecture on real examples. These are presented below.
3.1. PALSAR-1 legacy archive improvement using zero-doppler reprocessing
Level-1 SLC data produced for SAR processing are generated from the corresponding raw acquisitions using Zero-Doppler annotation. The Japanese Space Agency (JAXA) designed its operational processor using an annotation centered on the so-called Doppler Centroid direction. Although this provides more accurate focusing, it does not really improve the quality of the resulting products. Moreover, combining different data takes over the same area processed with this geometry is more complicated, due to the small variations in the look directions of the platform between repeated acquisitions (orbits). This, for example, results in increased complexity in the coregistration software modules. After an agreement with JAXA, ESA took responsibility for the ALOS-1 European node, managing the whole archive. While the data have been processed using JAXA's processor, a new operational processor has also been developed, based on Zero-Doppler geometry, and data can now be ordered, on demand, specifically processed using this alternative method.
The development of a highly computing-efficient, GPU-based Zero-Doppler SAR processor would allow re-processing of the whole PALSAR-1 dataset of the European node, simplifying the further combination and exploitation of the existing archive and increasing its value and interest for the end users.
3.2. Feature extraction for content-based indexing structures and meaningful data cubes
Feature extraction allows describing data content using image characteristics (e.g., coarseness, contrast, color distribution, directionality) indexed using multi-dimensional feature vectors. The state-of-the-art literature proposes a multitude of algorithms sensitive to spectral, texture or shape information, coping with the diversity of discernible image properties. Some of the most common approaches for feature extraction are based on Gabor filters, and they have proven successful in multimedia, multispectral and SAR data analysis alike.
The Gabor descriptor captures image characteristics with respect to objects' texture. Based on a wavelet transform, the multi-scale Gabor filter is described in the MPEG-7 standard. In order to obtain the Gabor image descriptors, the image is subjected to a Gabor filter bank consisting of filters with different frequencies and orientations. The result is a multichannel feature representation of a texture pattern, in line with the multi-channel filtering mechanism of the human visual system in perceiving texture information. The parameter "orientation" highlights certain angle rotations of edges in the scene, while the parameter "frequency" dictates the size of the filter and therefore the size of the edge to be located. For each filter bank, we filter the image and compute the corresponding parameters. The feature vector length depends on the number of orientations and frequencies. The Gabor wavelet depends on a set of 5 parameters (wavelength, orientation angle, phase offset, standard deviation, filter scale) and is applied at each level of the filter bank by changing the orientation angle and filter scale.
Several variations of the Gabor descriptor have been proposed to cope with the particularities of each sensor or intended application. The initial methodology follows a patch-based approach, where the image is first divided into a grid of patches. A Gabor filter bank is then applied to each image patch and each band, and the mean and standard deviation of the responses are computed and stored in the data cube.
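A minimal sketch of the patch-based Gabor feature extraction is shown below using scikit-image; the patch size, frequencies and number of orientations are illustrative assumptions rather than the MPEG-7 parameterization used in the use case.

```python
# Patch-based Gabor filter bank features: per-filter mean and standard deviation.
import numpy as np
from skimage.filters import gabor

def gabor_features(patch, frequencies=(0.05, 0.1, 0.2), n_orientations=4):
    """Return a feature vector of per-filter (mean, std) responses for one patch."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(patch, frequency=f, theta=theta)
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.std()])
    return np.asarray(feats)

image = np.random.rand(256, 256)                 # one band of a toy EO scene
patches = [image[i:i + 64, j:j + 64]             # grid of 64x64 patches
           for i in range(0, 256, 64) for j in range(0, 256, 64)]
cube_features = np.stack([gabor_features(p) for p in patches])
print(cube_features.shape)                       # (n_patches, 2 * n_freqs * n_orientations)
```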
This use case will impact both image classification and content based image retrieval in view of semantic annotation.
Parallel computing is one more incentive to bring data closer to the database and facilitate real-time processing, despite the growing data volume. This use case fits a current trend in Big Data processing that aims at bringing the algorithms close to the data on the cloud.
4. CONCLUSION
Both multispectral and SAR data analysis deal with large archives of frequent acquisitions and high-resolution imagery and are in general very demanding in terms of algorithm complexity and computational power required to process the information. As dealing with EO data has turned into a Big Data paradigm, GPU-based computation has the potential to become an enabling technology for many applications, particularly those in need of a real-time response.
A deep learning methodology to retrieve high resolution wind direction from SAR images
This work presents a methodology (Zanchetta and Zecchetto, 2021) which employs deep learning techniques to retrieve the wind direction exclusively from the morphological features of SAR images at an unprecedented spatial resolution (500 m). It consists of a specifically optimised Residual Network (ResNet) architecture (He et al., 2016), a variant of the Convolutional Neural Network (CNN) (Goodfellow et al., 2016). This new methodology allows a reliable wind direction retrieval in the vast majority of situations, even in the absence of wind roll signatures. The advantages of this methodology for extracting the wind direction directly from the SAR images, without any external information, can be clearly understood considering the following two facts: 1) knowledge of the wind direction is necessary to calculate the wind speed with Geophysical Model Functions (GMF), as it is one of the mandatory parameters; 2) several methodologies, including the ESA OCN (Ocean Level-2 product), make use, directly or indirectly, of wind directions derived from Numerical Weather Prediction (NWP) models.
Employing NWP-based methods can produce significant errors in the wind speed determination (> 2 m/s), caused by the relatively coarse resolution of the NWPs (~< 9 km), which cannot take into account small-scale variations of the wind direction, particularly close to the coastline. The ResNet methodology removes all these limitations, obtaining high-resolution wind fields with consistent coverage, even in the absence of wind streaks. The ResNet consists of 4 residual network blocks, followed by a fully connected network (Goodfellow et al., 2016), and has been trained using ECMWF wind directions as ground truth, on a set comprising slightly more than a million training samples.
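For illustration, the schematic PyTorch sketch below builds a small residual CNN with 4 residual blocks followed by a fully connected head that regresses a wind direction from a SAR sub-image; channel counts, patch size and the (sin, cos) output encoding are assumptions, not the optimised architecture of Zanchetta and Zecchetto (2021).

```python
# Schematic residual CNN regressing a wind direction from a SAR sub-image.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)                      # identity shortcut

class WindDirectionNet(nn.Module):
    def __init__(self, n_blocks=4, channels=32):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(channels) for _ in range(n_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, 64), nn.ReLU(),
                                  nn.Linear(64, 2))     # (sin, cos) of the wind direction

    def forward(self, x):
        sincos = self.head(self.blocks(self.stem(x)))
        return torch.atan2(sincos[:, 0], sincos[:, 1])  # direction in radians

net = WindDirectionNet()
sar_patch = torch.randn(8, 1, 64, 64)                   # batch of normalised SAR sub-images (toy)
print(net(sar_patch).shape)                             # torch.Size([8])
```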
Once the wind direction has been obtained with the ResNet, the wind speed can be computed using the CMOD-7 Geophysical Model Function.
Fig. 1 (from Zanchetta and Zecchetto, 2021) shows an example of a wind field obtained with the ResNet from a SAR image near the coastal areas of the island of Sardinia, central Mediterranean Sea, under a complex meteorological situation involving funnelling winds and atmospheric gravity waves. The sparser black vectors represent the ECMWF wind field.
Performing a quality check of the ResNet methodology is made difficult by the absence of suitable data sets to use for comparison. The available datasets, in fact, have insufficient spatial resolution and scarce coverage close to the coastline, where the orography-induced effects on the wind are strongest. For this reason, an intrinsic robustness test has been devised to estimate the reliability of the methodology, which consists in reshuffling 10% of the pixels of each SAR image portion from which a wind direction is retrieved. The wind direction determinations that are significantly affected by this perturbation of the input are flagged as not reliable and are consequently discarded. The validation loss value obtained from the training procedure of the ResNet has been used as the threshold for discriminating the unreliable wind direction determinations. In this way, the threshold is also intrinsic to the methodology and free from any arbitrary human choice.
Floods wreak havoc throughout the world, causing billions of dollars in damages and uprooting communities, ecosystems and economies. The NASA Impact Flood Detection competition tasked participants with predicting flooded pixels after training with synthetic aperture radar (SAR) images in a supervised setting. We propose a semi-supervised pseudo-labeling scheme that derives confidence estimates from U-Net ensembles, progressively improving accuracy. Concretely, we use a cyclical approach involving multiple stages: (1) training an ensemble model of multiple U-Net architectures with the provided high-confidence hand-labeled data and generating pseudo labels (low-confidence labels) on the entire unlabeled test dataset; (2) filtering the generated labels to retain the high-quality ones; and (3) combining the retained labels with the previously available high-confidence hand-labeled dataset. This assimilated dataset is used for the next round of training ensemble models, and the cyclical process is repeated until the performance improvement plateaus. We post-process our results with Conditional Random Fields. Our approach sets a new state of the art on the Sentinel-1 dataset with 0.7654 IoU, an impressive improvement over the 0.60 IoU baseline. Our method, which we release with all the code and models, can also be used as an open science benchmark for the Sentinel-1 dataset.
Airbus Intelligence UK has been developing its Advanced Generalized Likelihood Ratio Test (AGLRT) SAR ship detector since 2016. The AGLRT is a fully automated multi-frequency ship detector which can operate with any SAR sources, such as the Airbus Radar constellation (formed by TerraSAR-X, TanDEM-X and PAZ satellites), Sentinel-1 and NovaSAR.
The AGLRT is a major enhancement of the GLRT [1], which has been proven to outperform the Constant False Alarm Rate (CFAR) algorithm [2], [3]. Besides the detection of ships, the AGLRT detector also provides ship features such as length, width, heading, Radar Cross Section statistics and target position, as well as azimuth ambiguity removal and smart target clustering aimed at rejecting and reducing false alarms and merging close detected clusters.
Since 2021, the AGLRT has been operationally employed as the core engine of an automated processing system capable of generating Vessel Detection Reports (VDR) in multiple formats (i.e., geojson and Google Earth kmz for ingestion in GIS software) from Airbus Radar constellation stripmap and scansar images and ExactEarth AIS data. The end-to-end service timeliness from image acquisition (over Europe with Near Real Time downlink) to VDR product delivery to the customer is on average less than 1 hour.
The SAR input image, available via a Google Cloud bucket, is first downloaded locally and then processed using the AGLRT ship detector to produce an initial vessel detection report with the positions of the ships and the other ship features mentioned above. Then, a live and historical ExactEarth Automatic Identification System (AIS) data feed for the identification of vessels transmitting AIS is accessed via the Airbus AIS Data Lake (AIS Hub) APIs, and a correlation with the vessels detected by SAR data analysis is performed using the DLR AIS SAR Ship Detection Correlator software. A new VDR derived from the fusion of AIS and SAR detections is then created, including identified AIS-transmitting (cooperative) vessels and 'dark' vessels, i.e., those not transmitting AIS in a time interval of 30 minutes around the SAR acquisition time. For the cooperative ships, AIS information such as vessel type, position, dimensions, transmission time, cargo type, and IMO and MMSI numbers is extracted and linked to the SAR ship signature. The VDR is finally uploaded to a Secure File Transfer Protocol (SFTP) link and an email with synthetic analytics (number of cooperative and dark vessels) is sent to the customer team, alerting them of the new report's availability.
In Figure 1, a sample VDR is shown in Google Earth kmz format. The report was generated from a TanDEM-X Stripmap image acquired on 2 May 2021 at 13:07:37 over the Indian Ocean. The AGLRT detected 11 vessels, 9 of which were identified as cooperative vessels by the correlator software. In particular, 1 tug, 2 cargo vessels and 6 tankers were detected.
From a visual inspection, no clear missing target is present in the scene and no false alarm is detected. In addition, 2 first order azimuth ambiguities relative to the cargo #11 are correctly identified and rejected by the AGLRT algorithm. The end-to-end processing chain from data ingestion to email notification generation with VDR took 22 minutes.
Airbus Intelligence UK is currently working on the following areas to improve the quality of the VDRs, reduce the number of false alarms and integrate other SAR sensors:
• Identifying and filtering out known permanent features on the sea (e.g. wind turbines, oil rigs, wind farms)
• Using machine learning techniques to extract and better quantify vessel geometrical features, with the ultimate aim of detecting and classifying non-cooperative and dark vessels without AIS
• Expanding the operational system integrating Sentinel-1 and NovaSAR missions.
In particular, when Sentinel-1 is fully integrated into the automated system, Airbus Intelligence UK plans to run a study to assess the performance of the AGLRT and of the developed Machine Learning (ML) algorithms, in particular with respect to vessels under 15 m in length. Preliminary results and a comparison with AIS ground truth will be available at conference time and shown during the paper presentation.
The Surface motioN mAPPING – SNAPPING service on the Geohazards Exploitation Platform (GEP) is an on-demand tool for measuring terrain motion based on Copernicus Sentinel-1 mission data. The objective of SNAPPING is to contribute to the optimal use of Copernicus data by simplifying the extraction of InSAR-based displacement measurements, allowing efforts to focus on the post-analysis and interpretation of EO observations for improving our understanding of geohazard phenomena. The service is meant to allow users to easily exploit EO data resources by combining fast data access, hosted processing and flexibility for the users' own data analysis. Since February 2021, when it became operational, it has already been used in several case studies demonstrating the capabilities of platform-based solutions. SNAPPING is currently offered at medium spatial resolution, where the number of point measurements is reduced over regions of relatively high density, i.e. urban areas and non-vegetated land surfaces. This allows improved performance for wide-area processing while maintaining the local diversity of measurements and avoiding the averaging of calculated terrain motion. The conceptual twofold processing of the service, separating the generation of the interferometric data stack (SNAPPING IFG) from the time series analysis (SNAPPING PSI), provides flexibility when regular updates of the solution are required, reducing at the same time the consumption of resources and the corresponding processing time. In the current work, we present the evolution of the SNAPPING service on GEP, designed based on the collected user experience and requirements. The introduction of a full-resolution SNAPPING service is one of the key developments, aimed at providing detailed InSAR-based terrain measurements for applications where spatial resolution is critical, mainly when infrastructure stability and monitoring are addressed. Further developments involve the post-processing of SNAPPING PSI time series to decompose motion from different acquisition geometries into actual motion components. The extraction of additional information regarding temporal displacement patterns (linear and non-linear components, break-points, etc.) is also targeted. Finally, delivery of SNAPPING results as standalone HTML files is planned to facilitate the inspection of measurements in common web browsers. This is of interest when EO data are disseminated to non-EO experts with limited experience in geospatial visualization tools.
Large scale mapping of Mopane forest degradation factors with AI and Sentinel-1
1,2Tristan Williams, 1Anca Anghelea
1 European Space Agency, ESRIN, Italy
2 Universitat de València, Spain
The urgency to develop methods capable of identifying specific drivers of forest disturbance events is highlighted in UN REDD+ policy. Characterising drivers is essential to understand the complex socio-economic processes that cause forest loss. Charcoal production across Sub-Saharan Africa is ineffectively monitored and regulated [1]. This contributes to the uncertainties surrounding the ecological impact of the industry and makes it difficult to separate the drivers of forest degradation in the region [2]. In addition, this limits our ability to grasp the effects on local processes and the shifting ecosystem dynamics. The high spatio-temporal systematic observations of the Copernicus Sentinel-1 (S-1) synthetic aperture radar (SAR), together with the intrinsic advantages of radar imagers, make it one of the most applicable sensors for detecting small-scale forest disturbances. AI and cloud computing on EO platforms (such as the Euro Data Cube [3]) enable scalable exploration of deep stacks of SAR data at regional to continental scale.
In this study we demonstrate the potential of using S-1 SAR and other geospatial datasets, with the help of AI, to produce a methodology for scalable forest degradation monitoring of specific drivers in Sub-Saharan Africa, in an Open Science development framework. To overcome the inherent lack of ground truth data [4], we compile information from multiple geospatial sources (Sentinel-2, NICFI Planet Basemaps, the Fire Information for Resource Management System, the FNDS Land Cover map [5]) to generate training and validation datasets. Furthermore, we use AI to learn and augment the training data for the class of interest (mopane forest) from a detailed expert-annotated land cover product [5]. We harness Sentinel-1 SAR time series data to construct a charcoal disturbance detector using a 1-dimensional convolutional neural network [6]. CNNs are able to learn local patterns within a time series of raw data, making them very interesting for characterising remote sensing signals. The method developed in a previous stage of the study has been expanded and then applied to a multitemporal stack of SAR backscatter images, using a kernel to extract time series information per pixel. We produce change maps over the Gaza Province in Southern Mozambique from 2018-2021, and validate our model's capacity to detect charcoal production and to differentiate it from other types of forest disturbance.
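A 1-D CNN disturbance detector of this kind operates on per-pixel backscatter time series; the minimal PyTorch sketch below makes the idea concrete, with layer sizes, number of dates and class labels that are illustrative assumptions rather than the exact architecture of [6].

```python
import torch
import torch.nn as nn

class Conv1DDetector(nn.Module):
    """Toy 1-D CNN over Sentinel-1 time series (channels = polarisations)."""
    def __init__(self, n_channels=2, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the temporal axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

# one forward pass on a dummy batch of VV/VH time series (60 dates per pixel)
model = Conv1DDetector()
logits = model(torch.randn(4, 2, 60))         # (4, 3): e.g. intact / charcoal / other disturbance
```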
These yearly maps bring new insights into the patterns and evolution of production, and allow us to draw conclusions on the sustainability of these practices at a regional scale. Results suggest that there is a generally unsustainable intensity of production across the region and, in some cases, illegal practices in protected forests and national parks. Adhering to Open Science principles, the results, data, code and kiln database will be made openly available. A web application to visualise the end map products will also be made openly available for the community to explore the data and support future charcoal studies.
References
[1] Sedano, F., et al. Monitoring forest degradation from charcoal production with historical Landsat imagery. A case study in southern Mozambique. Environmental Research Letters, 15(015001), 2020.
[2] Hosonuma, et al. An assessment of deforestation and forest degradation drivers in developing countries. Environmental Research Letters, 7(044009):1–12, 2012.
[3] https://eurodatacube.com/
[4] https://africaopendata.org/about
[5] FNDS. 2019. Mapa de Cobertura Florestal de Moçambique 2016. Maputo.
[6] T. Williams and A. Anghelea. Characterising Forest Degradation Factors with Sentinel-1: A Case Study of Charcoal Production in Mozambique. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, 2021, pp. 3109-3112.
The continuous growth of new remote sensing sources and diverse sensor types, together with the development of recent technologies to analyze and extract knowledge from this miscellaneous data, have encouraged the analysis of Earth Observation data from different perspectives and the possibility of getting around issues such as the lack of labelled samples by using complementary data.
As in other fields, the success of machine learning techniques is related to the characteristics of the datasets used to train the models. In supervised learning, the availability of enough labelled data (in terms of volume, variety and correctness, among other characteristics) is crucial to obtain models with acceptable performance.
Due to the difficulty of seeing and recognizing reference objects in SAR images, the availability of tagged data is often low or non-existent for rare objects. Moreover, this difficulty, combined with the fact that for many classes the number of instances is low relative to the size of the analyzed regions, means that labelling a dataset from scratch is not always an affordable task.
The reverse situation occurs with optical images, where the amount of existing training data increases every day and the cost of assembling a new dataset is notably lower because the images are easily interpreted by the human eye.
In recent years, multiple studies have focused on the possibility of "translating" SAR data to optical data and vice versa using image-to-image (I2I) translation, i.e., taking images from one domain and transforming them so that they have similar characteristics to images from another domain.
In this scenario, the idea of being able to transfer the knowledge contained in a dataset from one domain to other domains is truly tempting. This can become an effective technique in few-shot learning problems, as augmentation could be done by transforming source-domain datasets into destination-domain ones with the learned transformers (I2I translation models) [6].
However, images transferred from one domain to the other are not real. On the one hand, the imaging geometry of a SAR sensor is completely different from that of optical sensors. On the other hand, the reflected echo of SAR sensors (also called backscatter) and the reflected electromagnetic waves captured by optical sensors provide different information about the physical properties of the Earth's surface. There is no way of transferring all this information from one domain to the other using pixel-to-pixel translations; thus the I2I model only learns to transform images from one domain into images that look similar to the images of the destination domain.
Previous research shows that, although transferred images cannot be considered real images due to the lack of information from the origin domain in the destination one, the main surface features are retained in translated images when unsupervised I2I translation methods are used to translate between the SAR and optical domains.
Thales Alenia Space has focused on analysing to what extent the features present in the source domain (optical imagery) are propagated to the destination domain (SAR imagery) by different unpaired I2I translation methods, and on the usefulness of the translated data for performing tasks such as object detection in the destination domain.
The results of this research will be discussed during the presentation.
A written report is available at https://arxiv.org/abs/2112.01873
DInSAR technology provides high-density data about historical and up-to-date ground and infrastructure deformation without the need for ground instrumentation. This technology is based on the analysis of the radar phase and amplitude information of a series of space-borne Synthetic Aperture Radar images. These images are acquired over large areas at different dates to obtain the deformation time series on points showing low phase noise. It allows measuring surface deformation before, during and after a specific temporal point of interest at a very competitive cost compared to in-situ instrumentation. DInSAR technology thus possesses great advantages that complement traditional techniques for measuring millimetric ground and infrastructure movements. However, the large volume of information that DInSAR provides makes manual inspection a complicated and subjective task. Appropriate post-processing methodologies are needed to properly interpret DInSAR results, facilitating the objective interpretation of the results and supporting decision-making.
Clustering is an unsupervised machine learning technique that divides large datasets into groups according to a similarity metric, attempting to extract implicit relations from the data. Spatial clustering is an extension of clustering that groups data geographically, in this case by comparing the time series obtained at different locations. Spatiotemporal clustering groups the deformations that occurred between two satellite images at different places based on spatial and temporal similarities. This paper addresses DInSAR interpretation problems by analysing the results obtained by applying different metrics to DInSAR time series in diverse real case scenarios, using spatial and spatiotemporal techniques, and then creating a systematic workflow useful for a preliminary analysis of the data. Our results show how clustering techniques allow us to separate the different patterns of ground deformation that affect the analysed areas. Results also show that prior information on the study area is not needed when applying these unsupervised clustering algorithms.
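As a rough illustration of how DInSAR time series can be grouped both spatially and temporally, the sketch below applies k-means to standardized position and time-series features on dummy data; the metric, the spatial weighting and the choice of algorithm are illustrative assumptions, since the study evaluates several different metrics and clustering techniques.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
coords = rng.uniform(0, 1000, size=(500, 2))                    # point positions (m)
series = np.cumsum(rng.normal(0, 1, size=(500, 40)), axis=1)    # deformation time series (mm)

# weight the spatial part so neither space nor time dominates the distance
features = np.hstack([StandardScaler().fit_transform(coords) * 0.5,
                      StandardScaler().fit_transform(series)])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
# labels now group points with similar deformation histories and nearby locations
```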
The Euregio Meuse-Rhine, the border region between the Netherlands, Belgium and Germany, has been considered as a possible location for the future Einstein Telescope because of its tranquillity, stable ground and cutting-edge scientific institutions and companies. While the E-TEST Interreg project (www.etest-emr.eu/) is running an underground study to map and model the geology of the Euregio Meuse-Rhine region, we aim to analyse meteorological, seismic and anthropogenic changes which may affect Earth surface stability. The purpose of this study is also to differentiate local and regional ground uplift from sub-regional subsidence induced by groundwater level drawdown, possibly enhanced across fault structures, as monitored by various Synthetic Aperture Radar Interferometry (InSAR) processing methods. Indeed, a buoyant mantle plume under the Eifel could be responsible for the regional ground uplift including the Weser-Geul region (BE) and the South Limburg region (NL), and thus would also affect the area selected for the future Einstein Telescope installation. The seismic activity of faults in the target area is not well known but is assumed to be associated with the presence of several active NW-SE trending normal faults. Karst also develops along these faults and along the NE-SW trending thrust faults, which are old (Variscan) inactive structures. However, identifying deformation hazards (including karst formation) using satellite remote sensing (and connected seismological) techniques is challenging, mainly due to the (very) small regional-scale deformation, the terrain conditions and the SAR properties.
Two classes of techniques have been proposed for ground movement monitoring by multi-temporal DInSAR: Persistent Scatterer Interferometry (PSInSAR) and Small Baseline Subset Interferometry (SBAS). They require the presence, over the region of interest, of scatterers that remain radiometrically stable and maintain a detectable and stable phase signature during the time period covered by the successive acquisitions. In this study, we propose to use different processing techniques to track such scatterers in a time series of images and, based on their phase signature, to detect and measure ground motions. Both PSInSAR and SBAS allow estimating the projection of the displacement rates onto the line-of-sight direction, whereas additional measurements (including GNSS data) and processing (Multidimensional SBAS, MSBAS) are needed to estimate movement components in the up-down and east-west directions in a timely manner. Except for some cropping activities, preliminary results confirm the general ground stability over the area. Sentinel-1 images are considered the "standard" data sets over the area of interest. However, adding higher-spatial-resolution RadarSAT-2 images (acquired by the Netherlands Space Office) may help to better determine the regional ground uplift of very small amplitude and the local, typically larger but also more variable, subsidence processes.
The potential of the Persistent Scatterer Interferometry (PSI) technique has been recognized since it was first proposed [4], and, in the last fifteen years, a wide range of PSI applications has been developed [5]. It is undoubtedly a powerful technique which can remotely measure sub-centimeter-scale deformation over spans of days to years based on satellite SAR data analysis.
In 2020, the Sentinel Data Access System supported a daily publication rate of over 38,700 products/day and an average daily download volume of 405 TiB [1]. Limited 'in-house' or grid resources typically represent a bottleneck for fully exploiting these satellite data archives, and in this scenario a key role can be played by Cloud Computing technologies [2,3].
In particular, an unprecedented data flow is supplied by the C-band Sentinel-1 (S1) mission of the European Copernicus programme, which is composed of two twin SAR satellites, Sentinel-1A and 1B, launched in April 2014 and April 2016, respectively.
The main S1 acquisition mode on land, the so-called Interferometric Wide Swath (IWS), guarantees very large spatial coverage and a revisit time of 12 or 6 days in the case of one or two operating satellites, respectively.
With multiple S-1 SAR images acquired over the same area, and thanks to appropriate data processing, it is possible to separate the displacement phase contribution from the other phase components. Thanks to the availability of long S-1 data time series collected since 2014, the interest of the scientific community has significantly moved toward the study of the temporal evolution of the detected deformations.
The full exploitation of these data archives needs effective solutions to deal with the transfer, the storage and the processing of this precious information.
The adopted methodology is based on an advanced cloud computing implementation of a PSI processing architecture which allows automatic processing of large SAR data volumes, from the Level-1 Interferometric Wide Swath (IWS) Single Look Complex (SLC) product up to the generation of the corresponding DInSAR displacement time series and mean deformation maps.
The architecture, also thanks to proprietary Artificial Intelligence (AI) algorithms, is able to automatically select processing parameters and process an Area Of Interest (AOI) that can be several hundred thousand square kilometers wide, in a timely and cost-effective manner.
In fact, the architecture reduces the time and resources spent both on file/orbit data downloading and on the associated long lead-time data processing. It applies multiple standalone analysis scripts and creates customized processing pipelines by defining the processing flow based on the use case, in order to combine data fusion, enrichment and analysis steps in a customized way.
In the processing pipeline every single task is completed by an independent component that executes a single job as a service; this approach simplifies the management of the complexity of the processing flow. The services communicate through asynchronous messages, making each service independent of the others. Each service is hosted on a scalable Dockerized installation, which makes it possible to increase or decrease the total processing power of the pipeline on demand.
[1] Copernicus Sentinel Data Access Annual report 2020, https://scihub.copernicus.eu/twiki/pub/SciHubWebPortal/AnnualReport2020/COPE-SERCO-RP-21-1141_-_Sentinel_Data_Access_Annual_Report_Y2020_final_v2.3.pdf
[2] Wang, Lizhe & Yan, Jining & Ma, Yan. (2019). Cloud Computing in Remote Sensing. 10.1201/9780429488764.
[3] M. Chi, A. Plaza, J. A. Benediktsson, Z. Sun, J. Shen and Y. Zhu, "Big Data for Remote Sensing: Challenges and Opportunities," in Proceedings of the IEEE, vol. 104, no. 11, pp. 2207-2219, Nov. 2016, doi: 10.1109/JPROC.2016.2598228.
[4] A. Ferretti, C. Prati and F. Rocca, "Permanent scatterers in SAR interferometry," in IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 1, pp. 8-20, Jan. 2001, doi: 10.1109/36.898661.
[5] Michele Crosetto, Oriol Monserrat, María Cuevas-González, Núria Devanthéry, Bruno Crippa, Persistent Scatterer Interferometry: A review, ISPRS Journal of Photogrammetry and Remote Sensing, Volume 115, 2016, Pages 78-89, ISSN 0924-2716.
Since the launch of the Sentinel missions, the amount of freely accessible satellite imagery is continuously increasing. A number of operational services are exploiting this data availability. One such service is the Ground Motion Service Germany (BBD), which delivers highly precise multi-temporal ground motion products on a nationwide scale, based on Sentinel-1 Persistent Scatterer Interferometry (PSI) data. The current dataset contains more than 140 million measurement points (persistent scatterers) each with an associated deformation time series covering a period of more than five years (2015-2020) and the whole territory of the Federal Republic of Germany. Automated and semi-automated approaches are needed for the systematic post-processing and analysis of these large data volumes.
In this work, the use of clustering algorithms for the analysis of BBD PSI data from Northern Germany has been investigated. The goal of the analysis is to automatically identify and group time series with similar temporal and spatial characteristics, and to achieve an automatic and transferable process to evaluate the main ground motion patterns present without the need for a-priori knowledge. A two-step approach has been followed with (1) a dimensionality reduction step and (2) a clustering step. For the dimensionality reduction step, an Autoencoder-based approach is followed. An Autoencoder is a type of neural network that learns latent representations of the input data. These features are then used as input for the clustering step. The resulting clusters are statistically analysed and characterised using ancillary data on land use and soil type. The proposed workflow is applied to different study areas with similar prevailing ground motion patterns to evaluate its transferability and the consistency of the results.
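A minimal sketch of this two-step workflow is given below, with a small dense autoencoder and k-means on the latent features; the layer sizes, latent dimension, number of clusters and dummy data are illustrative assumptions, not the BBD configuration.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class AE(nn.Module):
    """Tiny dense autoencoder compressing each PSI time series to a latent vector."""
    def __init__(self, n_steps=60, n_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_steps, 32), nn.ReLU(),
                                 nn.Linear(32, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                 nn.Linear(32, n_steps))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ts = torch.randn(2000, 60)                            # dummy PSI deformation time series
model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):                               # step 1: dimensionality reduction
    opt.zero_grad()
    recon, _ = model(ts)
    loss = nn.functional.mse_loss(recon, ts)
    loss.backward()
    opt.step()

with torch.no_grad():
    _, latent = model(ts)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(latent.numpy())   # step 2: clustering
```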
The approach does not exclude any persistent scatterers a-priori and makes use of the majority of information contained in the PSI time series. The resulting clusters are a good representation of the prevailing deformation trends in the areas of study. Several clusters show a correlation to soil type or land use. Results indicate that using a time series based approach leads to more detailed results than focussing only on the velocity field. At the same time, the dimensionality reduction and clustering steps lead to a dataset that is easier to interpret than the high-dimensional, raw, time series.
Within the frame of imaging radiometry by aperture synthesis, we propose an alternative approach to the algebraic inversions of complex visibilities. This new approach, which is based on Deep Learning techniques, yields significantly better performances from the imaging point of view. In particular, the retrieved brightness temperature maps are totally free from any alias contamination, thus providing a wider synthesized field of view.
Different deep neural network architectures were tested, combining fully connected layers followed by a contracting and an expansive path with skip connections. The first layer is fully connected to the input visibilities and creates a first representation of the brightness temperature distribution by learning the relationship between the two spaces. The contracting path is a succession of convolution layers, each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. The number of feature channels is doubled at each downsampling step. The expansive path consists of the repeated application of transposed convolutions with stride 2 to upsample the feature maps, followed by convolution layers and a ReLU operation. The number of channels is halved at each upsampling step. The contracting and expansive paths are confined to the brightness temperature space. Their task is to build a mask that is added to the first layer's output, for the purpose of correcting reconstruction imprecision, e.g. sharpening edges or smoothing homogeneous areas.
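To make the structure concrete, the following minimal PyTorch sketch reduces the architecture to a single down/up-sampling level, omits the skip connections between the paths, and uses an illustrative number of visibility samples and image size; it is not the actual model trained on SMOS data.

```python
import torch
import torch.nn as nn

class VisibilityInversionNet(nn.Module):
    """Toy version: fully connected first guess plus a convolutional correction mask."""
    def __init__(self, n_vis=2346, img=64, ch=16):   # n_vis and img are illustrative
        super().__init__()
        self.img = img
        self.fc = nn.Linear(n_vis, img * img)         # visibilities -> first TB map
        self.down = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))     # contracting path (one level)
        self.up = nn.Sequential(nn.ConvTranspose2d(ch, ch, 2, stride=2), nn.ReLU(),
                                nn.Conv2d(ch, 1, 3, padding=1))  # expansive path -> signed mask

    def forward(self, v):                              # v: (batch, n_vis)
        tb0 = self.fc(v).view(-1, 1, self.img, self.img)
        mask = self.up(self.down(tb0))                 # correction built in TB space
        return tb0 + mask                              # mask added to the first-layer output

net = VisibilityInversionNet()
tb = net(torch.randn(2, 2346))                         # (2, 1, 64, 64) brightness temperature maps
```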
Simulations have been performed within the frame of the ESA SMOS satellite: a Y-shaped antenna array equipped with 69 elements spaced every 0.875 λ. Complex visibilities have been simulated for a large number of scenes covering the 12 months of a year, so that seasonal variations are included in the dataset. They have been inverted with the aid of the algebraic regularized approach implemented in the SMOS operational ground segment, as well as with this new approach after a training step over 60% of the snapshots. 20% of the snapshots were used for the validation step and the last 20% of the data were used to test the model's performance and to perform comparisons with the algebraic method. Significant improvements have been observed in the retrieved brightness temperature maps, whatever the type of scene, from pure ocean to mixed ocean/land snapshots. The mean absolute reconstruction error is divided by a factor of 5 compared to the level obtained with the algebraic inversions. More remarkably, all the reconstructed maps are totally free from any signs of aliasing. This last property makes the field of view synthesized by the interferometric array wider, with twice as many usable pixels, compared to that obtained with algebraic approaches. This property could be taken into account in the design of future imaging radiometers by aperture synthesis such as SMOS-HR (Cheymol et al., this symposium), since the spacing between antennas would no longer be a parameter constrained by the aliasing of the reconstructed images.
The aim of this presentation is to describe this new inversion method and the proposed model in detail, and to highlight some of the reconstruction properties the approach brings to synthetic aperture imaging radiometry.
Satellite radar interferometry (InSAR) based water vapor maps have the potential to be used in Numerical Weather Prediction (NWP) models (Hanssen, 1999; Hanssen, 2001). A single interferogram contains a combination of two atmospheric states, which are largely dependent on the horizontal water vapor distribution. Using a network of interferograms, an estimate of the atmospheric signal per SAR acquisition can be obtained. Previously, the temporal resolution of SAR acquisitions was on the order of weeks, which made the implementation of this data source in NWP modeling difficult. However, with the launch of the Sentinel-1 satellites an almost daily coverage can be obtained for mid-latitudes. Hereby, constraining NWP models with InSAR data becomes feasible.
Unfortunately, high-resolution datasets of a highly dynamic quantity like water vapor are difficult to assimilate with the currently used assimilation techniques, which are often based on 3-Dimensional Variational (3D-VAR) methods. The main reason is that assimilation takes place at one specific time within a fixed time window of a few hours, which makes the misalignment of water vapor patterns due to time differences a large concern. This could partly be solved by the introduction of more advanced assimilation schemes using 4-Dimensional Variational techniques (4D-VAR) (Gustafsson et al., 2018), because 4D-VAR assimilates observations at the appropriate observation time instead of at a fixed time within a time window. It also includes an iterative procedure, in which 4D-VAR seeks a model solution that fits the observations as well as possible over a certain forecast range.
However, in many cases the timing of the NWP model itself is off, which can for example lead to shifts of weather fronts. While small shifts can theoretically be mitigated by a 4D-VAR scheme, it cannot account for the time shifts we generally observe. Therefore, we argue that the inclusion of time-shift correction methods in NWP model assimilation techniques can be of large benefit for NWP model performance. In this research we show how time shifts alone can have a large impact on NWP model performance. This is done using a time series of Sentinel-1 InSAR images combined with a time series from HARMONIE, the current operational model in 10 European countries.
In our contribution, we present our methodology to estimate the time-shift between InSAR and Numerical Weather Prediction and show the effectiveness of the approach. The results are based on a case study over The Netherlands, with dynamic weather conditions.
- Gustafsson, N., Janjic, T., Schraff, C., Leuenberger, D., Weissmann, M., Reich, H., Brousseau, P., Montmerle, T., Wattrelot, E., Bucanek, A., Mile, M., Hamdi, R., Lindskog, M., Barkmeijer, J., Dahlbom, M., Macpherson, B., Ballard, S., Inverarity, G., Carley, J., Alexander, C., Dowell, D., Liu, S., Ikuta, Y., and Fujita, T., 2018. Survey of data assimilation methods for convective-scalenumerical weather prediction at operational centres, Quarterly Journal of the Royal Meteorological Society, 144, 1218–1256, https://doi.org/10.1002/qj.3179.
- Hanssen, R. F., 1999. High-Resolution Water Vapor Mapping from Interferometric Radar Measurements, Science, 283, 1297–1299, https://doi.org/10.1126/science.283.5406.1297.
- Hanssen, R. F., 2001. Radar Interferometry: Data Interpretation and Error Analysis. Kluwer Academic Publishers, Dordrecht.
Remote sensing is a potential tool to monitor the presence of plastic on the ocean surface. In particular, Synthetic Aperture Radar (SAR) offers a consistent all-weather, all-day, wide-swath data source. Artificial intelligence algorithms can learn to find patterns in the satellite signals that lead to the automatic detection of plastic patches. This presentation reviews the outcomes of the SAR research for marine litter detection carried out by Lobelia in the last three years using artificial intelligence techniques. In the first stage, a novel pixel-based detection and classification approach for marine litter based on SAR resulted in a detection module able to detect floating patches on the ocean surface by feeding machine learning models with 27 different features of the SAR signal. In the second stage, another machine learning-based module discriminates materials between plastic, wood and others. Last, the whole automated approach, including a SAR Analysis-Ready-Data module, is integrated into a system for detecting and tracking large patches with high-resolution SAR images and ocean current models.
Results of this work show the potential of the SAR technology in combination with other remote sensing technologies to address marine litter at sea from space. Furthermore, this study demonstrates how a collaborative approach can increase the number of reliable observations, allowing more testing and more accurate results in the face of the lack of reliable ground truth in-situ data for a complete generalised analysis.
The alteration of roof materials can be a potential source of pollution with negative effects on the environment and on human health. For instance, analyses of runoff water revealed high levels of metal traces but also polycyclic aromatic hydrocarbons and phthalates. Similarly, the alteration or combustion of asbestos contained in certain types of roofs may allow the emission and dispersion of asbestos fibres into the environment. In addition, defective photovoltaic panels can cause roof fires. Therefore, acquiring information on roof materials is of great interest for risk assessment, and remote sensing is a particularly relevant tool for this purpose. This research aims to develop a semi-automatic tool for the identification of roofing materials over Liege (Wallonia, Belgium) using remote sensing and machine learning, for use by public authorities.
For this purpose, a state-of-the-art object-based supervised classification processing chain was adapted and applied over a 25 km² area containing 73,000 buildings. This processing chain is based on an integration of GRASS GIS and the Python programming environment and is divided into the following steps: (1) data pre-processing, (2) creation of objects through image segmentation, (3) hand-labelling and preparation of training and test samples, (4) computation of statistics by object, (5) classification of objects, and (6) performance evaluation. The processing chain was enhanced by including 9 textural indexes (step 1) and by performing VSURF feature selection during the classification (step 5).
Input data consist of both spectral and ancillary data. As spectral input, a WorldView-3 image acquired in May 2018, characterized by a 30-cm spatial resolution and 8 spectral bands between 400 and 1040 nm, was used. Ancillary data consist of the construction year of cadastral parcels and the function of the buildings extracted from cadastral data (2018), and the mean height, area and number of flat roof surfaces per building extracted from a 2018 DSM of Wallonia. The number of flat roofs was computed thanks to the RANSAC algorithm implemented in an FME workbench. The processing chain was applied considering 13 classes of roof materials, defined by combining an expert approach and the analysis of a 26-reference-spectra library we built for Wallonia. These classes are: black, brown and orange ceramic tiles, black and white membranes, natural slates, artificial slates in asbestos cement, corrugated asbestos cement sheet, metal, gravel, vegetation, solar panels, and another class including plastic roofs and roof windows. A dataset of 2560 roof samples, identified and geolocalized, was used for the training (100 samples per class) and the evaluation of the classification processes.
In the end, by comparing the results to a previous work performed on RGB+NIR data and without flat roof identification, the overall accuracy of the classification has significantly improved. The improvement is most spectacular for the black and white membranes as well as gravel, which are all characteristic of flat roofs. The addition of spectral information has generally resulted in an improvement in all classes. However, some classes remain very difficult to identify, with an accuracy of less than 70%. The distinction between asbestos and non-asbestos slates remains complicated.
This study illustrates the relevance of combining spectral and ancillary information as input to state-of-the-art machine learning for roof material identification in a very dense urban context. The importance of a high spectral richness in the data is highlighted. Some additional spectral bands would be relevant for asbestos detection, but their cost and their spatial resolution are limiting factors. The added value of ancillary data is also brought to light. Acquiring more precise ancillary data, such as the age of the buildings and the building materials mainly used in those periods, can help to refine our results. The use of deep learning is also a prospect for the detection of specific roof materials such as solar panels.
Bangladesh lies almost entirely in the Ganges-Brahmaputra river delta, the largest delta in the world. A long coastline and wide floodplains, covering almost 80 % of the land area, characterize the densely populated country. Hundreds of rivers, frequently changing their courses, shape the landscape.
Seasonal precipitation and cyclones, as well as rising sea levels, lead to seasonal large-scale inundation. These climate-driven circumstances put additional strain on people in the rural areas of the country, where the majority of Bangladeshis live. According to a 2018 USAID report, more than half of the Bangladeshi population lives in "high climate exposure areas".
Therefore, inland migration movements towards the cities drive steady urbanization and, consequently, a fast-growing demand for building ground.
In order to identify safe building ground suitable for urban development, urban planning institutions need to have access to reliable, up-to-date and easily understandable geodata. The German-Bangladesh multidisciplinary technical cooperation project “Geo-Information for Urban Planning and Adaptation to Climate Change” aims to establish workflows for the systematic integration of geodata into the urban planning procedures of the pilot cities of Barisal, Faridpur, Khulna, Kushtia, Satkhira and Sirajganj.
Remote sensing data enable the monitoring of ground-motion, inundation, river course changes and land-use.
InSAR-derived multi-temporal ground motion maps allow for the identification and characterization of potential building ground as well as the monitoring of existing buildings and infrastructure. Strong observed subsidence is often linked to housing and industrial developments or river erosion processes. Low-lying floodplains show on average stronger subsidence rates than the slightly more elevated natural levees.
To identify frequently inundated areas, a threshold approach based on Copernicus Sentinel-1 data is applied. Low-lying floodplains and settlement structures near rivers often show yearly inundation. Areas lying further away from rivers see less frequent inundation.
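A minimal sketch of such a threshold-based inundation-frequency mapping is shown below; the -18 dB water threshold, the frequency cut-off and the dummy backscatter stack are purely illustrative assumptions, not the calibrated values of the project.

```python
import numpy as np

# 30 acquisition dates of VV backscatter (dB) on a toy 100 x 100 grid
stack = np.random.uniform(-25, -5, size=(30, 100, 100))
water = stack < -18.0                              # per-date open-water mask (low backscatter)
inundation_frequency = water.mean(axis=0)          # fraction of dates a pixel is flooded
frequently_inundated = inundation_frequency > 0.5  # e.g. flooded in most acquisitions
```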
In order to detect former infilled riverbeds, river courses are monitored using optical satellite images from the early 1970s until today. These data also provide information on the erosive movement of current rivers towards settlement and urban structures. Results show strong river course movements of the major rivers Ganges and Brahmaputra within the last decades (see Fig. 1a).
In particular, the formation of new river branches involves large losses of agricultural land and settlements, forcing the population to build new livelihoods.
To detect inundation-prone land-use units, inundation data are linked to a recent land-use classification. Investigations show that agricultural land is by far the most frequently inundated, while residential areas are less affected.
Combining the approaches of ground-motion and river-course monitoring makes it possible to connect the motion of larger urban areas with the development of nearby rivers (see Fig. 1b).
The results show that synergies of different remote sensing products enable a comprehensive view on geo-hazards, building ground suitability and the underlying processes within a planning area. They therefore play an important role in the context of climate-change-adapted urban planning.
All analyses are conducted using free-of-charge remote sensing data (primarily Landsat and Copernicus data) to remove barriers of adoption of the proposed methods in the urban planning community in Bangladesh.
Figure 1: In Faridpur, strong subsidence is observed near a river branch that developed between 2000 and 2019. Synergies of remote sensing techniques enable a comprehensive understanding of the factors and processes relevant for the identification of safe building ground.
Consecutive mapping of polycentricity in urban development (PUD) is important for identifying and measuring newborn city centers. Current PUD studies are usually limited by the coarse spatio-temporal resolution of the data source. For instance, the LandScan High Resolution Global Population Dataset has a 1 km spatial resolution and is collected on an annual basis. Point of interest (PoI) data offer near-real-time observations but are resampled at a coarser 3-5 km spatial resolution. Nowadays, the Sentinel-1 mission delivers high-resolution SAR (Synthetic Aperture Radar) data of 10 m spatial resolution and a 3-day revisit cycle over the entire Earth. Methods investigating PUD dynamics therefore need to be able to exploit this high-resolution spatio-temporal data source. This study presents a new method exploring the use of Sentinel-1 SAR imagery for mapping PUD dynamics.
Our method starts with extracting SAR features, such as the backscattering coefficient, the total scattering power, the difference intensity, the power ratio, and the H-α decomposition. These are used to differentiate the built-up areas from the other land cover types. A support vector machine (SVM) is employed as the classifier. To identify PUD changes, we focus on built-up areas and use customized log-ratio and KI (Kittler and Illingworth) methods. By carefully setting a threshold, we hypothesize that the changed areas (patches) with positive and negative changes reflect PUD-related building increase and decrease, respectively. In particular, we propose to use the mean distance and the minimum, maximum and mean patch area as physical factors to map the PUD dynamics. Our experiment was conducted on the Google Earth Engine platform; as such, we got access to the SAR data, applied cloud computing and improved the computational efficiency.
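The log-ratio / Kittler-Illingworth step can be illustrated with the minimal sketch below on dummy data; the operational workflow runs on Google Earth Engine and is restricted to built-up pixels, so this stand-alone NumPy version is only a conceptual stand-in.

```python
import numpy as np

def kittler_illingworth_threshold(values, n_bins=256):
    """Minimum-error (Kittler & Illingworth) threshold estimated from a histogram."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    mids = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_j = mids[0], np.inf
    for k in range(1, n_bins - 1):
        p1, p2 = p[:k].sum(), p[k:].sum()
        if p1 < 1e-6 or p2 < 1e-6:
            continue
        m1 = (p[:k] * mids[:k]).sum() / p1
        m2 = (p[k:] * mids[k:]).sum() / p2
        s1 = np.sqrt((p[:k] * (mids[:k] - m1) ** 2).sum() / p1) + 1e-9
        s2 = np.sqrt((p[k:] * (mids[k:] - m2) ** 2).sum() / p2) + 1e-9
        j = (1 + 2 * (p1 * np.log(s1) + p2 * np.log(s2))
             - 2 * (p1 * np.log(p1) + p2 * np.log(p2)))
        if j < best_j:
            best_j, best_t = j, mids[k]
    return best_t

# log-ratio change image between two dates (dummy backscatter intensities)
sigma_t1 = np.random.gamma(4.0, 0.05, size=(200, 200))
sigma_t2 = np.random.gamma(4.0, 0.05, size=(200, 200))
log_ratio = np.log(sigma_t2 / sigma_t1)
t = kittler_illingworth_threshold(np.abs(log_ratio).ravel())
changed = np.abs(log_ratio) > t                      # candidate change patches
```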
We demonstrated our method with a case in Shanghai, China, where we used 304 Sentinel-1A/B and 523 Sentinel-2A/B images between 2015 and 2018. The results showed 1528, 1211, 1484 positive change patches for 2015-2016, 2016-2017, 2017-2018, with areas equal to 36, 32, 43 km2, respectively. Validated with optical reference observations, we found that the accuracy of our results equals 81.96%. As concerns the PUD dynamics between 2015 and 2018, most existing subcenters are stable and slightly sprawling (16.5%), while the newborn subcenters have relatively fast construction development (up to 28%).
From this study, we conclude that our proposed method efficiently measures PUD dynamics and provides a novel perspective on a high spatio-temporal resolution.
Three quarters of humanity currently lives in cities and towns, and it is estimated that this trend will continue unabated. The urban areas on our planet have a manifold and far-reaching impact on our environment, and many important quantities scale linearly with their vertical form. As an example, building height has been shown to be an important indicator for estimating energy consumption, material stock allocation, greenhouse gas emissions, or the distribution of population.
However, openly accessible 3D information at high spatial resolution is still largely missing for complete countries or regions. We will present a novel approach based on Earth Observation image archives to estimate building height on a 10 m grid, which resolves built-up structures in rural and urban contexts (Frantz et al. 2021). Our method utilizes information from the spectral/polarization, temporal and spatial dimensions of Sentinel-1A/B and Sentinel-2A/B time series by combining band-wise temporal aggregation statistics with morphological metrics. We trained machine learning regression models with highly accurate building height information from several freely available 3D building models. We applied the prediction to the whole of Germany.
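As a conceptual sketch of this regression set-up, the snippet below stacks simple per-band temporal statistics and fits a random forest to reference heights on dummy data; the chosen statistics, the regressor and all sizes are illustrative assumptions rather than the configuration of Frantz et al. (2021).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_pixels, n_dates, n_bands = 5000, 40, 12
cube = rng.normal(size=(n_pixels, n_dates, n_bands))        # dummy S1/S2 observations per pixel

# band-wise temporal aggregation statistics as per-pixel features
features = np.concatenate([cube.mean(axis=1),               # temporal mean per band
                           cube.std(axis=1),                # temporal variability per band
                           np.percentile(cube, 90, axis=1)],# upper-percentile statistic per band
                          axis=1)
height_ref = rng.uniform(3, 30, size=n_pixels)               # reference heights from 3D building models (m)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features, height_ref)
predicted_height = model.predict(features)                   # applied wall-to-wall on the 10 m grid
```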
Our results indicate that both radar-only and optical-only models can be used to predict building height, but the synergistic combination of both data sources leads to superior results. When testing the model against independent datasets, very consistent performance was achieved (frequency-weighted RMSE of 2.9 m to 3.5 m), which suggests that the prediction of the most frequently occurring buildings was robust. It is also noted that saturation effects prevent the accurate prediction of very tall buildings (ca. > 30 m).
The average building height varies considerably across Germany with lower buildings in Eastern and South-Eastern Germany and taller ones along the highly urbanized areas in Western Germany.
The novelty of this method lies in the very fine resolution yet large spatial extent to which it can be applied, as well as in the use of the building shadow phenology in optical imagery. We emphasize the straightforward applicability of this approach at the national scale. It mostly relies on freely available satellite imagery and open-source software, which potentially permits frequent update cycles and cost-effective mapping that may be relevant for a plethora of different applications, e.g. physical analysis of structural features or mapping society's resource usage. Due to the open data / open software principles employed, our method should be transferable to other regions, for which we will include preliminary results on mapping the building height of other countries too, including the conterminous United States and the British Isles.
D. Frantz, F. Schug, A. Okujeni, C. Navacchi, W. Wagner, S. van der Linden, and P. Hostert (2021): National-scale mapping of building height using Sentinel-1 and Sentinel-2 time series. Remote Sensing of Environment 252, 112128. https://doi.org/10.1016/j.rse.2020.112128
ASDE is developing a user-oriented innovative tool for state and local administrations, targeted at sustainable management and strengthening resilience, especially in urbanized areas. It can also be applied in agriculture and the natural environment, including coastal areas. It is based on long-term research and application of integrated EO and in-situ data assessment and interpretation, using the UN-FAO LCCS and ISO 19144-2 as classification tools.
Specific descriptive basic elements ("urban bricks") are applied to provide a clear representation of the different data layers at the same geographical point; for example, a single location can have a 1st layer of grass, a 2nd layer of building and a 3rd layer of infrastructure. This method was developed in consultation with experts from the UN-FAO, JRC-IES and MARS units (2005-2013). It is in accordance with the JRC-MARS unit "tegon" model.
Aiming at the use of Sentinel-1/2 data and user-efficient results, we have integrated LC/LU assessment with legally applied urban functional zones. Successful realizations are the cross-border SDB between Bulgaria and Romania covering 72,000 km2 (2013) and several Municipal Territorial Master Plans (2015-19).
In parallel, we have developed a so-called Exposure and Loss municipal data set matrix, supported by a digital management-support RkFMEA method for the monitoring and preventive resource management of disaster risk, called Risk Manager. It is based on the FMEA (Failure Mode Effects Analysis) methodology and on the application developed for the management of risk and preventive measures (Rk).
Recently, under an ESA-PECS project, we have developed ESA/GAF-DSM based risk prevention solutions for Sofia municipality, integrating Sentinel-1 and -2 data and applying the "urban brick" functional zone elements for LC/LU analysis.
The new Google Dynamic World application is also being assessed for its potential to support the general monitoring of changes / change-by-monitoring (CbM) method using Sentinel data.
ASDE is a partner in a HORIZON 2020 project (22 EU+UK partners) for the Development of a Support System for Improved Resilience and Sustainable Urban areas to cope with Climate Change and Extreme Events based on GEOSS and Advanced Modelling Tools. One of the main expected results of this project is the "Integrated Resilience Assessment Platform (IRAP), a system that allows stakeholders to model a range of planning options against a number of CC scenarios towards targeted applications in order to mitigate CC effects in urban areas, helping deliver resilient cities". ASDE is the responsible organization for the Bulgarian pilot case, the capital city of Sofia.
An imperative criterion for the successful acceptance of any innovative R&D tool by end-users is a balanced approach to its efficiency, efficacy and economy. It must also be applied with the support of up-to-date financial and administrative mechanisms. The report will present some solutions, especially for Bulgaria, for trans-border countries and for the South-East European region.
The achieved results, as well as the developed solutions for risk prevention and for strengthening sustainable urban management and resilience, will be presented during LPS22.
Although cities cover only 2% of the Earth's surface, the majority (54%) of the world's population lives in urban areas. This number is expected to rise to nearly 70% by 2050, as global migration from rural areas continues. According to Eurostat, about 75% of European citizens already live in urban areas. The influence of cities on anthropogenic Climate Change (CC) and the detrimental effects of CC on the wellbeing of citizens are substantial. As CC intensifies, it severely impacts (and is impacted by) the urban environment: urban greenness loss, urban flooding, reduced Air Quality (AQ) and increased Green House Gas (GHG) emissions, geo-hazards (landslides and ground deformation due to soil degradation), heat islands, urban heat fluxes, etc.
As a consequence, urban resilience has gained paramount importance. The repercussions of each crisis depend on the city's preparedness to respond to specific predictable impacts. As such, cities are taking steps towards becoming more resilient to protect their residents and assets, but also to remain functional during crises, fostered by global agreements and local policies or regulations on CC. In order to address these major issues, local authorities and other stakeholders need a solid framework and a set of tools to predict and mitigate CC effects, and to provide estimated impacts of their planning in order to design more resilient cities.
HARMONIA aims to provide a holistic decision support system to quantitatively and qualitatively assess climatic parameters that directly affect European cities and citizens. Delivery of this data, information and knowledge provides support to governmental bodies and municipality authorities to properly adapt their short- and long-term policies towards deploying sustainable and resilient master plans/programs and providing efficient damage prevention and mitigation. These data additionally provide complementary support to the citizens and investors (real estate, investment banks, insurance companies, etc.) and promote the economic growth and sustainable development of the cities.
HARMONIA leverages existing tools, services and novel technologies to deliver an integrated resilience assessment platform working on top of GEOSS, addressing the current lack of a dedicated process for understanding and quantifying Climate Change (CC) effects on urban areas using satellite and auxiliary data available on GEOSS and other EU platforms. The HARMONIA solution is tested on four pilot cities around Europe (Milan, Ixelles, Piraeus and Sofia), demonstrating diverse urban environments, characteristics and challenges.
The HARMONIA Integrated Resilience Assessment Platform (IRAP):
• Incorporates multi-disciplinary knowledge and assessments
• Addresses hazards connected to Climate Change (CC)
• Addresses geological hazards (geohazards)
• Identifies intervention necessity indices
Website: http://harmonia-project.eu/
The HARMONIA project has received funding from the EU Horizon 2020 research and innovation programme under agreement No. 101003517.
China is one of the world's richest countries in terms of coal resources, and coal mining is one of the major causes of ground subsidence. Ground subsidence caused by coal mining usually occurs in two ways: continuous subsidence or ground collapse. Continuous subsidence caused by coal mining can seriously affect the ecological environment and human life; ground collapse is extremely dangerous and usually occurs without warning. The western Loess Plateau and its transition zone is one of the major coal production bases in China, with long surface ravines, complex and varied landforms, scarce water resources, and a fragile habitat and ecological environment. Under the influence of the subsidence and deformation caused by large-scale underground coal mining, surface buildings will inevitably be damaged, seriously endangering the safety of buildings and the normal life of residents in coal mining subsidence areas. At present, the main ways to achieve safe coal mining under village settlements include infill mining, strip mining and construction reinforcement. Since the natural villages in the Loess Plateau are scattered all over the mining area, and most of them are kilns or brick houses of low construction intensity, the above-mentioned safe coal mining methods come at a significant cost in terms of production cost, operation efficiency and resource utilization rate. Therefore, most of the current solutions to the coal mining problems in village areas adopt village relocation, i.e., relocating the village to a planned area for centralized resettlement before mining and leaving a certain range of protective coal pillars in the underground coal seam to ensure the safety of the newly selected site. Most of the coal under villages is thus mined safely after house-by-house relocation. The livability of a coal mine village siting area is influenced by many geographic and human-social factors, and there is a lack of universally applicable evaluation methods.
In this paper, taking the relocation of villages in the Shaanxi Dafosi coal mine collapse area as an example, we use the geographic detector to divide livability into two parts, namely a habitability factor and a geographic factor, introducing the habitability factor as a mediating variable. The habitability factor (X) of the existing villages is described by five aspects, namely living environment, ecological health, infrastructure, public facilities and economic development, and the corresponding weights of each index are calculated using the Telfer scoring method and the entropy weighting method. Geographic indicators such as slope, aspect, surface water system, groundwater level, transportation, education, medical care, procurement and tourism are selected as independent variables (Z) to analyze the driving force (q) and to perform quantitative attribution for the complex geographical environment and coal mining subsidence characteristics of loess mining areas. Based on the overlay analysis function of ArcGIS, the independent variables with larger q values are overlaid to obtain a geographic livability grading map of the mining area; the building floor area within the coal mining collapse area is counted, and the optimal site for village relocation is analyzed and determined by overlaying the house kernel density analysis map.
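For reference, the driving force q mentioned above is, in the standard factor detector formulation of the geographical detector (assumed here, since the abstract does not write it out), computed as

q = 1 - \frac{\sum_{h=1}^{L} N_h \sigma_h^2}{N \sigma^2},

where L is the number of strata of a geographic factor Z, N_h and \sigma_h^2 are the number of villages and the variance of the habitability degree within stratum h, and N and \sigma^2 are the corresponding quantities for the whole study area; q ranges from 0 to 1, with larger values indicating a stronger explanatory power of that factor.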
Since there is little research work on relocation site selection in China, and since there is no national policy to plan and guide the relocation process, local governments have not established systems or local laws to solve the relocation problem, so there are no regulations to follow in the actual relocation process. Relocation site selection focuses on the study of livability, which is a concept related to many areas of the living environment, including the natural ecological system, the socio-cultural system and the regional spatial system. The study of livability can be divided into the selection and determination of an index system and the establishment of an evaluation model. The purpose of this study is to evaluate the relocation methods of villages in coal mining collapse areas on the Loess Plateau and the applicability of the site selection. The study area consists of 37 villages within 10 km of the Dafosi mining area in Binzhou City, Shaanxi Province. First, the concept of an "intermediary variable" is introduced as the habitability factor; second, the evaluation index system is established and the habitability degree is obtained with the entropy weighting method; the relationship between geographic factors and habitability is then explored using the geographic detector; finally, the site selection results are obtained by superimposing the house kernel density analysis map.
The results of this study are as follows: (1) a new idea of livability evaluation is proposed, and the concept of intermediary variables is introduced to determine the relationship between habitability factors and geographic factors; (2) for the first time, the geographic detector is used to detect the driving factors and obtain results for the relocation of villages in mining areas; (3) the relocation site selection in the study area is effectively completed, and the research approach has good applicability for the relocation of villages in coal mine subsidence areas on the Loess Plateau. This study can provide technical support for the relocation planning of villages in coal mine subsidence areas.
Dynamics of accelerated urbanization and the increasing demands on urban planning are a major challenge for urban development. Trends in demographic change add further tension to the overall situation, affecting all of Europe’s diverse societies. For instance, shrinking and ageing processes are taking place unevenly in individual regions of Germany and Europe. All these trends are exacerbated by an increase in climate-related risks for the urban population, such as urban heat islands. Therefore, specific climate adaptation and resilience measures need to be effectively designed and implemented at regional and local levels. Climate and environmental databases are critical for achieving the Sustainable Development Goals (SDGs) and for planning and realizing appropriate and sustainable adaptation measures. Federated and distributed databases can serve as necessary starting points for municipalities in urban settings to identify needs, prioritize resources, and allocate investments, often within tight budget constraints. High-quality geospatial data are often available, ranging from remote sensing and environmental datasets to climate projections. For example, Copernicus information and services are an excellent initial point in this regard. There are forward-looking approaches to derive forecasts from these datasets that can be used to optimize urban planning processes for municipalities. However, the existing data is still used only to a limited extent at the municipal level. There is a lack of adequate urban planning tools to merge remote sensing data with local data, to combine them in a meaningful way, and to process them further for the use in municipal planning and decision-making.
Our project CoKLIMAx aims to develop novel cloud-based digital products, advanced urban services, and processes, such as practical technical tools that ingest various remote sensing and in-situ datasets for validation and further processing. CoKLIMAx will develop a scalable, modular, and thus flexible toolbox for urban planning to increase climate resilience, using primarily nature-based solutions. Consequently, our project focuses on water (e.g., soil sealing, stormwater drainage, retention, and flood protection), urban (micro)climate (e.g., heat islands and air flows), and urban vegetation (e.g., greening strategy, vegetation monitoring/vitality), which bring many co-benefits such as a reduced urban carbon footprint and a higher quality of life for citizens.
The low-threshold service of a cloud-based Earth Observation toolbox developed in CoKLIMAx will be capable of promoting and improving resilience in urban settings. Stakeholder decisions are made more agile by embedding improved or even new digital process structures in local governments. The focus on low-threshold access to the relevant information promotes sustainable policy decisions for the future. Finally, CoKLIMAx has the potential to evolve into a federated digital platform enabling and encouraging agile sustainable (nature-based) infrastructure planning for communities and (urban) decision-makers on national and international levels.
In the second half of the twentieth century, strong growth of the global human population and economic activity went along with a rapid accumulation of societal material stock. Societal material stock encompasses all long-lived materials contained in buildings, infrastructure and other durable goods. Material stocks are the basis for human living and well-being, as they provide key services such as shelter, food, mobility or health, and are, in addition to population, a key parameter of the socio-economic metabolism.
Recently, an approach for nation-wide mapping of material stock at 10 m spatial resolution, using freely available and globally consistent Earth Observation imagery (including Copernicus Sentinel-1 and Sentinel-2), has been introduced as an alternative to cost-intensive cadastral data or broad-scale but thematically limited nighttime light-based mapping (Haberl et al. 2021). However, this approach has so far only been used to map material stock at a single, recent point in time.
In this study, we assessed the potential of the Landsat archive to create spatially explicit time series data of material stock dynamics and their relation to population from 1985 to 2018 at a spatial resolution of 30 m. The study area is Germany, featuring important urban-rural and environmental gradients as well as interesting societal and economic developments as a consequence of the German reunification in 1990. We used Landsat TM/ETM+/OLI imagery with a Change-Aftereffect-Trend (CAT) analysis to derive yearly masks of land surface change from 1985 to 2018. Those served as an input to an annual reverse calculation of six material stock types and building volume-based annual gridded population, based on maps for 2018. Material stock and population in Germany grew by 13% and 4%, respectively, showing highly variable regional patterns (Figure). We found a minimum building stock of ca. 180 t/cap. across all municipalities and through time. A rapid growth of stocks per capita occurred in East Germany after 1990, with increased building activity but population decline. Densely built-up and populated agglomerations showed particularly slow growth.
We here present a computationally efficient workflow that uses the Landsat archive to quantify temporal dynamics of the socio-economic metabolism. The high-resolution, large-area approach is regionally transferable, as it relies on datasets created from globally consistent satellite data and also accepts alternative raster-based material stock input datasets where locally available. The CAT approach is very robust to data gaps because it only uses annual peak vegetation, approximated by the maximum NDVI, which makes it promising for data-sparse regions. Compared to the national-level modeling approaches commonly used in material stock estimation, it quantifies important local developments. This is expected to contribute to international initiatives that monitor aspects of the socio-economic metabolism, such as the EUROSTAT economy-wide material flow accounting.
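As a rough illustration of the peak-vegetation input used by the CAT analysis, the sketch below computes annual maximum-NDVI composites from a stack of cloud-masked Landsat observations. It is a minimal sketch under our own assumptions (array shapes, band order, NDVI-based compositing), not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): per-pixel annual peak vegetation
# from cloud-masked Landsat reflectance, approximated by the maximum NDVI.
import numpy as np

def annual_max_ndvi(red, nir, valid):
    """red, nir, valid: arrays of shape (n_scenes, rows, cols) for one year;
    'valid' flags cloud/shadow-free observations. Returns the per-pixel max NDVI."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    ndvi = np.where(valid, ndvi, -np.inf)              # ignore masked observations
    peak = ndvi.max(axis=0)                            # annual peak vegetation per pixel
    return np.where(np.isfinite(peak), peak, np.nan)   # NaN where no clear observation

# Usage idea: build one composite per year (e.g. 1985-2018) and feed the resulting
# per-pixel time series into the change/trend analysis.
```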
Helmut Haberl, Dominik Wiedenhofer, Franz Schug, David Frantz, Doris Virág, Christoph Plutzar, Karin Gruhler, Jakob Lederer, Georg Schiller, Tomer Fishman, Maud Lanau, Andreas Gattringer, Thomas Kemper, Gang Liu, Hiroki Tanikawa, Sebastian van der Linden, Patrick Hostert (2021): High-Resolution Maps of Material Stocks in Buildings and Infrastructures in Austria and Germany. Environ. Sci. Technol., 55, 5, https://doi.org/10.1021/acs.est.0c05642
Urbanization is an environmental transformation process that converts rural and natural areas into urban settlements. Accompanied by rural-urban migration, this process has accelerated rapidly since the second half of the last century. In 1950, 30% of the world's population lived in urban agglomerations; by 2018 this proportion had increased to 55% (United Nations Department of Economic and Social Affairs, 2019). In the coming decades, the trend of urbanization is expected to continue, notably in small and mid-sized cities in Asia and Africa. Vietnam is among the fastest urbanizing countries in the Asia-Pacific region, driven mainly by significant rural-urban migration. Besides the large urban centres such as Ho Chi Minh City, Ha Noi or Da Nang, more than 50 medium-sized cities such as Hue, Nha Trang or Dong Hoi play an important role in the urbanization process. These cities show diverse growth patterns and face significant challenges that come with urbanization. In addition to land-use conflicts, the need for rapid planning approaches in various sectors and the development of sustainable supply infrastructure, many cities also face an increase in climate-related extreme events and hazards such as coastal, fluvial and flash floods (Nguyen et al., 2021), leading to increasing exposure of urban populations, assets and infrastructure.
Earth observation (EO) data have been widely used for urban modelling for decades and have proven their potential for urban growth simulation and prediction (Donnay et al., 2014). Nowadays, EO data from satellites with shorter repeat cycles and higher resolution enable spatially explicit urban modelling with high temporal information density. The World Settlement Footprint Evolution (WSF Evolution) (Marconcini et al., 2021), developed by the German Aerospace Center (DLR) and ESA, is a comprehensive annual dataset that outlines settlements and urban agglomerations with global coverage from 1985 to 2015. The dataset was derived using training data from the 10 m resolution WSF2015 layer (based on Sentinel-1 and Landsat-8) and an iterative backward classification approach based on machine learning that exploited the entire Landsat archive from 1985 to 2015 (Marconcini et al., 2020). Because it describes the development of human settlements on an annual basis, the dataset offers enormous potential for future urban growth modelling.
In the proposed study, a national-scale urban growth projection until 2050 under a business-as-usual scenario was conducted for Vietnam. The analysis uses the SLEUTH cellular automata model calibrated with a genetic algorithm (GA) (Clarke, 2017) and is based on the WSF Evolution at full spatial resolution for the entire Vietnamese territory, using the WSF settlement information from 1985 onwards, road infrastructure data, topographic data from the Copernicus Digital Elevation Model (DEM) and areas excluded from urban development, such as protected natural areas. To overcome the computing cost of such a large data volume and to improve the efficiency of the SLEUTH calibration process, a tiled processing strategy based on local growth speed and k-means clustering was applied. The results confirm the potential of the WSF Evolution for urban growth prediction. The predicted future urban growth highlights the most dynamic development hotspots across the entire country for the upcoming decades and reveals different urban development trends in different regions. Based on the current model configuration, additional growth scenarios will be developed and modelled to complement the business-as-usual scenario. The findings of this study can support sustainable planning and will support research activities, e.g. in the Vietnamese-German research projects FloodAdaptVN (https://floodadapt.eoc.dlr.de/) and Drought-ADAPT (https://www.bmbf-client.de/en/projects/drought-adapt).
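The tiled calibration strategy is only described briefly above; a minimal sketch of the underlying idea, under the assumption that processing tiles are grouped by their historical settlement growth rate with k-means before SLEUTH is calibrated per cluster, could look as follows (grouping criterion, tile size and cluster count are illustrative assumptions, not the project's implementation):

```python
# Illustrative sketch: group processing tiles by local settlement growth speed
# so that calibration effort can be shared within each cluster of similar tiles.
import numpy as np
from sklearn.cluster import KMeans

def cluster_tiles_by_growth(settlement_1985, settlement_2015, tile_size=512, n_clusters=5):
    """settlement_*: boolean settlement masks (rows, cols). Returns {tile origin: cluster id}."""
    rows, cols = settlement_1985.shape
    features, index = [], []
    for r in range(0, rows, tile_size):
        for c in range(0, cols, tile_size):
            a = settlement_1985[r:r + tile_size, c:c + tile_size].sum()
            b = settlement_2015[r:r + tile_size, c:c + tile_size].sum()
            features.append([(b - a) / max(a, 1)])   # relative growth of built-up pixels
            index.append((r, c))
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(np.array(features))
    return dict(zip(index, labels))
```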
References
Clarke, K. C. (2017). Improving SLEUTH Calibration with a Genetic Algorithm: Proceedings of the 3rd International Conference on Geographical Information Systems Theory, Applications and Management, 319–326. https://doi.org/10.5220/0006381203190326
Donnay, J.-P., Barnsley, M. J., & Longley, P. A. (2014). Remote Sensing and Urban Analysis: GISDATA 9. https://www.taylorfrancis.com/books/e/9781482268119
Marconcini, M., Metz-Marconcini, A., Esch, T., & Gorelick, N. (2021). Understanding Current Trends in Global Urbanisation—The World Settlement Footprint Suite. GI_Forum, 1, 33–38. https://doi.org/10.1553/giscience2021_01_s33
Marconcini, M., Metz-Marconcini, A., Üreyen, S., Palacios-Lopez, D., Hanke, W., Bachofer, F., Zeidler, J., Esch, T., Gorelick, N., Kakarla, A., Paganini, M., & Strano, E. (2020). Outlining where humans live, the World Settlement Footprint 2015. Scientific Data, 7(1), 242. https://doi.org/10.1038/s41597-020-00580-5
Nguyen, M. T., Sebesvari, Z., Souvignet, M., Bachofer, F., Braun, A., Garschagen, M., Schinkel, U., Yang, L. E., Nguyen, L. H. K., Hochschild, V., Assmann, A., & Hagenlocher, M. (2021). Understanding and assessing flood risk in Vietnam: Current status, persisting gaps, and future directions. Journal of Flood Risk Management, 14(2). https://doi.org/10.1111/jfr3.12689
United Nations Department of Economic and Social Affairs. (2019). World Urbanization Prospects 2018: Highlights. UN. https://doi.org/10.18356/6255ead2-en
The usage of satellite data in the management of urban greenery
Maciej Jurzyk, Warsaw University of Technology, jurzyk.maciej@gmail.com
The dynamic development of cities, population growth and spatial development cause major changes in urban morphology. Impervious surfaces, a large concentration of concrete and cars, and the declining role of urban greenery make cities less comfortable to live in. These factors contribute to negative phenomena within the urban climate, for example the intensification of the urban surface heat island. The negative effects of rising recorded temperatures affect both the health of the population and the economy. The aim of this study was to develop an automatic methodology useful for the management of urban space using data from the Copernicus programme, satellite data from the Sentinel series and thermal data from Landsat-8. Data access was provided through the CREODIAS portal. Key areas for reducing the intensity of the urban heat island phenomenon were identified. Appropriate processing of the spatial data allowed the identification of areas with the highest urban heat island intensity combined with low photosynthetic activity and biomass as well as high population density. The analysis made it possible to create an indicator, the Density Green Need Index (DGNI), which indicates the areas where the share of urban green areas should be increased in order to cool the urban microclimate.
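The abstract does not give the DGNI formula. Purely as an illustration of how land surface temperature, vegetation and population-density layers could be combined into a single "green need" style score, one might normalize each layer and average them; this is a hypothetical combination, not the DGNI definition.

```python
# Hypothetical combination of LST, NDVI and population density into a single
# "green need" style score; NOT the DGNI definition, which is not given in the text.
import numpy as np

def normalize(x):
    """Rescale a raster to 0-1, ignoring NaNs."""
    lo, hi = np.nanmin(x), np.nanmax(x)
    return (x - lo) / (hi - lo + 1e-9)

def green_need_score(lst, ndvi, pop_density):
    """High where it is hot, sparsely vegetated and densely populated."""
    return (normalize(lst) + (1.0 - normalize(ndvi)) + normalize(pop_density)) / 3.0
```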
Keywords: Sentinel, Copernicus, Landsat-8, NDVI, LST, urban surface heat island, remote sensing
Humanitarian organizations need population information for efficient crisis response planning. However, obtaining up-to-date population data in countries of the Global South is difficult due to infrequent censuses and high urban growth rates. The granularity of the available population data is another major issue, as censuses in developing countries typically provide only coarse population data (e.g., at the level of provinces or counties). Yet these countries are typically also the regions where humanitarian organizations operate and require fine-grained population maps for applications such as disaster response.
To generate fine-grained population maps (e.g., at 100 m x 100 m resolution), available coarse census data are often spatially disaggregated, typically using building maps and assuming direct proportionality between building counts and population [1]. Other approaches exploit several additional features, such as night light image bands and estimates of the distance to the closest road [2]. These features are aggregated for each coarse administrative region with known census population and then used to train a model that is subsequently applied for fine-grained population mapping, thereby assuming that a model trained at regional scale can directly provide fine-grained predictions. The disadvantage of this assumption is that, when the training regions are very coarse, the difference in resolution between training and prediction increases the domain shift in the predictors and thus reduces the model's performance.
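The building-based baseline of [1] amounts to a simple proportional (dasymetric) redistribution of each region's census count; a minimal sketch of that idea, with hypothetical variable names, is:

```python
# Minimal sketch of dasymetric disaggregation by building counts (the baseline
# idea in [1]); variable names are illustrative.
import numpy as np

def disaggregate(census_by_region, buildings, region_id):
    """census_by_region: {region id -> population}; buildings, region_id: rasters of
    equal shape giving building counts and the administrative region of each cell.
    Returns a population raster whose regional sums match the census."""
    pop = np.zeros_like(buildings, dtype=float)
    for rid, census_pop in census_by_region.items():
        mask = region_id == rid
        total_buildings = buildings[mask].sum()
        if total_buildings > 0:
            pop[mask] = census_pop * buildings[mask] / total_buildings
    return pop
```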
In this work, we aim to produce fine-grained population maps using multiple available features while avoiding the aforementioned problem of feature aggregation. We propose a method based on a Markov Random Field (MRF) that iteratively improves the initial estimates of a dasymetric disaggregation method. During the iterations, the MRF-based method minimizes an energy function that encourages locations (e.g., 100 m x 100 m cells) with similar features to have similar population predictions, while at the same time ensuring that the predictions sum up to a value close to the available regional census data.
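The abstract does not state the exact energy; a plausible form consistent with this description (pairwise smoothness between cells with similar features plus a soft constraint on regional census totals, with an assumed Gaussian feature-similarity weight) would be:

```latex
E(\mathbf{p}) \;=\; \sum_{(i,j)\in\mathcal{N}} w_{ij}\,\bigl(p_i - p_j\bigr)^2
\;+\; \lambda \sum_{r} \Bigl(\sum_{i\in r} p_i - c_r\Bigr)^2,
\qquad
w_{ij} \;=\; \exp\!\bigl(-\|\mathbf{f}_i - \mathbf{f}_j\|^2 / 2\sigma^2\bigr)
```

Here p_i is the predicted population of cell i, f_i its feature vector, c_r the census count of region r, N the set of neighbouring cell pairs, and lambda and sigma are assumed weighting parameters.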
We evaluated the proposed method in a scenario with very coarse census data, for the country of Tanzania, following the validation approach presented by Stevens and colleagues [2]. In the population disaggregation task, the proposed method improves on the accuracy of the aforementioned baselines. Given that the census data used for training are very coarse (170 administrative regions for the whole country), the method proposed by Stevens and colleagues obtains an R-squared of only 0.41 and a mean absolute error (MAE) of 5188 [2]. Direct dasymetric disaggregation using building counts obtains an R-squared of 0.65 and an MAE of 3713 [1]. Our proposed MRF method produces the best estimates, with an R-squared of 0.78 and an MAE of 3314.
In our future work, we plan to build a method that can also estimate population solely based on features without using census data for a given region in order to generalize to other countries where census data are unavailable or inaccurate. Furthermore, we aim to use additional features extracted from social media data to improve accuracy.
[1] Huang, X., Wang, C., Li, Z., & Ning, H. (2021). A 100 m population grid in the CONUS by disaggregating census data with open-source Microsoft building footprints. Big earth data, 5(1), 112-133
[2] Stevens, F. R., Gaughan, A. E., Linard, C., & Tatem, A. J. (2015). Disaggregating census data for population mapping using random forests with remotely-sensed and ancillary data. PloS one, 10(2), 1-22.
The decarbonization of the global energy system through the transition to renewable energy sources is one of the main pillars of climate change mitigation. A key determinant of a successful energy transition is the rapid installation of renewable energy infrastructure, which is, for the most part, highly heterogeneous and spatially decentralized [1]. This is particularly true for rooftop photovoltaic systems (PVs), which are often small-scale and privately owned, differ widely in their capacity, and are non-randomly distributed in space, occasionally mirroring socio-economic and political boundary conditions. To foster the rapid expansion of PV installations needed to reach energy transition targets and commitments, an automatically updatable monitoring of the evolution of PV installations over time can be a vital asset for political decision makers at both the communal and national level, supporting the efficient allocation of resources.
Here, we demonstrate a system for nationwide mapping of rooftop PV installations based on multiple timesteps of aerial imagery covering all of Germany. We demonstrate the system with an exemplary wall-to-wall analysis, for which we exploited publicly available registry data as well as manually labelled samples for training data collection. Based on this curated training data set of around 350,000 samples, we trained ensembles of supervised, state-of-the-art deep neural networks (ResNets, ResNeSts and EfficientNets) to predict the presence of PV installations for each building in Germany. Following a rigorous validation exercise based on over 20,000 samples, we report a very high predictive performance with an overall F1-score of 0.96 and a regional variability of +/- 0.03.
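As an illustration of the ensembling step (not the authors' pipeline; the backbone choice and the averaging of per-building probabilities are assumptions), a minimal PyTorch sketch could be:

```python
# Illustrative ensemble inference sketch: average the PV / no-PV probabilities of
# several CNN backbones per building image patch. Assumptions, not the authors' code.
import torch
import torch.nn as nn
from torchvision import models

def build_backbone():
    """A ResNet-50 with a 2-class head (PV present / absent)."""
    net = models.resnet50(weights=None)
    net.fc = nn.Linear(net.fc.in_features, 2)
    return net

@torch.no_grad()
def ensemble_predict(nets, patch):
    """patch: tensor of shape (1, 3, H, W) for one building; returns P(PV present)."""
    probs = [torch.softmax(net(patch), dim=1)[:, 1] for net in nets]
    return torch.stack(probs).mean().item()

# Usage idea: nets = [build_backbone() for _ in range(3)]  # load trained weights in practice
```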
Going beyond single-date classifications, this system enables us to track the growth of PV installations over time throughout Germany. Add-on analyses of installed PVs in combination with spatially explicit solar potential models [2] now allow us to identify and suggest priority areas where policy stimulation promises a high return on investment, fostering the much-needed growth of decentralized PV installations.
References:
[1] M. Victoria, K. Zhu, T. Brown, G. B. Andresen, and M. Greiner, “Early decarbonisation of the european energy system pays off,” Nature Communications, vol. 11, no. 1, 2020.
[2] S. Joshi, S. Mittal, P. Holloway, P. R. Shukla, B. Ó. Gallachóir, and J. Glynn, “High resolution global spatiotemporal assessment of rooftop solar photovoltaics potential for renewable electricity generation,” Nature Communications, vol. 12, no. 1, 2021.
Air quality changes during pandemic lockdowns and pre-pandemic in Germany and the Netherlands
Authors: Lixia Chu1, Christoph Lofi1
Affiliations: 1 Faculty of Electrical Engineering, Mathematics, and Computer Science, Delft University of Technology (TU Delft), the Netherlands
Emails: L.Chu-1@tudelft.nl; C.Lofi@tudelft.nl
The outbreak of coronavirus disease 2019 (COVID-19) has posed a worldwide threat to human beings, economic activities, and society. Lockdowns enforced to limit the spread of the virus also substantially reduced air pollutant emissions from vehicle traffic, industrial plants, and other sources, bringing beneficial environmental implications such as improved air quality. Previous studies recorded reductions in air pollutants during short-term lockdowns in cities and areas in India, China, and the U.S. [1-5], while other studies argue that the improvement in air quality was not due to the lockdowns but to seasonal influence or temporary, coincidental changes [6]. There is therefore not yet sufficient evidence that the improvement in air quality is mainly due to reduced human activities. It is useful to address this question by investigating and comparing air pollution changes within countries that experienced multiple pandemic waves and different lockdown measures.
Our research focuses on Germany and the Netherlands to investigate air pollutant changes during their multiple lockdowns. Both countries went through several pandemic waves, but the imposed strategies differed, ranging from lockdown light, partial lockdown and full lockdown to curfew at different stages of the pandemic. We investigate changes in air quality during the multiple pandemic waves and compare seasonal and monthly changes with historical (pre-pandemic) records from ground stations to analyze anomalies. For the pandemic period, we compare the disparities in air quality improvement across the pandemic waves among major urban agglomerations within the two countries. For the pre-pandemic period, we analyze anomalies relative to the historical records using the air quality index.
In particular, we adopt datasets produced by the space-borne air pollution sensor TROPOMI on the Sentinel-5P satellite, provided in the Google Earth Engine data catalog. We process the data and extract information on air pollutants, including CO, NO2, SO2, O3, and CH4, to analyze changes in air pollutant composition during the pandemic waves. First, the decline in air pollutant concentrations between pandemic waves will be calculated and analyzed to characterize the changes following every wave in the main urban areas of the two countries. Second, by aggregating the satellite-based air pollutant concentrations into monthly, seasonal, and annual values and comparing them with corresponding historical ground-station records from the same periods of the pre-pandemic time, anomalies will be calculated and analyzed to quantify the air quality improvement due to pandemic lockdowns at the country level. The historical record data will be taken from the air quality index based on ground station measurements. Third, the disparities in air pollutant reduction during the pandemic will also be analyzed between the Netherlands and Germany, considering their different lockdown strategies.
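As a sketch of the aggregation step, assuming the Earth Engine Python API and the Sentinel-5P offline L3 NO2 product as published in the GEE catalog (region, period and the chosen reducer below are placeholders, not the study's configuration):

```python
# Sketch of monthly NO2 aggregation from Sentinel-5P TROPOMI in Google Earth Engine
# (Python API). Region and period are placeholders; error handling is omitted.
import ee

ee.Initialize()

def monthly_mean_no2(region, year, month):
    """Mean tropospheric NO2 column (mol/m^2) over 'region' for one calendar month."""
    start = ee.Date.fromYMD(year, month, 1)
    coll = (ee.ImageCollection('COPERNICUS/S5P/OFFL/L3_NO2')
            .select('tropospheric_NO2_column_number_density')
            .filterDate(start, start.advance(1, 'month'))
            .filterBounds(region))
    stat = coll.mean().reduceRegion(reducer=ee.Reducer.mean(),
                                    geometry=region, scale=1113.2)  # native L3 grid ~0.01 deg
    return stat.get('tropospheric_NO2_column_number_density')

# Usage idea:
# netherlands = ee.Geometry.Rectangle([3.3, 50.7, 7.3, 53.6])
# print(monthly_mean_no2(netherlands, 2020, 4).getInfo())
```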
The results will provide evidence on air quality improvement due to reduced human activities during lockdown periods and highlight the influence of anthropogenic activities on air pollution. The resulting information will support policymakers concerned with emission control and sustainable urban development.
Keywords: Air quality changes, lockdowns, pre-pandemic, Google Earth Engine
References:
1. Parida, B.R., et al., Impact of COVID-19 induced lockdown on land surface temperature, aerosol, and urban heat in Europe and North America. Sustainable Cities and Society, 2021. 75: p. 103336.
2. Naqvi, H.R., et al., Improved air quality and associated mortalities in India under COVID-19 lockdown. Environmental Pollution, 2021. 268: p. 115691.
3. Berman, J.D. and K. Ebisu, Changes in U.S. air pollution during the COVID-19 pandemic. Science of The Total Environment, 2020. 739: p. 139864.
4. Sahani, N., S.K. Goswami, and A. Saha, The impact of COVID-19 induced lockdown on the changes of air quality and land surface temperature in Kolkata city, India. Spatial Information Research, 2021. 29(4): p. 519-534.
5. Li, L., et al., Air quality changes during the COVID-19 lockdown over the Yangtze River Delta Region: An insight into the impact of human activity pattern changes on air pollution variation. Science of The Total Environment, 2020. 732: p. 139282.
6. Etchie, T.O., et al., Season, not lockdown, improved air quality during COVID-19 State of Emergency in Nigeria. Science of The Total Environment, 2021. 768: p. 145187.
Green roofs are one element that helps cities adapt to climate change. They reduce greenhouse gases and pollutants in the air layers near the ground, mitigate the urban heat island effect and reduce surface runoff. Green roofs increase urban biodiversity or prevent habitat loss and reduce noise pollution. Moreover, green roofs cool buildings in summer and insulate them during winter. Thus, green roofs can contribute to developing resilient cities. For this reason, green roofs are a response indicator of climate change in the German Adaptation Strategy to Climate Change. The share and area of green roofs are expected to increase in German cities with progressing climate change. The monitoring report of the German Adaptation Strategy requires nation-wide comparable and consistent data, since the indicators should represent the development at national scale. There is, however, no inventory of green roofs at the national level. Even city administrations rarely know if and where green roofs exist and how large their area is. A few cities, such as the German city of Dresden, commissioned a procedure for mapping green roofs with aerial imagery. The Dresden approach, however, is hardly transferable to the entire country, because aerial surveys are planned at state level and are often conducted during winter. In contrast to aerial imagery, satellite data offer country-wide and consistent coverage (at the cost of spatial resolution).
In our project, we aimed to develop an approach for mapping green roofs using spaceborne data. Furthermore, we tried to determine the minimum roof sizes that can be mapped with different spaceborne sensors. As a study area, we selected the city of Dresden, Saxony, since the city council holds a spatially explicit inventory of green roofs from the year 2017; this data set served as training and validation data. As satellite data, we used atmospherically corrected Sentinel-2 and PlanetScope data acquired during summer 2017. To distinguish vegetated and non-vegetated roofs, we calculated and compared three different vegetation indices, namely the Normalised Difference Vegetation Index, the Soil Adjusted Vegetation Index and the Transformed Soil Adjusted Vegetation Index. To separate green roofs from other vegetated areas, we included Level of Detail data (LoD3), which contain roof outlines and types, i.e. flat or pent roofs. PlanetScope data with 3.5 m x 3.5 m spatial resolution allowed mapping of roofs larger than 70 m², whereas Sentinel-2 with 10 m x 10 m spatial resolution only enabled mapping of roofs larger than 400 m². Consequently, Sentinel-2 (balanced overall accuracy ~0.86) was less accurate than PlanetScope (balanced overall accuracy ~0.93). The results hardly differed between the three vegetation indices. Compared to analysing aerial imagery, fewer roofs can be mapped with the satellite sensors used, since only roofs above a certain minimum size are considered. According to the inventory, around 6.8 km² of roofs in Dresden are flat and might be vegetated. Considering roofs larger than 70 m² (PlanetScope), only 5.3 km² of flat roofs can be assessed; with Sentinel-2 (roofs larger than 400 m²), only 2.8 km². Within these flat roof areas, we were able to map 1.2 km² of green roofs with PlanetScope and 0.3 km² with Sentinel-2.
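A sketch of the index-based screening of vegetated roofs, assuming red and NIR reflectance arrays and a rasterized flat-roof mask; the index formulas are the standard definitions, and the threshold is a placeholder rather than the value used in the study:

```python
# Sketch of vegetation-index screening within a flat-roof mask (illustrative
# threshold; not the values used in the study).
import numpy as np

def ndvi(red, nir):
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def savi(red, nir, L=0.5):
    """Soil Adjusted Vegetation Index with the usual soil adjustment factor L."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

def green_roof_mask(red, nir, roof_mask, threshold=0.3):
    """True where a pixel lies on a flat roof and its NDVI exceeds the threshold."""
    return roof_mask & (ndvi(red, nir) > threshold)
```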
Compared to aerial imagery, both satellite sensors can provide a nation-wide data basis with a relatively simple vegetation index approach. The PlanetScope results are more comparable to the aerial surveys, but the data are costly and available on demand only; Sentinel-2, in contrast, is freely available and covers larger areas regularly. Our study shows that roofs of carports and garden sheds cannot be captured with these spaceborne sensors, but smaller residential houses (PlanetScope) and larger residential and commercial buildings or industrial parks (Sentinel-2) can.
Rapid urbanization is exerting pressure on biodiversity, ecosystems, water, and public health; continuous monitoring of urban areas is therefore becoming ever more important. Built-up lands represent the most intensive land uses of humankind in the form of roads, cities, villages, and infrastructure, and consist of human-constructed land surfaces associated with infrastructure, commercial, and residential land uses. Given the heterogeneous nature of built-up lands, advanced algorithms such as deep learning neural networks, which exploit spatial context, are a powerful alternative to traditional per-pixel classifications. The global annual built-up land extent and gain map was made using the GLAD ARD product, a set of 16-day global Landsat normalized surface reflectance and brightness temperature composites combining the best quality observations from the Landsat 5, 7 and 8 satellites, from which we derived annual metrics for 2000-2020.
To facilitate the deep learning application, we employed interquartile metrics from the red, near-infrared and both shortwave infrared bands. Due to the contextual nature of built-up lands, particularly settlements, we employed a deep learning convolutional neural network. We utilized the U-Net architecture, which has proven to work robustly over a variety of remote sensing tasks. To train the algorithm, we used a rasterized version of the roads dataset from OpenStreetMap, from which we randomly sampled patches of 128 x 128 pixels (each pixel corresponding to 30 m x 30 m). Weight decay and data augmentation in the form of down-sampling, rotations, and flips were utilized for the gradient-based optimization of the U-Net parameters.
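A minimal sketch of the augmentation and weight-decay setup described above; the framework, hyper-parameters and exact transforms are assumptions, not the authors' training code:

```python
# Sketch of flip/rotation augmentation and weight decay for U-Net training
# (illustrative; hyper-parameters are placeholders).
import torch

def augment(x, y):
    """Random 90-degree rotations and flips applied identically to image x and mask y,
    both given as tensors of shape (C, H, W)."""
    k = int(torch.randint(0, 4, (1,)))
    x, y = torch.rot90(x, k, dims=(-2, -1)), torch.rot90(y, k, dims=(-2, -1))
    if torch.rand(1) < 0.5:
        x, y = torch.flip(x, dims=[-1]), torch.flip(y, dims=[-1])   # horizontal flip
    if torch.rand(1) < 0.5:
        x, y = torch.flip(x, dims=[-2]), torch.flip(y, dims=[-2])   # vertical flip
    return x, y

# Weight decay enters through the optimizer, e.g. for a U-Net 'model':
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)
```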
Final 2000 and 2020 per-pixel probabilities were generated, and validation data were used to set the final thresholds for the 2000 and 2020 extents and the 2000-2020 gain in built-up lands. The product validation was based on a set of 400 reference 30 m pixels (individual Landsat 30 m pixels) allocated using a stratified random design; the strata represent stable non-built-up land, stable built-up land, and low- and high-confidence built-up gain. Reference data were collected through visual interpretation of time-series Landsat spectral graphs and thumbnail images, aided by high-resolution imagery in Google Earth. We also examined whether built-up land was found in pixels adjacent to the sampled pixel. User's accuracy (reflecting commission error) and producer's accuracy (reflecting omission error) were calculated for all classes, and the results were used to divide the low-confidence stratum into stable or gain classes. The results obtained for the strata were: not built (user's 98.9 ±1.0, producer's 99.5 ±0.1), stable built (user's 56.1 ±4.7, producer's 42.3 ±21.0), and built gain (user's 70.3 ±5.3, producer's 54.8 ±4.4). These accuracies reflect the challenge of mapping built-up lands, including consistently applying the class definition to mixed pixels containing a wide variety of human-built surfaces at the global scale; the rule applied to the reference imagery was that the presence of any built surface or object assigns the pixel to the class of interest. A probability-based sample analysis is the recommended good-practice approach to estimating land cover and land use extent and change. The global built-up land map enables higher sampling efficiency through stratification and can be extended to report on regional and national dynamics; the current validation results inform the global scale only. Spatiotemporal built-up data not only quantify the total change in built-up extent but are also critically needed at the global scale to help manage rapidly growing urbanization.
By 2050, 68% of the world’s population is projected to live in urban areas. During periods of extreme temperatures, urban populations are more severely affected by heat stress, with increases in hospital admissions and higher death rates. It is therefore important that weather and climate models resolve the urban surface energy balance by accurately representing the urban land surface. This requires information about the morphological structure of cities, such as building height, and the footprints of both buildings and non-building impervious areas.
While weather and climate models are still not routinely run at scales that can resolve individual buildings, a common parameterisation in urban modelling schemes is to take a two-tile approach by separately representing roof tops and 2D street-canyons. This allows the surface energy balances to be calculated separately, accounting for differences in heat storage capacity, albedo and shadowing. This approach is used in the Met Office–Reading Urban Surface Exchange Scheme (MORUSES), which forms part of the JULES land surface model, and is used in high resolution regional configurations of the Met Office Unified Model.
The CCI Medium Resolution Land Cover project has explored various earth observation approaches to mapping urban morphology. In practice, two parameters are needed to describe urban geometry: the frontal area index, which is derived from building heights and building footprints, and the planar area index, which is derived from building footprints and total impervious area. Here we present an assessment of the application of existing observational datasets for the calculation of these urban morphology parameters.
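For reference, commonly used definitions of these two parameters, consistent with the inputs listed above, are (the notation is ours, and the exact formulation used in MORUSES may differ in detail):

```latex
\lambda_f \;=\; \frac{A_{\mathrm{frontal}}}{A_{T}}
\qquad\text{and}\qquad
\lambda_p \;=\; \frac{A_{\mathrm{plan}}}{A_{T}},
```

where A_frontal is the total frontal (wall) area of buildings facing the approaching wind, obtained from building heights and footprints, A_plan the plan area of buildings and other impervious surfaces, and A_T the total surface area of the grid cell.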
Previously, the inputs used to calculate the morphological parameters were not widely and consistently available across global cities, so empirical relationships were used to gap-fill morphological data across the entire modelling domain. In MORUSES, for example, high-resolution land cover and building height data available for London were used to derive the planar and frontal area indices for the city; an empirical relationship based on urban fraction was then fitted for London and subsequently applied to all other cities. The drawback of this approach is that, for a given value of urban fraction, all cities are assumed to have the same structure (compactness, height, canyon geometry) as London.
Here, we identify ways to improve on this approach using recent advances in urban morphology mapping, and we discuss persistent challenges in deriving these parameters at the pixel level for application in urban weather and climate models.
Negative effects of climate change lead to diverse and extensive impacts. While some countries and cities are more vulnerable than others to uncertain outlooks, reliable tools to assess climate risks, drive decisions and turn threats into opportunities are increasingly needed. Geospatial environmental data are globally available, covering populated as well as remote areas; the pool of data reaches back decades in time and grows day by day. Satellite data play a crucial role in improving the multi-dimensional description of the Earth system. This invaluable resource, when merged with socio-economic information and other open and free datasets, enables us to better understand the dynamics of a globally changing climate and thus supports rapid and sound decision making. The role of cities in shaping our future is ambivalent: the way they grow and develop is of great importance for the future climate. Cities are the vanguard of a transition to a sustainable future and can help move us closer to achieving international development goals. This potential can be exploited if the wealth of data and information on cities and activities in cities is used and linked efficiently.
ClimateLynx (see the concept video here http://sistema.at/pub/lynx.mp4) is a knowledge management system for climate related data and information. A knowledge base, also called a "second brain", is a tool that supports creating relationships between data and information to help think better. In our proposed service, the knowledge we want to gather, explore and exploit is data relevant for climate change induced decision making. Our vision is to create a constantly growing and evolving climate change knowledge graph supporting decision and policy makers, contributing to the sustainable development of cities and helping move us closer to achieving current and future climate pledges, and eventually a more sustainable future for all. ClimateLynx includes climate data as well as data from interdisciplinary domains, such as socio-economy (e.g., World Bank - https://data.worldbank.org/, Asian Development Bank - https://www.adb.org/what-we-do/data/main) or health (e.g., World Health Organization - https://www.who.int/data/collections). The scope is to bring these data together and thus generate location- and time-relevant insight. This way, a holistic approach to strengthening resilience is fostered. When the data pools are fused and put into context, it becomes possible to generate connections and correlations between indicators of different domains. The combination and linkage of inter-domain indicators could help to better understand interdisciplinary, climate change induced global dynamics and tail effects; moreover, non-obvious linkages between indicators or domains could be highlighted or even uncovered. With the help of such a climate knowledge portal, it could be possible to detect negative emerging climate trends earlier, based on the time series analysis of indicators, and to react adequately. ClimateLynx is devoted to decision makers, urban planners and data experts alike. Urban planners can take advantage of ClimateLynx by comparing initiatives and developments with other cities of, e.g., similar size, similar climatic conditions, or similar GDP; this enables efficient planning and can support ideas and initiatives to create more liveable and climate resilient cities. Likewise, data experts might be interested in exploring the various data sets and creating new connections by linking indicators from different domains (e.g., socio-economy, health, climate) and thus discovering location-relevant specificities.
ClimateLynx is built on top of the data access and processing capabilities offered by the ADAM platform (https://adamplatform.eu) to quickly access and process large hyper-volumes of data. Through ADAM, ClimateLynx is fed with climate indicators calculated from data from historic, currently operating, and future satellite missions. Global climate indicators are computed periodically, and city-aggregated information is extracted offline to offer an optimal user experience.
Climate change is one of the biggest challenges of recent times, with economic, societal, and environmental impacts worldwide. In response to these challenges, the European Union (EU) proposed the EU Green Deal, which sets a blueprint that commits to transforming the EU into the first climate-neutral continent by 2050. To this end, innovative solutions for climate change adaptation and mitigation measures must be implemented at regional and local scales. The H2020 Green Deal project IMPETUS aims to develop and validate a coherent multi-scale, multi-level, cross-sectoral adaptation framework for climate change, paving the way towards a climate-neutral and sustainable future. This will be achieved by building on resilience knowledge and by co-designing, together with local communities and stakeholders, innovative packages of methodological, technical, governance and financial solutions.
To optimize the expected benefit from these solutions, minimize implementation costs and maximize societal acceptance, the Adaptation Pathways should be targeted at prioritized regions which are exposed to high-risk climatic stresses, are vulnerable to these changes and/or have lower adaptive capacities (based on installed infrastructure, policy, etc.) to address the challenges. These regions can be considered climate change "hot-spots", and in this paper we report on the development of a web-based geospatial analysis service that will assist in identifying them. The service will comprise a superset of methodologies, indicators and metrics regarding climate change vulnerability, risk assessment and adaptation, city resilience and water sector smartness, building on the experience and products of several EU-funded research projects, e.g. Blues Cities and BWaterSmart. The hot-spot analysis based on these metrics will use collections of high-resolution spatiotemporal data describing key hydrological, atmospheric, land cover and climatic parameters from repositories of satellite imagery and ground-based observations (such as the Copernicus Earth observation programme and the Sentinel satellite missions) together with future projections of these data (based on stochastics and scenarios) to generate future scenarios of climatic conditions. The tool will thus feature a stress and vulnerability assessment framework tailored to the specific characteristics of regions, able to identify hot-spots associated with different climatic futures. This work will also leverage ongoing methodological developments within the ESA-funded research project extrAIM, specifically benefiting from its innovative uncertainty-aware computational framework for satellite data on hydroclimatic extremes.
It is envisaged that this uncertainty-aware climate change "hot-spot" identification tool will be available as an EU-wide service and will be used as a screening tool for policymakers, prioritizing attention towards the development of regional adaptation pathways to climate change. As such, it will contribute to the European research area's efforts to take the EU one step closer to achieving its ambitious Green Deal objectives.
The MUltisensor based SErvices (MUSE) project, carried out by ENEA in collaboration with INGV, the Hypatia Consortium and the companies Superlectric, Ylichron and G-Matics, is developing a multi-hazard service for urban environments to assess risks related to geodynamic, atmospheric and environmental issues that can cause damage to the territory and citizens. The aim of the project is to develop a complete platform of services for stakeholders whose task is to protect the territory, the environment and citizens, such as infrastructure operators and municipalities. MUSE is structured around three monitoring activities: geophysical and geodynamic monitoring, with geophysical soil analysis in urban and extra-urban environments; atmospheric monitoring through a UAV-mounted multispectral sensor, with analysis of the atmosphere close to the ground and all its natural and anthropogenic components (such as pollutant gases); and environmental monitoring, relating to changes in the urban fabric involving solid (e.g. illegal dumping) and liquid (e.g. spills) waste. The individual monitoring systems include different components, acquiring data of different natures, independent of each other and with different spatial and spectral resolutions, which together provide an accurate and reliable picture of the territory. The first set are the remote service components, related to the analysis of data from different satellite constellations (such as Copernicus Sentinel-1), particularly interferometric services (InSAR); the analysis of these data provides spatially coarse information on the ground but with a high capacity for detecting vertical movement, which is particularly useful in urban environments. Next are the proximal service components, related to the analysis of data from drone-mounted instrumentation; these can be either land state data (multispectral data analysis) or chemical air analysis data, obtained through a new class of measurement systems based on spectral analysis of collected gases. Lastly, the ground service components relate to the analysis of data from ground-based instrumentation (such as ground-penetrating radar); these data, acquired with microwave antennas (from 200 MHz to 2000 MHz), make it possible to better characterize the subsoil near the road surface (up to a depth of 3-4 metres), to analyse problems with underground services and to highlight the presence of sinkholes. The MUSE project, which aims to develop an innovative, sustainable and low-cost solution, is part of the necessary actions towards a green transition and represents a powerful tool for non-invasive infrastructure monitoring and sustainable infrastructure development with a view to reducing environmental impact.
A numerical simulation with a Computational Fluid Dynamics (CFD) model in a high-rise building zone of Madrid (Spain) is performed to analyze the effects of skyscrapers on urban meteorology and air pollution. High-rise buildings have a large impact on urban wind patterns, and wind is one of the most important variables affecting both pedestrian comfort and the dispersion of outdoor urban pollution. The CFD simulation is driven by a mesoscale simulation, which provides meteorological and air quality boundary conditions every 10 minutes. CFD models surpass mesoscale models in describing pollutant dispersion in urban environments, since the latter cannot capture local phenomena because of their coarse grid resolution.
The CFD simulations were run using the Parallelized Large-Eddy Simulation Model (PALM) adapted to urban areas (PALM4U) for atmospheric flows. PALM4U was developed by Leibniz University Hannover in Germany; it includes a dynamic solver for the Navier-Stokes equations and the first law of thermodynamics. It was developed as a turbulence-resolving large-eddy simulation (LES) model, but to reduce computational costs we ran PALM4U with a Reynolds-Averaged Navier-Stokes (RANS) type turbulence parameterization. The RANS modeling approach applies the Reynolds-averaging operator to the Navier-Stokes equations, resulting in new unknowns, the Reynolds stresses; these stresses can be linked to the flow variables in different ways, which defines the type of turbulence model. In LES, a spatial filtering operator is used to separate two categories of motion scales: the large eddies are highly problem-dependent and are directly resolved, while the smallest scales of motion have a more universal behavior and their effect on the flow field can therefore be modeled by a so-called subgrid-scale (SGS) model. Contrary to steady RANS, the LES approach computes a time-dependent solution and is usually more demanding in terms of computational resources.
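For completeness, Reynolds averaging decomposes each velocity component into a mean and a fluctuating part; averaging the momentum equations then introduces the Reynolds stress tensor that the RANS turbulence model must close (standard notation, not specific to PALM4U):

```latex
u_i \;=\; \overline{u_i} + u_i', \qquad
\tau_{ij}^{\mathrm{R}} \;=\; -\,\rho\,\overline{u_i' u_j'},
```

where the Reynolds stresses appear as new unknowns after averaging the Navier-Stokes equations and are modelled, for example, with an eddy-viscosity closure.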
The meteorological and air quality boundary conditions of the CFD domain were obtained from a meteorological and air quality simulation with the Weather Research and Forecasting model with Chemistry (WRF/Chem). The WRF/Chem simulation was run over three nested domains with 25 km, 5 km and 1 km spatial resolution.
The computational domain covers 2 km by 2 km with a horizontal and vertical spatial resolution of 10 m. The 3D domain has a height of 500 m, to cover the highest buildings. The tallest buildings in the area are: a) the KIO towers, twin office buildings 114 m high with an inclination of 15°, and b) the Four Towers Business Area, consisting of four tall buildings (250 m high) that have become the tallest buildings in Spain. The two sites are less than 1 km apart and are connected by the longest and widest street of Madrid, which carries a high traffic flow. The simulation period corresponds to the year 2017.
The 3D building data and the tree locations and types (broad-leaved or coniferous) are obtained from the Copernicus Land Monitoring Service with a spatial resolution of 10 m. The buildings layer is a 10 m resolution raster containing building height; the height information is based on IRS-P5 stereo images and derived datasets such as the digital surface model and the digital terrain model. The Urban Atlas provides pan-European comparable land use and land cover data for Functional Urban Areas (FUA), and the Street Tree Layer (STL) is a separate layer from the Urban Atlas LC/LU layer produced within the level 1 urban mask for each FUA. The Urban Atlas is a joint initiative of the European Commission Directorate-General for Regional and Urban Policy and the Directorate-General for Defence Industry and Space (DEFIS) in the frame of the EU Copernicus programme, with the support of the European Space Agency and the European Environment Agency. Buildings are included as solid obstacles that affect the flow dynamics via form drag and friction forces. Natural and paved surfaces in urban environments are taken into account using a multi-layer soil model. It is well known that vegetation canopy effects on the surface-atmosphere exchange of momentum, energy and mass can be rather complex and can significantly modify the structure of the atmospheric boundary layer (ABL), particularly in its lower part. The CFD model takes into account the aerodynamic and vegetation resistance as well as emission, deposition and energy flux effects depending on the type of trees. Urban vegetation is an important element influencing the dispersion of pollutants in cities; in fact, the impacts of vegetation in urban areas and its application as a mitigation measure for urban air pollution are currently under discussion.
We have combined the microscale traffic model Simulation of Urban MObility (SUMO) with the EMIMO emission model (UPM, ES), based on the detailed (Tier 3) methodology described in the EMEP/EEA Air Pollutant Emission Inventory Guidebook, to provide street-level traffic emissions with high spatial and temporal resolution. The emission estimates are based on very detailed fleet composition data from the Madrid municipality and vehicle flow data from the SUMO model.
The mesoscale and CFD simulations have been evaluated by comparing the numerical results against observations collected from the available monitoring stations in the domains. The model results agree well with the measurements: the r² value of a linear regression between model and measurement data is acceptable, and the CFD simulation improves on the performance of the mesoscale simulation. The simulation tool reproduces the dynamics of pollutant concentrations correctly within an acceptable uncertainty level.
The very high resolution of the CFD simulation reveals detailed spatial patterns of air pollution dispersion and wind flow. The results of this applied study show the influence of high-rise buildings on the wind patterns and the dispersion of pollutants. The wind flow and dispersion patterns are complex, and the concentration of pollutants is strongly affected by the heterogeneity of the urban 3D layout. The results clearly show how high-rise buildings affect the surrounding air flows and dispersion patterns within a small neighbourhood, generating "dead zones" and high-concentration "hotspots". This simulation will help to understand environmental processes as they occur in real and complex urban geometries.
TreeCop – evaluating the feasibility of Sentinel-2 data for drought stress detection in urban trees as an input for a data-driven small-scale irrigation management system in the City of Essen
Max Gerhards 1*, Henning Buddenbaum 1, Jessica Friedrichs 1, Christian Lindner 2 and Frank Knospe 2
1 Earth Observation and Climate Processes, Trier University, 54286 Trier, Germany; buddenba@uni-trier.de, s6jafrie@uni-trier.de
2 Stadt Essen, Amt für Geoinformation, Vermessung und Kataster, Lindenallee 10 (Deutschlandhaus), 45127 Essen, Germany; frank.knospe@amt62.essen.de, christian.lindner@amt62.essen.de
* Correspondence: gerhardsm@uni-trier.de; Tel.: +49-651-201-4596
Abstract: Over the past 20 years, Central Europe has been affected by six severe summer heatwaves with prolonged droughts. As a consequence of the climate crisis, the frequency and severity of such heatwaves will increase. Especially urban areas, where two-thirds of all Europeans are resident, are highly vulnerable to heatwaves, which have direct negative effects on human health and well-being. Urban green and in particular urban trees provide crucial ecosystem services (i.e., shade and cooling, biological diversity, water and air quality, carbon storage, local recreation, etc.). These services mitigate the negative effects associated with climate change, and thus contribute to the quality of life in our cities and comply with several international development goals (e.g., SDG 3 – good health and well-being, SDG 6 – clean water, SDG 11 – sustainable cities and communities).
However, urban trees are also exposed to climate change-related increases in drought stress and tree dieback as a result of heatwaves, leading to a reduction in their ability to provide ecosystem services. Therefore, early detection of drought-stressed trees, in the context of a healthy urban climate and increased climate resilience of cities, is particularly important to ensure the preservation and function of urban greenery through targeted irrigation of stressed trees while conserving drinking water resources.
Today, several earth observation platforms are available for drought monitoring. Especially open-access and freely available Sentinel-2 imagery, with its high spatial resolution, offers great potential for drought stress detection in cities based on several spectral indices, and is thus very attractive for municipal applications. Nevertheless, how accurately Sentinel-2 can map the heterogeneity of urban environments and how sensitively the available indices (e.g., MSI, NDVI, NDWI, NDRE) react to different severities of stress is still not well understood.
During a one-year pilot project named 'TreeCop', the feasibility of Sentinel-2 imagery for drought stress detection in the city of Essen has been evaluated as an input for an optimized small-scale irrigation management system. In addition to Sentinel-2 satellite data, airborne hyperspectral data (HySpex) and very high-resolution Planet satellite imagery have been used to record, classify and evaluate the condition of urban trees. Furthermore, over 40 in-situ sensors distributed over the city of Essen provide a unique validation basis with valuable data on the soil condition, especially about the water balance of the trees.
In conclusion, the 'TreeCop' project developed a very promising prototype that provides information on the condition and irrigation needs of urban trees based on Sentinel-2 data, Planet imagery, hyperspectral airborne data, and in-situ sensors. However, various difficulties have been identified that need to be addressed and solved within the framework of future projects. For example, these problems include the so-called 'time-lag effect' (the temporal offset between cause and effect) and the integration of meteorological data to better describe how urban trees adapt to their heterogeneous environment.
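The spectral indices named above have standard band-ratio forms; a small sketch with Sentinel-2 band names follows (the formulas are the commonly used definitions, not necessarily the exact variants evaluated in TreeCop):

```python
# Commonly used forms of the indices named in the abstract, written for Sentinel-2
# bands (B4 red, B5 red edge, B8 NIR, B11 SWIR); not necessarily the exact variants
# evaluated in the project.
import numpy as np

def _ratio(a, b):
    return (a - b) / np.clip(a + b, 1e-6, None)

def ndvi(b4, b8):   return _ratio(b8, b4)                    # vegetation vigour
def ndre(b5, b8):   return _ratio(b8, b5)                    # red-edge chlorophyll sensitivity
def ndwi(b8, b11):  return _ratio(b8, b11)                   # canopy water content (Gao)
def msi(b8, b11):   return b11 / np.clip(b8, 1e-6, None)     # moisture stress index
```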
Keywords: climate crisis, drought stress, urban trees, smart cities, human well-being, Sentinel-2, Planet
Growing metropolitan areas have the potential to affect the climate of local neighborhoods and have thus become a hot topic in regional planning. This study contributes to climate change-related land cover simulation efforts in Germany. It investigates future land consumption rates and population growth rates with Goal 11 of the United Nations' Sustainable Development Goals (SDGs) in view. Secondly, the results are embedded in the sociological field of environmental justice, also addressing SDG 3 and 11, by analyzing socio-economic status in areas with distinct conditions. The study analyzes the spatial impact of planning policies with regard to land use planning and official climate change prevention strategies in western Germany. A scenario-based urban growth simulation model (SUSM) is used to simulate the land use and land cover of 2030 based on land use and cover maps of the years 1985, 2005, 2010, and 2017 derived from classified Landsat data. Two scenarios, planned and unplanned, were implemented to assess future land consumption in 2030, the impacts of future urban growth in terms of the land consumption rate (LCR), population growth rate (PGR), and LCRPGR index at municipality level, and the impact on regions vulnerable to climate change. The comparison of simulated and observed urban growth for 2005-2017 shows that the producer accuracy of SUSM for the historic scenario is 68%, with an overall accuracy of 97%, a Matthews correlation coefficient of 0.66, a figure of merit of 0.51 and an area under the curve of 0.84, all of which indicate good model performance. The total quantity of new urban areas in our 2030 SUSM simulation was approximately 283 km². Our results show that the LCRPGR is negative in most municipalities, reflecting opposing trends of population and land consumption development. Using Landsat data, average summer land surface temperatures (LST) were estimated and trends for 1985-2020 calculated. Citizens delineated areas within zones with a thermal compensation function in the city of Bonn; all of these areas have experienced an increase in summer LST of 3-5°C. About 33% of the new urban areas in our region of interest fall within these zones in the planning scenario and about 26% in the scenario without planning information in the SUSM model. In addition, the study reveals socio-demographic patterns in the context of past LST developments and future urban densification and sprawl processes. The presentation will show how socio-economic clusters are detected across spatial scales based on qualitative indications. These findings may help to categorize city districts in terms of their residential composition and to plan accordingly.
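For reference, the commonly used SDG 11.3.1 definitions underlying these quantities, with Urb and Pop the built-up area and population at times t and t+n (the study may use a variant of this formulation), are:

```latex
\mathrm{LCR} \;=\; \frac{\ln\!\bigl(\mathrm{Urb}_{t+n}/\mathrm{Urb}_{t}\bigr)}{n},
\qquad
\mathrm{PGR} \;=\; \frac{\ln\!\bigl(\mathrm{Pop}_{t+n}/\mathrm{Pop}_{t}\bigr)}{n},
\qquad
\mathrm{LCRPGR} \;=\; \frac{\mathrm{LCR}}{\mathrm{PGR}}.
```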
Urban areas are particularly vulnerable to multiple environmental risks due to high concentrations of people, infrastructure, and stressors. Remote Sensing has the potential to contribute to their management by providing detailed, timely, and consistent information on urban areas. This information can contribute to solving local to global scale problems, including disaster risk reduction, air quality, as well as climate change adaptation and mitigation, all of which have particular relevance for, or are concentrated in, urban areas.
However, this potential is currently not fully exploited. According to a recent review by Zhu et al. (2019), Urban Remote Sensing limits itself to case studies and methodological advancements, lacking comprehensive perspectives and diversity in terms of city size and geographic location. Moreover, they suggest strategic directions, namely larger temporal frequency and extent, more detail on urban heterogeneity (beyond the urban mask) as well as form and structure (the surface and the vertical dimension), and finally better linkage with ecological, social, and economic processes and data.
We argue that Local Climate Zones (LCZ) are a suitable means to progress in several of these directions. LCZ are a generic description of urban (and rural) landscapes at local scales, i.e. several hundreds of metres to kilometres (Stewart and Oke 2012). They emerged from urban climatology and were originally created to describe urban observational sites, yet they have proven useful in various contexts as a simple, generic, physically based description of urban structures and their heterogeneity (Bechtel et al. 2015, Ching et al. 2018). Moreover, they come with a set of physical parameters, which can be used in models as a first-order approximation of physical urban characteristics but can also be linked with the metabolism of cities and thus their social and economic properties.
LCZ mapping has become very popular both in the Urban Climate and in the Remote Sensing and Image Analysis (e.g. Yokoya et al. 2018) communities. In this contribution we present the latest developments in the field, including large-scale maps (e.g. Demuzere et al. 2019) and an LCZ classification web application, the LCZ Generator (Demuzere et al. 2021), which to date has been used to produce about 500 city LCZ maps.
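For readers unfamiliar with how such maps are typically produced, the sketch below illustrates the common supervised, pixel-based approach (a random forest trained on an EO feature stack with digitized LCZ training areas); it is a generic illustration with synthetic placeholder arrays, not the LCZ Generator implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Placeholder inputs: a (bands, rows, cols) stack of resampled EO features (e.g. seasonal
# Landsat/Sentinel composites) and a raster of digitized LCZ labels (0 = unlabelled).
features = rng.normal(size=(6, 120, 120)).astype("float32")
training = np.zeros((120, 120), dtype=int)
training[10:25, 10:25] = 2        # e.g. an LCZ 2 (compact midrise) training polygon
training[80:100, 60:85] = 14      # e.g. an LCZ D (low plants) training polygon

bands, rows, cols = features.shape
X = features.reshape(bands, -1).T              # (pixels, bands)
y = training.ravel()

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X[y > 0], y[y > 0])                    # train on labelled pixels only
lcz_map = clf.predict(X).reshape(rows, cols)   # per-pixel LCZ class map
```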
Besides these ongoing mapping efforts, we present existing and potential applications of LCZ maps in urban and global climatology and beyond. As one highlight, we show how LCZs can be used to analyse the impact of urban structures on different types of Urban Heat Islands globally, which is part of the research conducted in the DFG-funded project ENLIGHT (ENabling the anaLysIs of Global urban HeaT). Such detailed knowledge of both the spatial variation and the daily and seasonal dynamics of the urban thermal effect can guide interventions and planning, and thus increase urban resilience.
References
Bechtel, B., Alexander, P. J., Böhner, J., Ching, J., Conrad, O., Feddema, J., et al. (2015). Mapping Local Climate Zones for a Worldwide Database of the Form and Function of Cities. ISPRS International Journal of Geo-Information, 4(1), 199–219. https://doi.org/10.3390/ijgi4010199
Ching, J., Mills, G., Bechtel, B., See, L., Feddema, J., Wang, X., et al. (2018). WUDAPT: An Urban Weather, Climate, and Environmental Modeling Infrastructure for the Anthropocene. Bulletin of the American Meteorological Society, 99(9), 1907–1924. https://doi.org/10.1175/BAMS-D-16-0236.1
Demuzere, M., Kittner, J., & Bechtel, B. (2021). LCZ Generator: A Web Application to Create Local Climate Zone Maps. Frontiers in Environmental Science, 9. https://doi.org/10.3389/fenvs.2021.637455
Demuzere, M., Bechtel, B., Middel, A., & Mills, G. (2019). Mapping Europe into local climate zones. PLoS ONE, 14(4). https://doi.org/10.1371/journal.pone.0214474
Stewart, I. D., & Oke, T. R. (2012). Local Climate Zones for Urban Temperature Studies. Bulletin of the American Meteorological Society, 93(12), 1879–1900. https://doi.org/10.1175/BAMS-D-11-00019.1
Yokoya, N., Ghamisi, P., Xia, J., Sukhanov, S., Heremans, R., Tankoyeu, I., et al. (2018). Open Data for Global Multimodal Land Use Classification: Outcome of the 2017 IEEE GRSS Data Fusion Contest. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(5), 1363–1377. https://doi.org/10.1109/JSTARS.2018.2799698
Zhu, Z., Zhou, Y., Seto, K. C., Stokes, E. C., Deng, C., Pickett, S. T. A., & Taubenböck, H. (2019). Understanding an urbanizing planet: Strategic directions for remote sensing. Remote Sensing of Environment, 228, 164–182. https://doi.org/10.1016/j.rse.2019.04.020
As growing urbanisation is challenging the way we live and interact with the natural
environment, in 2018 Eurisy launched the “Space4Cities” initiative to stimulate the use of
satellite applications to shape our cities in a healthier, cleaner, safer, and more efficient way.
The key strength of the initiative is the vast exchange of expertise and know-how among city
managers, SMEs as well as research centres, all convened on a mutual goal: raising awareness
on the satellite applications that have already proved their added value to improve city
management and resilience. Additionally, the goal is to identify those challenges that prevent cities from fully profiting from the potential benefits of satellite-based applications. After three
years of implementation, Eurisy can provide evidence-based recommendations on what works
and on the challenges faced by cities, including ways to overcome them. Such feedback is intended to reach service providers, space agencies and policymakers, who can take advantage of it to best facilitate the use of satellite data in the management of cities.
Satellite imagery is already employed in cities. To mention just a few examples, satellite data
are highly reliable when it comes to identifying urban heat islands, making predictions about
the impact of different traffic scenarios, balancing green and built areas or simply monitoring
air quality. Additionally, satellite navigation offers crucial data in providing real-time
information to enhance public transports or guide persons with disabilities in their daily
movements. Furthermore, satellite communications can help connect rescue teams when other connections are down and support the performance of health checks in public spaces.
Although several satellite-based services have already proved their added value in fostering
cities' efficiency, resilience and sustainability, there is still a lack of awareness among public
administrations about the existence of such services, and especially the use of satellite Earth
observation data. This highlights the need to better communicate about the existence of
satellite applications and their potential uses, not only to audiences with an interest towards
ICTs, but also to the general public and to local administrations, notably avoiding technical
jargon. With this in mind, Eurisy created a database of success stories and good practices to demonstrate how digital products and services that rely on satellite data can be used by cities in the smart management of urban space.
Given the general understanding of satellite data as being just "innovative" rather than established "practice", Eurisy finds that the key to turning innovation into operation is to focus on real needs.
This means that, on the one hand, service providers need to learn about the priorities and
needs of city departments. On the other hand, public administrations need to understand
what parameters satellites can monitor, at what resolution and how often, and need to be
aware of the time and resources they are expected to invest.
To enhance such communications, Eurisy has also produced videos on the use of Copernicus
data by local and regional administrations, that would find an ideal audience at the Living
Planet Symposium.
Mitigation and adaptation actions that enhance the resilience of cities need to be based on a sound understanding and quantification of the drivers of urban transformation and settlement structures, of human and urban vulnerability, and of local and global climate change. A major challenge for the Earth Observation (EO) community is the innovative exploitation of Copernicus products to address urban sustainability and increase urban resilience. Due to the multidimensional nature of urban resilience, meeting this challenge requires information from more than one Copernicus Core Service, namely the Land Monitoring Service (CLMS), the Atmosphere Monitoring Service (CAMS), the Climate Change Service (C3S) and the Emergency Management Service (EMS). Furthermore, addressing urban resilience requires spatially disaggregated environmental information at local (neighbourhood) scale. Such information is not yet directly available from the Copernicus Core Services, while several elements - data and products - from contemporary satellite missions constitute valuable tools for retrieving urban environmental parameters at local scale.
The H2020-Space project CURE (Copernicus for Urban Resilience in Europe) is a joint effort of 10 partners from 9 countries that synergistically exploits the above Copernicus Core Services to develop cross-cutting applications for urban resilience, covering climate change adaptation/mitigation, energy and economy, as well as healthy cities and social environments, in several European cities. CURE applications cope with the required scale and granularity by integrating or exploiting third-party data, in-situ observations and modelling. CURE uses DIAS (Data and Information Access Services) to develop a system capable of supporting operational applications and downstream services across Europe. The CURE system hosts the cross-cutting applications, enabling their incorporation into operational services in the future.
CURE is developed around two pillars: the first is the proof of concept of Copernicus cross-cutting applications, developing and benchmarking applications related to the different dimensions of urban sustainability (climate change mitigation and adaptation, healthy cities and social environments, energy and economy) based on Copernicus Core Services products (CLMS, CAMS, C3S and EMS); the second concerns the CURE system development, using methods and sample data from each cross-cutting application, as well as the evaluation of its urban resilience potential and its economic feasibility. Use-case scenarios are developed to specify user requirements and, therefore, data requirements for cross-cutting application development, as well as a roadmap for the potential integration of CURE into the Copernicus architecture.
In the first phase of its implementation, CURE analysed and documented the specific user requirements, after extensive dialogue with potential users from city planning offices. The analysis of the requirements indicates varying levels of interest in the different CURE applications and stresses a few common denominators to follow in order to increase the replicability of the CURE applications across other European cities. From the CURE perspective, it is important to find the right balance between harmonised solutions across various cities and more tailor-made solutions for specific needs, i.e. involving the use of data from cities and the development of specific products for specific needs. There is always room for the development of more effective solutions combining EO data with in-situ data, e.g. socio-economic, mobility, etc. However, the challenge is to identify common features so that generic solutions can be developed for wider market replicability. Furthermore, the design of these solutions should be extendable, letting researchers, downstream application developers and cities build innovative solutions which can further fulfil local policy and planning needs. The CURE city-specific requirements identified above provided the basis for the development of the technological concept, providing experimental proof, validation and demonstration of the cross-cutting applications.
Moreover, in the first phase of CURE implementation, interfaces to the currently available and future Copernicus portfolio of Core Services were developed in order to standardize and streamline the required inputs into the cross-cutting applications, both for the development and the replication phases [1]. The CURE Copernicus Core Service Interface (CCCSI) facilitates access to primary data for the urban applications in line with the user requirements and provides functionalities for the individual CURE applications and the CURE System to interact with relevant data for each developed service and test area. Existing Copernicus data and services repositories (e.g., the Copernicus Climate Data Store) are integrated, as well as resources available on the DIAS platform, provided by commercial actors or local city OpenData. Data packages are provided as mirrored/meta-database collections subject to a particular application provision strategy. OGC OpenSearch (including the EO extension) is used as a standard service facilitating the aggregation of results between disparate data providers and collections. A further extension into semantic-based search, using an urban resilience ontology as a dedicated transversal service for service providers, is being considered.
Furthermore, in the first phase of the project implementation, EO-based methods were adapted to estimate urban parameters at spatial and temporal resolutions appropriate for urban planning processes, using Core Services products, specifying and analysing the associated uncertainties [2]-[4], and evaluating them using in-situ observations. For example, the LST downscaling approach for reaching local-scale observations developed in the URBANFLUXES project [5] was parameterized with input from the Copernicus Core Services to achieve dynamic monitoring of the urban surface cover, using various products from CLMS, and of the state of the atmosphere, using products from C3S (as indicated in Fig. 3). The method is currently being validated with in-situ data for the case study of Heraklion, Greece.
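The downscaling step can be summarized, in very simplified form, by the generic "regress at coarse scale, predict at fine scale, reinject residuals" scheme sketched below with synthetic arrays; this is an illustrative outline only, not the URBANFLUXES/CURE implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic stand-ins: three predictors (e.g. imperviousness, NDVI, albedo) on a coarse
# 1 km grid plus the coarse LST, and the same predictors on a 10x finer grid.
coarse_pred = rng.random((3, 20, 20))
coarse_lst = 290 + 10 * coarse_pred[0] - 5 * coarse_pred[1] + rng.normal(0, 0.3, (20, 20))
fine_pred = np.repeat(np.repeat(coarse_pred, 10, axis=1), 10, axis=2)  # placeholder fine-scale predictors

# 1) Fit the predictor -> LST relation at the coarse scale.
X_c = coarse_pred.reshape(3, -1).T
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_c, coarse_lst.ravel())

# 2) Apply the fitted relation at the fine scale.
lst_fine = model.predict(fine_pred.reshape(3, -1).T).reshape(200, 200)

# 3) Residual correction: reinject coarse-scale residuals so the aggregated result stays consistent.
residual = coarse_lst - model.predict(X_c).reshape(20, 20)
lst_fine += np.repeat(np.repeat(residual, 10, axis=0), 10, axis=1)
print(lst_fine.shape, float(lst_fine.mean()))
```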
CURE is expected to increase the value of Copernicus Core Services for future emerging applications in the domain of urban resilience, exploiting also the improved data quality, coverage and revisit times of the future satellite missions. Thus, CURE will lead to more efficient routine urban planning activities with obvious socioeconomic impact, as well as to more efficient resilience planning activities related to climate change mitigation and adaptation, resulting in improved thermal comfort and air quality, as well as in enhanced energy efficiency. Specific CURE outcomes could be integrated into the operational Copernicus service portfolio. The added value and benefit expected to emerge from CURE is related to transformed urban governance and quality of life, because it is expected to provide improved and integrated information to city administrators, hence effectively supporting strategies for resilience planning at local and city scales, towards the implementation of the Sustainable Development Goals and the New Urban Agenda for Europe.
ACKNOWLEDGEMENTS
This work was supported by the European Union Horizon 2020 research and innovation programme (grant agreement 870337, project CURE).
REFERENCES
[1] M. Med and P. Křemen, “Context-based ontology for urban data integration,” in Proceedings of the 19th International Conference on Information Integration and Web-based Applications & Services - iiWAS ’17, 2017, pp. 457–461.
[2] Z. Mitraka, G. Doxani, F. Del Frate, and N. Chrysoulakis, “Uncertainty Estimation of Local-Scale Land Surface Temperature Products Over Urban Areas Using Monte Carlo Simulations,” IEEE Geosci. Remote Sens. Lett., pp. 1–5, 2016.
[3] B. Crawford, C. S. B. Grimmond, H. C. Ward, W. Morrison, and S. Kotthaus, “Spatial and temporal patterns of surface–atmosphere energy exchange in a dense urban environment using scintillometry,” Q. J. R. Meteorol. Soc., vol. 143, pp. 817–833, 2017.
[4] B. Crawford, S. B. Grimmond, A. Gabey, M. Marconcini, H. C. Ward, and C. W. Kent, “Variability of urban surface temperatures and implications for aerodynamic energy exchange in unstable conditions,” Q. J. R. Meteorol. Soc., vol. 144, pp. 1719–1741, 2018.
[5] N. Chrysoulakis et al., “Urban energy exchanges monitoring from space,” Sci. Rep., vol. 8, no. 1, p. 11498, Dec. 2018.
[6] C. Kuenzer and S. Dech, “Thermal Infrared Remote Sensing: Sensors, Methods, Applications.” Elsevier, 2015.
Synthetic Aperture Radar (SAR) represents a powerful tool for high-resolution, all-weather, day-and-night observation of the Earth surface. It is therefore not surprising that, in recent decades, a large number of SAR systems have been developed, mounted on board spaceborne, aerial or ground-based platforms and exploiting a wide range of frequencies (mostly from X- to P-band) of the microwave spectrum. Moreover, a number of techniques based on SAR data have also been developed to monitor the natural and anthropic phenomena that may occur on Earth [1]. Among these techniques, SAR interferometry (InSAR) and Differential InSAR (DInSAR) represent well-established tools for generating Digital Elevation Models or ground deformation maps and time series (associated, for instance, with earthquakes, volcanic events, landslides, urban subsidence phenomena, etc. [2]-[7]). In addition, SAR polarimetry (PolSAR) is widely used to retrieve information on the physical (e.g., soil moisture) and geometrical (e.g., soil roughness) parameters characterizing the targets located in the observed scenes [8].
In this work, we show the imaging capabilities of the new Italian Multiband Interferometric and Polarimetric SAR (MIPS) airborne system, which is based on Frequency Modulated Continuous Wave (FMCW) technology and is able to operate at X- and L-band.
In particular, the X-band sensor exploits a three-antenna (one transmitter and two receivers) radio frequency layer, providing single-pass interferometric SAR capabilities. The L-band sensor, instead, is equipped with four antennas (two transmitters and two receivers), in order to acquire SAR data in a full-polarization mode.
The overall SAR system is conceived to represent a versatile tool for monitoring the Italian territory, with special regard to the ground deformation phenomena. Moreover, it is also intended to further assess and cross-validate the capabilities of L-band sensors, in view of the existing (e.g. SAOCOM 1-A and SAOCOM 1-B, [9]) and forthcoming (e.g. ROSE-L, [10], and NISAR, [11]) space-borne SAR missions operating within this portion of the electromagnetic spectrum.
A complete description of the MIPS system will be provided, together with an extensive presentation of the imaging results obtained from the first collected datasets, with special emphasis on the analysis of urban areas carried out within the ASI-funded DInSAR-3M project.
References
[1] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, K. P. Papathanassiou, “A tutorial on Synthetic Aperture Radar”, IEEE Geoscience and Remote Sensing Magazine, pp. 6-43, March 2013.
[2] Bamler, R., Hartl, P., 1998. Synthetic Aperture Radar Interferometry. Inverse problems, 14(4), R1.
[3] P. Berardino, G. Fornaro, R. Lanari and E. Sansosti, “A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms”, IEEE Trans. Geosci. Remote Sens., vol. 40, no. 11, pp. 2375-2383, Nov. 2002.
[4] R. Lanari, M. Bonano, F. Casu, C. De Luca, M. Manunta, M. Manzo, G. Onorato, I. Zinno, “Automatic Generation of Sentinel-1 Continental Scale DInSAR Deformation Time Series through an Extended P-SBAS Processing Pipeline in a Cloud Computing Environment”, Remote Sensing, 2020, 12, 2961.
[5] S. Perna, G. Alberti, P. Berardino, L. Bruzzone, D. Califano, I. Catapano, L. Ciofaniello, E. Donini, C. Esposito, C. Facchinetti, R. Formaro, G. Gennarelli, C. Gerekos, R. Lanari, F. Longo, G. Ludeno, M. Mariotti d’Alessandro, A. Natale, C. Noviello, G. Palmese, C. Papa, G. Pica, F. Rocca, G. Salzillo, F. Soldovieri, S. Tebaldini, S. Thakur, “The ASI Integrated Sounder-SAR System Operating in the UHF-VHF Bands: First Results of the 2018 Helicopter-Borne Morocco Desert Campaign”, Remote Sensing, 2019, 11(16), 1845.
[6] C. Esposito, A. Natale, G. Palmese, P. Berardino, R. Lanari, S. Perna, “On the Capabilities of the Italian Airborne FMCW AXIS InSAR System”, Remote Sens. 2020, 12, 539.
[7] Tarchi, D., Casagli, N., Fanti, R., Leva, D., Luzi, G., Pasuto, A., Pieraccini, M., Silvano, S., 2003. Landslide monitoring by using ground-based SAR interferometry: An example of application to the Tessina landslide in Italy. Eng. Geol., 68, 15-30.
[8] Lee, J., Pottier, E., 2009. Polarimetric Radar Imaging: From Basics to Applications. CRC Press, New York.
[9] https://saocom.veng.com.ar/en/
[10] https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Contract_signed_for_new_Copernicus_ROSE-L_mission
[11] https://nisar.jpl.nasa.gov/
Urban landscapes are rapidly changing due to the increasing tendency to move from rural areas to cities. This constant change of the urban scenario raises the need for precise, accurate, and up-to-date Land Cover (LC) maps. Urban footprint masks can be particularly relevant in small- and medium-sized cities, where information on building presence and/or density is often completely missing.
Moreover, such maps need to be easy to update, say every few months. This condition is particularly true for developing countries, where more and more people move from rural to urban areas to escape poverty.
We argue that a fast, simple, and reliable urban footprint classification method that relies on easy-to-use, free and open-source data is still missing from the literature.
In this work, we developed a fast and easy software pipeline that exploits both Sentinel-1 (SAR) and Sentinel-2 (optical) data to generate urban LC, which can be regularly updated.
The method relies on a single Multi-Spectral (MS) image from Sentinel-2: from this image, clusters of similar pixels are derived using a simple super-pixel segmentation.
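A minimal sketch of such a super-pixel step is shown below using scikit-image's SLIC on a placeholder array standing in for a Sentinel-2 subset; the parameter values are illustrative and not those of the pipeline described here.

```python
import numpy as np
from skimage.segmentation import slic

# Placeholder standing in for a Sentinel-2 subset (rows, cols, bands) scaled to [0, 1];
# in practice the bands would be read with e.g. rasterio and stacked.
rng = np.random.default_rng(2)
s2_image = rng.random((512, 512, 4)).astype("float32")

# SLIC super-pixels: spatially compact clusters of spectrally similar pixels.
segments = slic(s2_image, n_segments=2000, compactness=0.1, channel_axis=-1, start_label=1)
print(segments.max(), "super-pixels")  # each segment later receives a single urban/non-urban label
```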
Notice that the optical image is only used for the segmentation, not for the classification itself. In fact, differing illumination conditions (sun azimuth and elevation above the horizon) can complicate classification based on optical data.
Instead, each pixel is classified as urban/non-urban using a small stack (< 2 months) of Sentinel-1 SAR images.
The classification exploits the unique backscatter signature of artificial targets. A short-revisit SAR such as Sentinel-1 is ideal for this purpose thanks to its day-and-night operability, constant revisit time and illumination, and capability to operate in all weather conditions.
With these conditions, it is relatively straightforward to define a robust classifier that exploits amplitude, coherence, and polarimetry of the backscattered signal.
In this work, we also propose using differential entropy to characterize a target's temporal behavior. Such a parameter is easy to compute from the multitemporal coherence matrices and avoids complicated model fitting of the temporal coherence level.
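As an illustration of what such a computation may look like, the sketch below evaluates the differential entropy of a zero-mean circular complex Gaussian with a given multitemporal coherence matrix (a common formulation, not necessarily the exact one used by the authors); the two toy matrices stand in for a stable, urban-like target and a decorrelating, vegetated one.

```python
import numpy as np

def differential_entropy(coherence):
    """Differential entropy (nats) of a zero-mean circular complex Gaussian
    with the given (N x N) coherence/covariance matrix."""
    n = coherence.shape[0]
    _, logdet = np.linalg.slogdet(coherence)
    return n * np.log(np.pi * np.e) + logdet   # lower entropy -> more stable (coherent) target

# Toy multitemporal coherence matrices for a stable target and a decorrelating one.
n = 6
stable = np.full((n, n), 0.9 + 0j)
np.fill_diagonal(stable, 1.0)
vegetated = 0.2 * np.ones((n, n), dtype=complex)
np.fill_diagonal(vegetated, 1.0)
print(differential_entropy(stable), differential_entropy(vegetated))
```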
The use of Sentinel-1 also allows for the generation of very wide (possibly even nation-wide) LC maps thanks to its wide imaged swath (in the order of 250 km for the Interferometric Wide swath mode).
Moreover, both ascending and descending datasets can be used to overcome intrinsic distortions of radar measurements.
All the SAR-derived features from both ascending and descending tracks are given as input to a Machine Learning (ML) model trained to separate urban areas from the background, such as open fields, gardens, forests, etc.
The entire pipeline is compared with a state-of-the-art urban footprint extraction procedure, which requires long sensing periods. The method is validated on two test sites, both in Portugal, with different LC and topography conditions. The rate of agreement is generally very high, with an average accuracy of over 90%.
This summer, Wallonia witnessed a major flood event with a high level of casualties and damage. During this event, various public service partners worked in close collaboration to provide relevant maps, such as the extent of the flooded area and the existing damage in these areas, to decision makers. The mapping team made use of various data such as Earth observation (satellite and aerial images), but also a few topographical measurements and numerous ancillary field observations collected through a large survey to map the flooded area (Figure). Flood inundation mapping helps to better understand its impact on dynamic coupled human and natural systems (Jung et al. 2011). Flood height and damage have a close relationship (Xiao et al. 2012).
The Copernicus Emergency Management Service was activated by the Public Service on 14 July. Aerial data were acquired by several platforms (plane, helicopter) on that day and on the days following the disaster. After extensive coordination, teams gathered data in the field to calibrate and create a model of the spatial extent of the flood in the most devastated cities. This operational team counted more than 150 people and collected more than 30,000 points. Ground-sourced data were integrated with aerial orthophotos and a lidar digital elevation model in different mathematical scenarios to model the flood height and map the 3D infrastructures affected by the inundation. The comparison of these maps with the potential area at risk is needed to fill some gaps, but also for future decision-making.
The frequency of flood events is increasing globally and may be attributed to several factors such as a changing climate and anthropogenic activities (Rather 2021). Effective regional resilience should tackle these recurring events and take decisions on adaptation and mitigation based on facts coming from previous experience. Since the event occurred in summer 2021, the services are still evaluating the tools, data and information used, in order to improve the reaction in potential future events. While synthetic aperture radar (SAR) Earth Observation data provided a rapid map of the potential flood areas, they have shortcomings in valleys and urban areas (Landuyt et al. 2021). Unfortunately, the regions where the damage is highest are the cities with high population density along the river.
References:
Jung H. C., et al., Analysis of the relationship between flooding area and water height in the Logone floodplain, Physics and Chemistry of the Earth, Parts A/B/C, Vol 36, Issues 7-8, 2011, pp. 232-240, https://doi.org/10.1016/j.pce.2011.01.010.
Xiao, Shi Yun, et al. “Effects of Water Height and Floodgate Distance on Flow Pressure of Flood.” Advanced Materials Research, vol. 433–440, Trans Tech Publications, Ltd., Jan. 2012, pp. 6195–6204. Crossref, doi:10.4028/www.scientific.net/amr.433-440.6195
Rather, N. A, Flood water Harvesting, chapter in Flood Handbook, 2021. https://www.routledge.com/Flood-Handbook-Principles-and-Applications/Eslamian-Eslamian/p/book/9781138584938
Landuyt L., F. M. B. Van Coillie, B. Vogels, J. Dewelde and N. E. C. Verhoest, "Towards Operational Flood Monitoring in Flanders Using Sentinel-1," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 11004-11018, 2021, doi: 10.1109/JSTARS.2021.3121992.
Ground settlement and the associated deformation of existing infrastructure is a major risk in urban development projects. The construction of deep excavations in soft soils can affect neighbouring structures. During construction works, excavation-induced displacements are generally monitored using precise levelling and total stations. In the past years, there has been growing interest in using Interferometric Synthetic Aperture Radar (InSAR) to quantify urban ground displacements. While InSAR can determine the area and magnitude of subsidence well, very little attention has been paid to exploring InSAR measurements to assess the response of buildings adjacent to geotechnical works (e.g., deep excavations, tunnelling). Specifically, few studies have investigated the ability to use InSAR to derive widely accepted building deformation parameters, including angular distortion and horizontal strain. On the other hand, geotechnical engineers generally use traditional monitoring data to assess the behaviour of existing structures adjacent to excavations, but often only spatially and temporally limited monitoring data exist to reliably evaluate the governing mechanisms.
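For orientation, angular distortion and horizontal strain can be approximated from a set of displacement measurements along a building as the settlement difference and the horizontal-displacement change over the distance between points; the sketch below uses hypothetical values purely to illustrate these definitions, not data from this study.

```python
import numpy as np

def angular_distortion(settlements, positions):
    """Settlement difference over distance between consecutive monitoring points (rad, simplified)."""
    return np.diff(settlements) / np.diff(positions)

def horizontal_strain(horiz_disp, positions):
    """Change in horizontal displacement over distance (tension positive)."""
    return np.diff(horiz_disp) / np.diff(positions)

# Hypothetical points along a building facade (positions in m, displacements in mm).
x = np.array([0.0, 8.0, 16.0, 24.0])
dz = np.array([-2.0, -6.0, -14.0, -18.0])   # vertical movement (settlement negative)
dx = np.array([0.0, 0.5, 1.4, 1.8])         # horizontal movement towards the excavation

beta = angular_distortion(dz * 1e-3, x)     # mm -> m before dividing by distance
eps_h = horizontal_strain(dx * 1e-3, x)
print(beta, eps_h)  # compare against damage-category charts (e.g. Boscardin & Cording type)
```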
This contribution follows a case study approach to explore the value of InSAR to quantify the effects of excavation-induced settlements on buildings. Conventional monitoring data of structures adjacent to excavation works in Oslo, Norway, are compared to InSAR measurements from satellites with different spatial resolutions and different processing methods. Differences between displacements obtained from these different InSAR monitoring datasets and their practical implementation are discussed. Building deformations and respective damage categories are then quantified using a methodology that combines both InSAR and inclinometer measurements. As expected, it was found that high resolution InSAR measurements are more reliable when assessing the building response to excavation works.
Our results show that the interpretation of excavation-induced building displacements based solely on InSAR still needs to be carried out with extreme caution. This is particularly true for low-resolution InSAR data. The study further shows that considering theoretical displacement profiles caused by deep excavations, in combination with measured wall displacements, can result in building response assessments that are in line with precise levelling data. Specifically, building damage categories based on TerraSAR-X SBAS measurements were identical to those from ground-based monitoring. Our findings suggest that the quality of the InSAR measurements can be refined by accounting for phase unwrapping and by carrying out a decomposition into vertical and horizontal displacement components.
In 2020, 56% of the global population lived in urban areas and in Europe this figure is as high as 75%, with a steadily increasing trend (The World Bank). According to the European Environmental Agency’s (EEA) Air Quality Report, 20,600 premature deaths in Europe in 2018 were attributable to air pollution, with the highest proportion of people exposed to levels above the World Health Organization’s Air Quality Guidelines (WHO AQG) living in cities (European Environmental Agency 2020). This is just one of multiple, mutually interacting phenomena constituting the complex and constantly developing urban ecosystem. To develop a better understanding of the related processes and interactions, enhanced tools are needed to enable planners and decision makers to effectively reduce the exposure of the urban population to air pollution.
The causal relationship between air pollution and the exacerbation of health outcomes, ranging from acute respiratory and cardiovascular diseases to chronic illnesses, has been comprehensively documented (Beelen et al. 2014, Di et al. 2017, Liu et al. 2021). Subsequently, a quantification of the vulnerability to health impairments of different target groups in the form of concentration-response functions has been provided by key health authorities, most recently by the WHO AQG, published in 2021.
Vulnerability alone is not sufficient for a comprehensive quantification of risk. In fact, an accurate risk assessment must include three essential components: the assessment of the hazard (i.e., the air pollution concentration), the vulnerability of the individual (i.e., the dose-response relationship), and the probability of exposure (United Nations Office for Disaster Risk Reduction 2017). This task can be easily accomplished at the individual level using wearables, microsensors, or mobile data (Zou et al. 2009, Dewulf et al. 2016). Despite the valuable results of studies implementing these systems, they can only provide a temporally and spatially limited snapshot of the complex reality.
Hence, the challenge is to fill this data and monitoring gap. To do so, a method is necessary that allows for exposure estimation at local scale and, consequently, a quantification of the health risk from air pollution that is reliable, easily scalable and applicable to different areas of interest, possibly worldwide. In our study, we propose a top-down approach, exploiting the wealth of environmental data from remote sensing, in-situ measurements and modelling to assess the increase in health risk due to air pollution exposure in the urban environment. The Copernicus Atmosphere Monitoring Service (CAMS) multi-year reanalysis data (Marécal et al. 2015) are used to assess urban air quality. The exposure estimation is obtained by exploiting the recent World Settlement Footprint (WSF) 2019, a global settlement extent mask derived at 10 m spatial resolution by jointly exploiting multitemporal Sentinel-1 and Sentinel-2 imagery (Marconcini et al. 2021). The percent settlement area derived at 100 m resolution from the WSF2019 is used as a proxy for the probability of human presence in the different city compartments during specific time frames. The results are risk maps revealing the increase in health risk due to air pollution in the different city compartments and its evolution over the years.
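Conceptually, such a risk map combines the three components named above (hazard, vulnerability, exposure); the minimal sketch below does this with synthetic arrays and an assumed log-linear concentration-response slope, purely to illustrate the structure of the computation rather than the actual CAMS/WSF2019 processing.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins on a 100 m grid: annual-mean PM2.5 (ug/m3, e.g. resampled from CAMS)
# and percent settlement area from a WSF2019-like layer as an exposure proxy.
pm25 = rng.uniform(5, 35, size=(50, 50))
settlement_frac = rng.uniform(0, 1, size=(50, 50))

# Vulnerability as a log-linear concentration-response function (illustrative values only).
baseline, rr_per_10ug = 5.0, 1.08
relative_risk = rr_per_10ug ** ((pm25 - baseline).clip(min=0) / 10.0)

# Risk map: excess relative risk weighted by the probability of human presence.
risk = (relative_risk - 1.0) * settlement_frac
print(float(risk.max()))
```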
Quantifying the total burden is also fundamental for policy makers. For this purpose, the novel WSF2019 population layer is employed, which estimates the number of inhabitants for each 10 m pixel marked as settlement in the WSF2019. Specifically, this is obtained by proportionally redistributing population figures available at the finest possible administrative level by means of the local imperviousness (i.e., a reliable proxy for built-up density), as well as land-use information gathered from OpenStreetMap. In this way, an additional metric - the health burden index - that considers the number of people affected by different levels of risk can be calculated.
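The proportional (dasymetric) redistribution step can be illustrated with a toy example: the administrative total is spread over settlement pixels in proportion to a weight layer such as imperviousness. The numbers below are invented for demonstration.

```python
import numpy as np

# Toy admin unit: population known only as a total; redistribute it over settlement pixels
# proportionally to a weight layer (e.g. imperviousness as a built-up density proxy).
admin_population = 12_500
settlement = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1]])            # settlement mask
imperviousness = np.array([[80, 60, 0], [40, 0, 0], [20, 70, 90]])  # percent, illustrative

weights = settlement * imperviousness
pop_per_pixel = admin_population * weights / weights.sum()          # non-settlement pixels get 0
print(pop_per_pixel.round(0))
```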
The approach presented here aims at providing policy makers with high-quality information to support sustainable urban development. The final goal is to increase the resilience of urban areas against the negative effects of air pollution. The quality of the information results from a trade-off between the large amount of input data and the accuracy of the metric provided, with the aim of making this method easily applicable to different geographical contexts. Our work can potentially support research on the causal relationships between air pollution and health outcomes by exploiting an ecological design. Future studies should focus on validating the results using collected health data.
People in lower income countries are six times more likely to be affected by disasters compared to those in higher income countries [1]. Quantifying the spatio-temporal distribution of buildings is vital when considering exposure and modeling the impact of disaster scenarios. However, important factors such as building density, age, and height are often unavailable or outdated in city-wide databases. Similarly, existing built-up area classifications assign a timestamp to the urbanisation date of a particular pixel or area. Additionally, redevelopment within these areas is not captured, yet is prevalent as cities develop vertically. Earth Observation satellite data offers capabilities to semi-automatically quantify horizontal and vertical city expansion, with significant spatial coverage and cost-saving advantages compared to airborne light detection and ranging (LiDAR) or ground-based surveys.
We use satellite data from 1979–2021 to quantify spatio-temporal changes in built-up area and building heights for Bishkek, Kyrgyzstan, which experiences high seismic risk and lacks up-to-date exposure data. We develop an object-based classification methodology to automatically extract built-up areas from a 1979 KH-9 Hexagon satellite image. Subsequent redevelopment within built-up areas since 1979 was quantified through histogram matching and differencing using a 2021 Sentinel-2 satellite image. We then apply a deep learning framework to derive individual building footprints from high-resolution imagery including Pleiades and WorldView-2. Finally, tri-stereo Pleiades and bi-stereo WorldView-2 imagery are used to produce 1.5 m resolution digital elevation models and extract individual building heights, which are validated using ICESat-2 altimetry data.
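To give a flavour of the histogram matching and differencing step used to flag redevelopment within the 1979 built-up extent, a small sketch with scikit-image is included below; the arrays and the change threshold are placeholders, not the actual KH-9/Sentinel-2 processing.

```python
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(4)

# Placeholder single-band images: an older panchromatic scene and a recent optical band,
# co-registered to the same grid (in practice read and resampled with rasterio/GDAL).
img_1979 = rng.normal(0.3, 0.05, (256, 256)).clip(0, 1)
img_2021 = rng.normal(0.4, 0.08, (256, 256)).clip(0, 1)

# Match the radiometry of the recent image to the older reference before differencing.
img_2021_matched = match_histograms(img_2021, img_1979)
change = np.abs(img_2021_matched - img_1979)

# Flag likely redevelopment where the absolute change exceeds a scene-derived threshold.
redevelopment = change > (change.mean() + 2 * change.std())
print(int(redevelopment.sum()), "changed pixels")
```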
We evaluate and assess the accuracy of our built-up area mapping alongside pre-existing products, including the World Settlement Footprint (WSF) and the Global Human Settlement Layer (GHSL). We find both commonality and notable differences between methodological approaches, highlighting the benefit of site-specific accuracy assessments. Our building extraction workflow achieved accuracy scores (F1 = ~0.7) comparable to other studies and extracted over 400,000 buildings in and around Bishkek. We also evaluate a dataset of building heights derived from 1.5 m resolution DEMs and validated using ICESat-2 altimetry data. Our analysis reveals a notable increase in buildings over the last decade that requires consideration in loss and damage calculations when considering future earthquakes on the region’s active faults. We identify areas of the city that have undergone redevelopment and urban greening, which presents an opportunity for targeted updates of building inventories rather than repeating city-wide mapping. Our methods using optical and altimetric earth observation data could be applied to other cities lacking up-to-date exposure datasets.
References:
1. Wallemacq, P.; UNISDR; CRED. Economic Losses, Poverty and Disasters 1998-2017; 2018; doi:10.13140/RG.2.2.35610.08643.
Informal or unplanned urbanisation often occurs in hazardous areas, which increases the socioeconomic inequalities of disaster risk [1]. Populations are becoming increasingly concentrated in urban areas; therefore, risk-informed planning is required to reduce disaster risk, which is increasingly exacerbated by climate breakdown [2]. Nature-based solutions (NbS) offer benefits to a wide range of societal and environmental issues and exist within a framework of Ecosystem-based Disaster Risk Reduction (Eco-DRR) [3]. The provision of greenspace within a city is a key element of NbS and can be quantified and monitored with increasing spatio-temporal resolution using earth observation data. Greenspace can function to both mitigate hazards, for example through flood water attenuation and slope stabilisation, and also serve as an emergency refuge safe space during spatially extensive and damaging disasters such as earthquakes.
We use satellite data and projections of future urban expansion for the city of Quito, capital of Ecuador (population over 2 million), to quantify intersections between the city and hazards from earthquakes, volcanoes, landslides, and floods. We first analyse the historical expansion of the city and the changing geohazard intersections. We then use high-resolution Pleiades stereo satellite data to produce a 2 m resolution digital elevation model and a 0.5 m resolution pansharpened multi-spectral image. We develop a methodology using these Pleiades data to classify greenspaces within the city that could act as safe spaces in the event of a disaster and contribute to mitigating future disaster risk. Specifically, we classify greenspace areas ≥2000 m² that feature low slope (≤4°) and no tall vegetation (>2 m), to provide areas suitable for tented accommodation. We analyse greenspace provision and distribution alongside population, socio-economic, and hazard datasets.
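The greenspace suitability criteria listed above (minimum area, low slope, no tall vegetation) translate directly into a raster filtering step; a hedged sketch with synthetic placeholder layers follows, illustrating the logic rather than the exact implementation.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
pixel_size = 2.0                                    # m, matching a 2 m DEM grid

# Placeholder layers: vegetation mask (e.g. from an NDVI threshold), slope (deg), canopy height (m).
green = rng.random((500, 500)) > 0.6
slope = rng.uniform(0, 20, (500, 500))
veg_height = rng.uniform(0, 10, (500, 500))

# Candidate pixels: green, low slope (<= 4 deg) and no tall vegetation (> 2 m).
candidate = green & (slope <= 4) & (veg_height <= 2)

# Keep only connected patches of at least 2000 m^2 (i.e. 500 pixels at 2 m resolution).
labels, n = ndimage.label(candidate)
sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
keep = np.isin(labels, np.flatnonzero(sizes * pixel_size**2 >= 2000) + 1)
print(int(keep.sum()), "DRR greenspace pixels")
```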
We find that whilst Quito’s historical growth was primarily on flatter former agricultural land, future urbanisation is likely to increasingly intersect with areas of higher landslide susceptibility. We identify over 18 km² of greenspaces that could contribute to disaster risk reduction (DRR) in Quito. However, a gap exists between the provision of DRR and municipality-designated greenspace, since there was only 7% overlap between our DRR greenspace classification and designated greenspaces. Similarly, only 10% (1.7 km²) of the municipality-designated ‘safe space’ for use following an earthquake was DRR-suitable in our classification. We find a disparity in access to greenspaces across socio-economic groups, though greenspace accessibility was high overall, with 88% (2.1 million) of Quito’s population within 800 m of a DRR greenspace. Our city-wide analysis of greenspace is adaptable to other cities and could form the foundation for discussions on incorporating DRR-orientated greenspaces within current and future urban areas that serve multiple societal and environmental benefits.
References:
1. Baker, J.L. Climate Change, Disaster Risk, and the Urban Poor; 2012; 10.1596/978-0-8213-8845-7.
2. De Sherbinin, A.; Schiller, A.; Pulsipher, A. The vulnerability of global cities to climate hazards. Environment and Urbanization 2007, 19, 39-64, doi:10.1177/0956247807076725.
3. Estrella, M.; Saalismaa, N. Ecosystem-based disaster risk reduction (Eco-DRR): An overview; United Nations University Press: 2013; Vol. 26.
Over the past decades, the world has experienced an accelerated increase in the number of urban areas due to population growth. According to the United Nations World Population Prospects 2019 report, the world population reached 7.7 billion by mid-2019, having added one billion people since 2007 and two billion since 1994. In addition, current projections indicate that the global population could increase to around 8.5 billion in 2030, 9.7 billion in 2050, and 10.9 billion in 2100. In the case of Latin America and the Caribbean, the urban population increased by about 240% between 1970 and 2000. Some statistics indicate that currently more than 80% of the population in this region lives in urban areas, and this figure will probably increase to about 85% by 2040. However, most of the cities created in recent decades are characterized by low population density and a lack of planning (United Nations, 2019). According to the Latin American Faculty of Social Sciences Ecuador (FLACSO), 3.4 million inhabitants, representing about 2% of the total population of Latin America, lived in Ecuador in 1950. Currently, Ecuador is in eighth place with approximately 17.4 million inhabitants (INEC, 2020), which indicates a considerable increase in its population since 1950. This reality creates many environmental, social, and economic challenges to overcome. Therefore, the goal of this thesis is to analyze the spatio-temporal patterns of urban expansion among the main biomes in Ecuador using LULC data from 1990-2018. This allows defining the urban areas and understanding their behavior, needs, and challenges during the urbanization process. Remote sensing data, GIS techniques, and statistical analysis were the essential components to achieve this objective. One of the most useful methods to determine urban expansion is change detection, especially using LULC data, which has been changing rapidly due to anthropogenic activities such as urbanization and industrialization (Samal & Gedam, 2015). LULC data generated from satellite images (Landsat collection), together with population data from censuses and projections, play an important role in urban studies. Once these data were analyzed, it was determined that the urban areas in Ecuador show a constant growth trend that has slowed down in recent years. Due to factors such as the agricultural boom, oil exploitation and internal migration, the Coast and the Amazon have experienced an accelerated urbanization process since the 1950s. Cities such as Guayaquil, Quito, and Nueva Loja had a higher level of urbanization than the other parishes on the Coast, in the Highlands, and in the Amazon, respectively.
This study assessed the effect of urban green infrastructure (UGI) on land surface temperature (LST) in Phnom Penh City (Cambodia) from 2016 to 2020. Urban green infrastructure is a comparatively modern concept of notable importance, especially when considering climate change effects on urban agglomerations in South-East Asia. The study also observes the changes in LST distributions over the study period (2016-2020). Thanks to repeatable, large-scale observations of the Earth’s surface, LST records provide an excellent data source for characterizing the urban thermal environment, monitoring the urban heat island over time and linking it to urban green and blue infrastructure. Understanding the geospatial interconnection between UGI and LST can help in planning urban greening and counteracting the effects of urban heat islands. This study used Landsat 8 data, whose thermal bands have 100-meter resolution. The study estimated LST using radiance, brightness temperature and emissivity. Further statistical analysis was then carried out comparing areas with urban green infrastructure and areas with other surfaces. The empirical study found that there is a gradual increase in the overall LST over time in Phnom Penh, especially in the south-west part of the city. It was evident that the LST in the vicinity of UGIs increased less than over other surfaces, a clear indication that UGI has a relative cooling effect compared to other surfaces. The findings of this study will be used further to understand the causation behind land surface temperature change in the area, triangulating LST, different relevant indices and land-use changes. The study can act as a baseline for a continuous spatio-thermal monitoring system for the city, and this approach can be adapted in other cities. Monitoring the thermal condition of urban areas over space and time supports the understanding of the causal role of urban green infrastructure in the spatial distribution of urban heat.
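The radiance-to-brightness-temperature-to-LST chain mentioned above typically follows the single-channel approach sketched below for Landsat 8 Band 10; the rescaling and thermal constants shown are the usual metadata defaults and the NDVI-based emissivity is one common choice, so treat this as an illustrative outline rather than the exact processing used in the study.

```python
import numpy as np

def landsat8_lst(dn_band10, ndvi, ml=3.342e-4, al=0.1, k1=774.8853, k2=1321.0789):
    """Single-channel LST estimate from Landsat 8 Band 10 digital numbers.
    ml/al and k1/k2 are the Band 10 radiance rescaling and thermal constants
    from the scene metadata (MTL file); values here are the usual defaults."""
    radiance = ml * dn_band10 + al                       # TOA spectral radiance
    bt = k2 / np.log(k1 / radiance + 1.0)                # brightness temperature (K)
    pv = ((ndvi - 0.2) / (0.5 - 0.2)).clip(0, 1) ** 2    # proportion of vegetation
    emissivity = 0.986 + 0.004 * pv                      # simple NDVI-threshold emissivity
    lam, rho = 10.895e-6, 1.438e-2                       # Band 10 wavelength (m), h*c/k (m K)
    return bt / (1.0 + (lam * bt / rho) * np.log(emissivity))

# Toy inputs standing in for a Band 10 DN array and an NDVI array.
lst = landsat8_lst(dn_band10=np.array([25000.0, 30000.0]), ndvi=np.array([0.15, 0.6]))
print(lst - 273.15)  # degrees Celsius
```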
Football is by far the most important sport in the world. Every weekend, tens of thousands of visitors gather in the stadiums of our cities and millions follow the games live on TV. The quality of the games is related not only to the quality of the teams, but often also to the condition of the pitch. The surface of football pitches has changed fundamentally in the last three decades. Where there used to be ash pitches or poorly maintained grass pitches, there are now ultra-modern low-maintenance artificial turf pitches. In this analysis, we have classified the surfaces of all (+220,000) urban football pitches in Europe and now understand how their characteristics have changed over the last 32 years.
Ground Truth Data
All European football pitches stored in the crowdsourcing platform OpenStreetMap served as ground truth data. About 10% of the 220,000 pitches contain surface information. The surface types could be grouped into the following seven classes: natural grass, artificial grass, asphalt, sand, red ash, earth and tartan.
Satellite Data & ML Method
For the historical view of football pitch surfaces, we performed the surface classification every 4 years, always in sync with the World Cup (the FIFA World Cup is held every 4 years). For the analysis of today's football pitches, we used Sentinel-2 data. The long time series of Landsat data was used for the historical analysis. The entire analysis workflow, including a) the preprocessing, b) the Machine Learning algorithm training, c) the classification and d) the post-processing, was implemented in Google Earth Engine. As a machine learning model, the XGBoost algorithm was used.
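Outside Earth Engine, the training step can be mimicked with the xgboost Python package on a per-pitch feature table; the snippet below uses random placeholder features and labels only to show the shape of such a multi-class training run, not the actual workflow or data.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Placeholder training table: per-pitch spectral features (e.g. median band values and indices
# over a season) and the seven OSM-derived surface classes encoded as 0-6.
X = rng.random((5000, 12)).astype("float32")
y = rng.integers(0, 7, 5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1, objective="multi:softprob")
model.fit(X_train, y_train)
print("accuracy:", (model.predict(X_test) == y_test).mean())
```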
Results & Outlook
The accuracy of the analysis is around 89%. The algorithm showed problems distinguishing asphalt pitches from earth pitches and dry grass
pitches from sand pitches. The artificial turf pitches were always classified with high confidence and very accurately (96%).
Overall, the surfaces of European football pitches have changed fundamentally over the last 32 years. Today, artificial turf fields undoubtedly dominate (60% of all pitches) where grass fields used to be. Artificial turf dominates in the European south, with its long dry seasons, and in the European north, with its shorter, humid growing seasons. The advantage is that these pitches demand low maintenance and are playable all year round. In the east of Europe, natural grass pitches still predominate. Within bigger urban areas, however, artificial turf is becoming more common.
The ratio of rural/urban people killed and injured in earthquakes
Max Wyss
International Centre for Earth Simulation Foundation, Geneva, Switzerland
Large earthquakes with rupture length > 100 km kill the greatest number of people. Therefore, studying the possibility of their occurrence, and preparing for them, should receive top priority. Normally, the landscape that a long rupture may traverse is peppered with small settlements, but few large cities. Figure 1 shows an example of this.
Figure 1: Map showing settlements (dots) with calculated damage due to the Wenchuan earthquake of 2008, M7.9 (epicenter marked by a star). The rupture (black line) traverses an area highly populated by small settlements, with few cities (larger dots). No truly large city is located within 100 km of the rupture. Fatalities are expected only in settlements colored red.
In a sample of 53 large earthquakes, the estimated portion of rural fatalities was more than 90% (Figure 2). The resistance of buildings to strong shaking is lower in rural than in urban environments. This means that even if large cities are present, the less affluent people in the countryside, being more vulnerable, experience more losses proportionally than city dwellers (Zuniga et al., 2015).
The urban earthquakes are seen at the right-hand side of Figure 2. They are the exception. An example is the M6.6 earthquake of 2003 that destroyed Bam, Iran. The fault ran NS through the city, which is surrounded mostly by desert, with few villages. Another similar example is the M6.3 Latur, India, earthquake, where the hypocenter was beneath the city, with few other settlements near it. The most urban earthquake was the M6.9 Kobe earthquake of 1995, whose rupture ran through an industrial area of Japan, where practically no villages exist.
Figure 2: The % of rural fatalities calculated for 53 large earthquakes worldwide ordered as most rural (left) to most urban (right). The red line is drawn at 50% separating rural (above that line) from urban (below it). (From Wyss, 2018)
Two common properties of these particular urban earthquakes are that they had relatively small magnitudes, that is, short ruptures, and that they affected almost exclusively population centers. This is not usual, as can be seen in Figure 2 for large earthquakes. However, in relatively densely populated areas with short fault ruptures, urban earthquakes may be more frequent than rural ones.
Colombia is an example of a country in which the combination of population density and rupture length leads to predominantly urban earthquakes (Wyss et al., 2021). For 8 historic ruptures in Colombia, the likely fatalities with the current population and building stock were calculated using the program QLARM (Quick Loss Assessment for Response and Mitigation). It was found that the proportions of rural and urban fatalities were likely 21% and 79%, respectively (Wyss et al., 2021).
Figure 3: Ratio of rural to theoretically estimated urban fatalities. Left: the case of 53 worldwide large and very large earthquakes. Right: the case of a single country, Colombia. The contrast means that earthquake mitigation decisions concerning a focus on urban or rural populations have to be made separately for different countries and regions.
The ratio of rural to urban fatalities in earthquakes will vary by country and region, based on these investigations. Therefore, a government has to assess this ratio for its territory, in order to take a sensible decision on whether to focus earthquake mitigation in rural or urban environments. One fact to consider is that very large earthquakes tend to kill and injure mostly rural people.
Satellite observations could help in several ways to improve alerts for real-time responders to earthquakes, and to mitigate earthquake losses. Firstly, InSAR maps of ruptures obtained immediately after an earthquake could lead to far more accurate early loss estimates, because at first only a point source is known for the earthquake wave radiation. Wyss et al. (2004) have shown that in the Bam M6.6 earthquake, the initial point source was located several kilometers to the west of the true epicenter, leading to an underestimate of the fatalities by more than two orders of magnitude. With an InSAR map of the rupture, the effects of radiation from a line source could have been calculated and provided to medical teams in the devastated area. Secondly, satellite images of buildings could be used as a basis for estimating properties of the built environment.
Clearly, images from space can significantly help with improving estimates of looming earthquake disasters and especially they can greatly improve earthquake disaster response.
References
Wyss, M. (2018), Rural Populations suffer most in great earthquakes, Seismological Research Letters, 89(6), 1991-1997, doi:10.1785/0220180236.
Wyss, M., P. Rosset, and L. Triveno (2021), The ratio of rural/urban people killed in earthquakes needs to be assessed for countries separately, the example of Colombia, Seismological Research Letters, 92(2A), 1036-1051, doi:10.1785/0220200252.
Wyss, M., R. Wang, J. Zschau, and Y. Xia (2004), Earthquake loss estimates in real-time begin to assist rescue teams, worldwide, EOS, 85(52), 567.
Zúñiga, F. R., J. Merlo, and M. Wyss (2015), On the vulnerability of the indigenous and low income population of Mexico to natural hazards. A case study: The state of Guerrero, in Geoethics: Ethical challenges and case studies in Earth sciences, edited by M. Wyss and S. Peppoloni, pp. 381-391, Elsevier, Waltham, Massachusetts, doi:10.1016/b978-0-12-799935-7.00031-9.
More than 4 billion people live in urban areas, and at least a quarter of them live in slums, informal settlements or inadequate housing, lacking basic services and infrastructure. Spatially targeted, data-driven policies and informed decisions in urban planning may help improve such situations and increase the resilience of the population. Advantages of EO data, such as high temporal availability, objectivity and the increasing democratization of data, are recognized as key benefits in supporting the achievement of SDG 11 Sustainable Cities and Communities and related goals dealing with access to services, such as SDG 6 Clean Water and Sanitation. This is of particular importance in areas difficult to access and for large, fast-growing cities with urban sprawl.
In this context, remote sensing data have been widely used to locally map slums and informal settlements, mainly based on building morphology (such as area, shape, height, orientation) and physical characteristics of the near surroundings (such as building patterns). Either approach requires reliable building footprints. Taking into consideration current trends in providing these, we may differentiate three types of sources: 1) footprints generated ad hoc via AI-supported information extraction from VHR satellite imagery; 2) crowd-based mapping (OSM and HOT(1), Missing Maps and similar); and 3) model-based or algorithm-based settlement products from (semi-)public or private organisations (World Settlement Footprint(2), Global Human Settlement Layer(3), Building Footprints(4), etc.).
In practice, multiple combined social and environmental factors may affect wellbeing and deprivation, such as the risk of natural disasters, exposure to diseases, environmental pollution or barriers to services. Taking such aspects explicitly into account requires a multi-source data environment, defining and extracting suitable EO-based indicators and linking them with other relevant socio-economic and environmental data. This requires techniques to integrate and assimilate data varying in scale and type of measurement, spatio-temporal resolution and extent, and to combine them meaningfully to set the results in a broader social context.
Realising data-informed development policies also requires accurate, location-specific results at a fine resolution. Aggregation over administrative boundaries, such as municipalities or existing urban zoning, can blur the actual distribution of the relevant information. Thus, we aspire to delineate homogeneous areas based on the underlying relevant indicators and the urban morphology, but independent of existing boundaries, to avoid aggregation problems. A prototype of a spatially explicit multi-indicator system uses multi-dimensional regionalisation techniques to minimize a priori spatial biases.
Finally, taking the geographic context and local as well as general expert knowledge into account is essential for the design of implementation measures. This can be integrated either at the beginning of the process, when suitable indicators are chosen to describe the local manifestation, or at the end of the process, through qualitative descriptive class analyses of the results. Given that well-defined and (semi-)standardised data sets are increasingly available globally, a data-fed expert system offers the advantage of general applicability and comparability and allows for the design of a general indicator framework for urban structures.
(1) Humanitarian OpenStreetMap Team (HOT) [https://www.hotosm.org/what-we-do]
(2) German Aerospace Centre, DLR [https://geoservice.dlr.de/web/maps/eoc:wsf]
(3) EC Joint Research Centre JRC [https://ec.europa.eu/jrc/en/scientific-tool/global-human-settlement-layer]
(4) Microsoft Building Footprints [https://www.microsoft.com/en-us/maps/building-footprints]
The vulnerability of urban environments, as related to their resilience and sustainability, is increasingly evident, and our cities are facing key challenges for the future quality of life of both the citizens and the ecosystem. Concrete solutions are needed to improve the environmental conditions of cities and, therefore, the quality of life of citizens. There is wide scientific evidence of the key role that trees and green infrastructures, ultimately the “Urban Green Spaces”, can play in making our cities resilient from a perspective of socio-ecological sustainability and towards better ecosystem services and quality of life for the urban community (FAO, 2016; SISEF, 2017).
Since its adoption in 2013, the European Green Infrastructure Strategy has attracted great interest across the EU Member States, stimulating the proposal, planning and implementation of a large number of GI projects at both the landscape and the local scale (EEA, 2011), with the latter involving above all urban and peri-urban areas (Maes et al., 2016; Manes et al., 2016).
Moreover, human quality of life, health and wellness greatly depend on how our cities will evolve in the near future. City administrators need to develop a new vision of how cities should evolve and to plan cities where forests and green areas are safe, healthy, adequately managed, diverse, functional and pleasing to people. New information tools are needed to enable better planning, monitoring and management of cities’ green assets.
Innovative technologies and methods are becoming available; among these, satellite Earth Observation can provide frequent and detailed information about cities on a continental and worldwide scale, facilitating the convergence of efforts from public administrators, architects and specialized tree operators and contributing to the establishment of more effective practices for proper urban forest management and its wider diffusion (Bottalico et al., 2017). Earth Observation can also provide innovative monitoring of air pollution and pollen diffusion, which directly impact human health and increase people’s susceptibility to virus infections (e.g. the current COVID-19 pandemic).
EO based information services can provide systematic collection and analysis of data for situation assessment, setting targets and defining green master plans as well as for evaluating outcomes and impacts at different stages of implementation. EO based services can be efficiently deployed at world-wide level and can provide a growing business opportunity.
The ESA MAFIS project was aimed at developing innovative procedures based on Artificial Intelligence and Earth Observation data for providing systematic monitoring of forests in natural environments as well as of forests and green areas in urban environments.
The MAFIS monitoring services use time series of multi-mission satellite data (multispectral, SAR, hyperspectral and VHR) and in-situ surveys in connection with other geo-spatial data from aerial orthophotos, LIDAR sensing, drone surveys and specialized in-situ measurements. All kinds of data are organized within a Data Cube architecture and are processed and integrated using various AI techniques. Data access, AI processing and result generation also exploit the ESA Network of Resources infrastructure, in particular the Euro Data Cube, the Forestry TEP and the Urban TEP. The urban and peri-urban forest and green area monitoring services demonstrated within the MAFIS project are dedicated to: i) the update of city tree inventories (including multispectral, hyperspectral and multi-temporal tree species classification, and tree structure estimation); ii) the estimation and spatial assessment of various ecosystem services (carbon sequestration, pollutant removal, thermal comfort, pollen risks, etc.) for green area planning purposes; iii) the monitoring of tree status and health for the identification and prioritization of maintenance actions.
The operational workflows for the monitoring of urban and peri-urban forests and green areas have been tested in several European cities, and relevant examples of the MAFIS services have been produced over Florence, Rome, Milan, Bergamo, Košice, Bonn, Vienna, Paris and Lisbon.
The space ecosystem is experiencing an exponential transformation of business models, with new space players, including start-ups and small and medium-sized enterprises (SMEs), entering space markets at an unprecedented rate. New applications and business models that would have been unthinkable only a few years ago are now emerging to provide breakthrough solutions to today’s environmental, social and economic challenges, benefitting the public and private sectors and society at large.
The Telecommunications and Integrated Applications (TIA) Directorate of the European Space Agency (ESA) offers well-established frameworks to capture such potential innovation and deliver socio-economic benefits to society at large through its Business Applications and Space Solutions XL (BASS) programme. The ESA TIA Directorate supports European industries in the key pre-commercialisation stage and on their route to the market. By providing technical and business support, zero-equity funding and access to its partners’ network, the programme not only supports European start-ups and SMEs in their innovation, but also collaborates with mid-caps and corporates to define, develop, verify and validate innovative and sustainable applications and services utilising space assets and fulfilling customers’ and end-users’ needs.
Close cooperation with users and potential stakeholders, service providers and other players is a key element of the BASS implementation. By leveraging international cooperation in key verticals, the programme aims to identify and bring to operational use innovative and wide-ranging applications addressing key industry sector challenges and delivering real, measurable benefit to society, business and the economy. BASS acts as a catalyst of new demand-pull opportunities developed with non-space actors in sectors such as the green and circular economy, energy, transport, and health.
Over recent years, BASS has launched dedicated initiatives leveraging international cooperation across sectors to address market needs and trends and to increase EU industry competitiveness internationally, by helping establish innovative space downstream solutions across a wider range of user communities and vertical markets.
BASS has a proven ability to attract a wider range of non-space companies into the end-to-end space value chain, stimulate demand and mobilise co-investment from non-space verticals, including private investment for companies’ growth as they scale up.
Vertical-sector sustainability challenges are a key element of the BASS strategy and are addressed through dedicated initiatives to establish space applications contributing to a net positive environmental impact, as well as to the UN Sustainable Development Goals (SDGs). Green Value and Sustainable Mobility (GVSM) is a thematic initiative that leverages activities and projects performed in ARTES.
One of the key objectives of TIA in support of the green transition and of the future ESA accelerator “Space for a green future” is to build partnerships with champions, early adopters and anchor users to foster development and utilization of upstream and downstream end-to-end connectivity solutions complemented by big data/AI and other space resources for delivering green impact.
The main partners are the players committed to sustainable green economy, including international agencies and relevant national government bodies / Ministries; industrial associations and bodies, vertical sector corporates (e.g., automotive, energy, environment); users / customers, NGOs, bodies and foundations engaged in the Green Deal; and private investors.
The paper will describe the Business Applications and Space Solutions XL approach to driving purposeful innovation through international cooperation, and its efforts to advance the growth and global competitiveness of the space downstream and new space industries of ESA Participating States, while also supporting the transition towards a sustainable, green and digital Europe.
The collaboration between Brittany and Wallonia, two regions active in remote sensing and in particular in Copernicus, has existed for several years. It has developed through several projects, such as their respective participation in NEREUS and in EOGEO events dedicated to skills development in Earth Observation and Copernicus user uptake, and their contributions to the Caroline Herschel Framework Partnership Agreement (FPA) between the Commission and the Copernicus Participating States. This FPCUP is an essential tool in the Commission’s user uptake strategy. Inspired by the NEREUS exhibition “Space Girls Space Women”, the two regions built a consortium, including EARSC, to promote the visibility of women in Copernicus user uptake: the Women in Copernicus project.
These two regions have been active in the role of Copernicus Relay in their respective thematic fields, and they especially focus their actions on public authorities (Figure). This is a strong similarity between the two regional experiences. Both have made extensive efforts to create working-group activities involving the public services. These regional working groups organise meetings with users, either with public authorities only, to better understand their needs, or with private companies, to establish bridges between the needs and the service offer. These two categories of meetings regularly provide the respective audiences with updated EO information. They also promote EO services to the authorities. Local authorities can only become interested in EO products through examples coming from their own region, in their own language. While EO data offer effective tools to address our challenging future, a change of mindset and of working habits is essential for regional and local authorities. According to I. D’Auria (2018) [1], it is public policy that usually pushes an entire industry to embrace a new business model. Collaboration with NEREUS, as well as within the CoRdiNet and CopHub.AC projects, demonstrated both the difficulties and the success stories of EO integration by public authorities. The experiences of Brittany and Wallonia illustrate and propose a method to involve these essential actors in the Copernicus system. Public services could adopt another way of integrating EO geodata in their decisions, and this requires a lot of communication.
For communication, the EO evangelist initiative launched by EARSC is an important and interesting idea. The Women in Copernicus project created links with other gender initiatives such as Geochicas. International collaboration and bridges between actions provide new insights for a stronger Copernicus ecosystem. The WIC project, initiated by these two regions with university partners, illustrated the human interest of each Copernicus member in having a strong link to the users and a clear feeling that one’s job contributes to societal challenges. Job satisfaction in Copernicus strengthens the personal sense of belonging to the system. Understanding and propagating these new values could bring another approach to Copernicus. EARSC could be an essential actor to promote this future vision of EO, with a close link to the users and more diversity, equality, inclusion and belonging (DEIB). The EO evangelist initiative, with its particular link to Geochicas, is a good way to spread the positive effect of this community feeling towards better common results for all.
[1] https://www.geospatialworld.net/article/can-satellites-link-happy-civil-servants-to-citizen-confidence/
The typical products of the EO industry are still (digital) maps. However, a map is a tool that can often only be interpreted by expert users. The EO industry is discussing ways to grow its user community and to make the information that can be extracted from EO available to ordinary people. We believe that AI-based voice and language technologies applied through smart assistants and chatbots can become important gateways to EO-derived information, and MESSAG-EO is our solution for that.
'Voice' services are steadily increasing. Analysts predict that voice applications will become the main interface with technology in the future, making the use of 'finger-controlled' smartphones obsolete. With MESSAG-EO, digital maps are translated and their interpretations are delivered as voice messages. Thus, user communities can be reached that do not need to be able to read and understand thematic maps. MESSAG-EO combines the latest geospatial analysis with Natural Language Processing (NLP). Whereas cloud-based analysis is the common approach for the exploration of big geospatial data, the design of complementary Voice User Interfaces (VUI) and of potential conversations between humans and the machine is more challenging.
For our prototypes we made use of Copernicus services (CAMS and CEMS - EFFIS) and integrated our own analytical results from Sentinel-2. The Copernicus services and the high revisit frequency of the Copernicus satellites are an important basis for MESSAG-EO because they allow up-to-date information to be obtained and generated. For the text-to-speech transformation, Amazon Alexa is used.
The voice- and language-based delivery of EO-derived information is applicable to many different domains: a farmer can get daily news about soil moisture levels and crop growth on his fields; a driver can receive live weather alerts along her route; and during city council presentations, live data about the evolving urban heat island and the related risk for vulnerable citizens can be shown.
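As an illustration of the map-to-message idea described above, the following is a minimal sketch that turns a map-derived soil-moisture statistic into a spoken-style sentence; the field name, moisture value and thresholds are hypothetical, and the actual MESSAG-EO NLP pipeline and text-to-speech integration are not reproduced here.

```python
# Minimal sketch of turning a map-derived statistic into a spoken-style message.
# Assumptions: the soil-moisture value and thresholds are illustrative; the returned
# sentence would then be handed to a text-to-speech service (e.g. a smart assistant).
def soil_moisture_message(field_name: str, moisture_pct: float) -> str:
    if moisture_pct < 15:
        status = "critically dry; irrigation is recommended"
    elif moisture_pct < 30:
        status = "drier than usual"
    else:
        status = "within the normal range"
    return (f"Good morning. Today the average topsoil moisture on {field_name} "
            f"is {moisture_pct:.0f} percent, which is {status}.")

print(soil_moisture_message("the north field", 22.5))
```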
Voice and language technologies can democratize the use of EO data and deliver its precious information to the people. We will showcase our latest developments related to voice-assisted geospatial analysis using smart assistants and chatbots.
The need to establish rules for the protection of the environment has long been a policy priority worldwide. Focusing on the European Union area, a common environmental policy for all members of the European Union was not adopted until 1972. Currently, European Union environmental law, and hence the corresponding Greek environmental law, constitutes a separate and quite complex body of law within EU legislation and the Greek Constitution.
Despite the volume of environmental acts and legislation, there are still considerable challenges to implementing legislation in a timely and effective way to combat environmental crimes. The global community has already recognized the gap between the hundreds of legal documents and actual compliance with the law, and is now focusing its efforts on developing new enforcement mechanisms by utilizing advanced scientific tools from related sciences, and more specifically remote sensing.
In this context, the research aims to investigate reliable remote-sensing-based methods for monitoring the environment and providing scientific evidence to support environmental law enforcement. Moreover, the research focuses on finding ready-to-access ways to disseminate the results of the analysis so that they can be fully exploited by criminal justice mechanisms.
The prime objective of the proposed methodology is to develop EO change-detection algorithms for monitoring mining activities in forested areas. In general, the project focuses on developing a “historical monitoring” tool for protected ecosystems in forested areas, which are subject to significant and often persistent environmental pressures. The proposed EO algorithms were developed using cloud computing techniques, and more specifically by utilizing the capabilities of the Google Earth Engine platform.
Within that framework, the high-resolution satellite imagery of the Sentinel missions is the most suitable data to utilize. Moreover, to complete the time series of satellite imagery prior to the Sentinel missions (i.e., prior to 2014), Landsat imagery is also used.
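For illustration, a minimal sketch of the kind of Sentinel-2 forest/non-forest change check that can be run with the Google Earth Engine Python API is given below; the area of interest, dates, cloud filter and NDVI threshold are hypothetical, and the project's operational EO algorithms are more elaborate than this example.

```python
# Minimal Google Earth Engine sketch of a forest/non-forest change check with Sentinel-2.
# Assumptions: the area of interest, dates and NDVI threshold are illustrative only.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([23.4, 40.4, 23.7, 40.6])   # hypothetical AOI near Cholomontas

def ndvi_median(start, end):
    col = (ee.ImageCollection("COPERNICUS/S2_SR")
           .filterBounds(aoi)
           .filterDate(start, end)
           .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))
    return col.median().normalizedDifference(["B8", "B4"])

forest_before = ndvi_median("2019-06-01", "2019-09-01").gt(0.5)
forest_after = ndvi_median("2021-06-01", "2021-09-01").gt(0.5)

# Pixels that were forested before and are not forested now (e.g. cleared for mining).
forest_loss = forest_before.And(forest_after.Not())
loss_area_ha = (forest_loss.multiply(ee.Image.pixelArea())
                .reduceRegion(ee.Reducer.sum(), aoi, 10)
                .getNumber("nd").divide(10000))
print("Estimated forest loss (ha):", loss_area_ha.getInfo())
```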
The proposed tool provides a step-by-step approach to detecting potential environmental violations in forested areas. In this context, the second main parameter of the platform is the codification of the existing environmental legislation. Namely, all existing national environmental laws, guidelines, policies, presidential decrees, etc. for forests and the corresponding protected ecosystems in forested areas are organized using web-GIS tools, where the end-user can acquire additional information on the requirements of each existing law and the corresponding alert values.
Moreover, the platform includes additional spatial and thematic data (i.e., the extent of protected sites, additional information on specific threats to forest areas, national and regional boundaries, and more), so that the user can select the area of interest to which the proposed methodology is to be applied.
Finally, to disseminate the results of the study and support policymakers, stakeholders, the judicial sector and other end-users in combating environmental crime, an easy-to-access approach was developed using an interactive web-GIS and dashboard application. Namely, the end-users can view the results of the analysis and acquire information on the existing environmental law. In addition, the application supports the export of maps and other information charts that can provide solid evidence in court and contribute to environmental impact assessment.
In conclusion, the main idea of the project (see figure 1) is to provide a monitoring tool where the end-user defines the area of interest, the time period and the type of environmental parameters to be monitored, and the system returns the results of each corresponding EO algorithm suitable for monitoring the defined environmental parameters, along with all the essential information needed to “make a case” against environmental crime.
The proposed methodology and the corresponding EO-algorithms were tested in different forested areas around Greece, and more specifically, on protected forest ecosystems and other fragile areas subject to environmental degradation.
In that context, the wider area of Cholomontas mountain in Central Macedonia, Greece, has been selected as a case study representing a typical area where the enforcement of the environmental legislation can be assessed using the above-mentioned monitoring tool.
Sprawling over an area of 6,700 acres, Cholomontas Mountain is one of the biggest protected Natura 2000 sites in Central Macedonia, Greece, and an area that is subject to potential risks due to the ever-growing tourist and mining activities that have altered the natural landscape. In addition, the area is also well known for its logging activities, which are a main economic sector for locals.
Mining activities in the wider area have boomed over the past 15 years. Nowadays, there are three active mining sites, located in the heart of Cholomontas Mountain, near the tourist area of Olympiada, Chalkidiki. These activities are subject to the corresponding regulations and, more specifically, to the Environmental Liability Directive. In addition, the mining sites are adjacent to five wildlife refuges (see Figure 2).
The EO algorithm was implemented for the historical monitoring of the area over the period 2000-2020. It detected all the alterations caused by the mining activities by monitoring the extent of the mining sites and thus the extent of forest/non-forest areas.
The results of the analysis were quite satisfactory, with only slight commission and omission errors. The overall accuracy of the “mine-extent” EO algorithm is 93.9% for the detection of mining areas, with a corresponding Kappa coefficient of 0.780. The results of the analysis were incorporated in the interactive web-GIS dashboard of the developed platform (see figure 3).
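For readers less familiar with these accuracy metrics, the following minimal sketch shows how the overall accuracy and the Kappa coefficient reported above are derived from a binary (mine / non-mine) confusion matrix; the counts used are illustrative and are not the project's actual validation samples.

```python
# Minimal sketch of deriving overall accuracy and the Kappa coefficient from a binary
# (mine / non-mine) confusion matrix. The counts are illustrative only and do not
# reproduce the project's actual validation data.
import numpy as np

confusion = np.array([[ 90,  10],    # reference mine:     correctly detected / omitted
                      [ 14, 286]])   # reference non-mine: falsely detected / correct

total = confusion.sum()
overall_accuracy = np.trace(confusion) / total

# Expected agreement by chance, computed from the row and column marginals.
expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / total**2
kappa = (overall_accuracy - expected) / (1 - expected)

print(f"Overall accuracy: {overall_accuracy:.1%}, Kappa: {kappa:.3f}")
```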
Maritime domain awareness is becoming ever more important, with, for example, global warming opening up previously closed seas such as the Arctic Northeast Passage. As a consequence, satellite-based ship detection has gained increasing attention. Using a Synthetic Aperture Radar (SAR) satellite, it is possible to assist search and rescue operations, assert domain sovereignty, search for illegal vessels and more. Recently, Deep Learning techniques employing different neural network architectures have become the standard methodology for dark ship detection. These networks are almost exclusively supervised, meaning a labelled data set has been developed and used for training. A serious problem then occurs when the labelled data set is not representative of the true data.
SAR images occasionally suffer from degradation due to interference from ground-based radars of similar frequency transmitting RF signals towards the satellite. This is called Radio Frequency Interference (RFI). In dark ship detection models, RFI-affected SAR images are in the worst case either discarded prior to inference or classified as not containing a ship, due to unrepresentative training data. On land, C-band RFI originates from, for example, weather radars, telecommunications, air-surveillance or anti-missile systems. On the ocean, it primarily originates from ships carrying C-band radars. In short, dark warships with active C-band radars might not be detected in an automatic framework utilizing C-band SAR sensors such as Radarsat, Sentinel-1 or ICEYE.
In this work we describe a detection strategy for RFI signals in C-band SAR images using both cross- and co-polarized channels, and discuss its application to both stationary land-based and moving ocean-based RFI signals. The detection method applies both a data-driven machine learning approach and the wavelet transform for RFI peak detection. We further discuss the differences in finding RFI in images acquired from vertically and horizontally transmitted pulses. We then argue that the RFI detections and the information they contain are necessary for attaining maritime sovereignty and for surveilling, for example, certain foreign navy ships. Furthermore, the methodology can be applied to other frequencies, thereby increasing the number of RF signals detected. This can in turn help to counter, for example, environmental crimes or crimes against humanity.
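As one illustrative building block of the approach described above, the following minimal sketch applies continuous-wavelet-transform peak detection to a one-dimensional intensity profile; the synthetic speckle-like profile, the injected artefacts and the wavelet widths are hypothetical and do not reproduce the actual detection chain.

```python
# Minimal sketch of wavelet-based peak detection on a 1-D SAR intensity profile.
# Assumptions: the synthetic clutter, the injected RFI-like artefacts and the range
# of expected peak widths are illustrative only.
import numpy as np
from scipy.signal import find_peaks_cwt

rng = np.random.default_rng(1)
x = np.arange(2000)
profile = rng.gamma(shape=1.0, scale=1.0, size=x.size)        # speckle-like clutter
for centre in (400, 1250):                                    # two hypothetical RFI artefacts
    profile += 8.0 * np.exp(-0.5 * ((x - centre) / 15.0) ** 2)

# Continuous-wavelet-transform peak detection across a range of expected peak widths.
peak_indices = find_peaks_cwt(profile, widths=np.arange(5, 40))
print("Candidate RFI positions (samples):", peak_indices)
```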
The RFI detection approaches currently used, mainly for land-based RFI detection from VV-polarized transmitted pulses, are only partly applicable. For C-band SAR, such as the Sentinel-1 mission, emphasis has been on detecting RFI predominantly in VV polarization in order to mitigate the RFI signal for further processing. Only rarely has the information in the RFI signal itself been used. In our work, emphasis has been on utilizing and analyzing the RFI signals to increase our knowledge and thus improve maritime security.
With the increased frequency of shipping activities, such as tourism and freight transport, navigation safety has become a major concern. Even if new technologies already supply aids to pilots for navigation risk reduction, the International Maritime Organisation (IMO) reports that the majority of accidents could have been avoided by providing suitable input to the navigation decision-making process. This is where Earth Observation data can provide complementary information to improve traffic monitoring and guidance along safe routes.
NARAS is an ESA project carried out by S.A.T.E. Srl (as prime contractor), MARIN (the Maritime Research Institute Netherlands), and Planetek Italia Srl, aiming at the improvement of safety in critical maritime operations.
This is attained through the extraction of Preferred Routes for ships (described by environment-dependent trajectories with space-time waypoint tolerances), using Big Data techniques on large sets of Automatic Identification System (AIS, ship positions obtained from GNSS—Global Navigation Satellite System—receivers) and Vessel Traffic Service (VTS, marine traffic monitoring systems established by harbour or port authorities) data.
In an autonomous shipping scenario, vessels navigate following prescribed routes, adaptively changed based on risks and environmental conditions. GNSS and its augmentation systems will represent the key enabling technology to attain safety of navigation, especially considering systems providing accuracy and integrity information.
However, they may not be sufficient to assure such safety when ships are not using good quality GNSS receivers, or when they switch off positioning systems. Also, “non-collaborative” objects (ships/objects in the sea not transmitting AIS), such as natural and artificial debris, may represent possible hazards.
Therefore, Earth Observation (EO) data can be used to detect sailing vessels not transmitting AIS and to compare the AIS position with the position detected in optical and SAR images. The combined use of such EO data will increase the available information and could provide support to vessel detection (shape, dimension and route).
NARAS aims at expanding the Preferred Route concept by exploiting the combination of EO and GNSS to improve navigation risk modelling and provide near real-time updates on preferred routes for navigating ships.
These targets are going to be achieved by combining AIS/VTS data with Synthetic Aperture Radar (SAR) and optical images, when the latter are available, to improve the detection of unreported features in the maritime AIS system and the statistical evaluation of the accuracy of current position data available through the AIS system, or their alignment with EO extracted positions.
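A minimal sketch of one such comparison step is given below: SAR-detected ship positions are matched to the nearest AIS report, and detections with no AIS counterpart within a tolerance are flagged as non-collaborative candidates; the coordinates and the 1 km threshold are hypothetical.

```python
# Minimal sketch of matching SAR-detected ship positions against AIS reports and
# flagging detections with no AIS counterpart ("non-collaborative" candidates).
# Positions and the 1 km matching tolerance are illustrative only.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

ais_reports = [("MMSI_1", 45.32, 12.41), ("MMSI_2", 45.40, 12.55)]   # hypothetical AIS fixes
sar_detections = [(45.321, 12.409), (45.10, 12.80)]                  # hypothetical detections

for lat, lon in sar_detections:
    dists = [(haversine_km(lat, lon, a_lat, a_lon), mmsi) for mmsi, a_lat, a_lon in ais_reports]
    d_km, mmsi = min(dists)
    if d_km <= 1.0:
        print(f"Detection ({lat}, {lon}) matches {mmsi} at {d_km:.2f} km")
    else:
        print(f"Detection ({lat}, {lon}) has no AIS match (nearest {d_km:.1f} km) -> flag")
```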
The usage of the combined EO/GNSS information can provide fundamental support either for the shore assisted navigation or, in the future, for ships’ autonomous navigation.
Employing the potentialities resulting from the combined exploitation of EO and AIS data, NARAS has the purpose to:
• Extract preferred routes that can be suggested to ships in a certain area, based on the ship’s characteristics and on the environmental conditions;
• Observe objects in the sea not transmitting AIS (i.e. non-collaborative objects that are not transmitting an AIS signal, such as wind/solar offshore farms or containers);
• Monitor areas in the sea (e.g. to monitor an area or objects lost at sea after an accident);
• Verify quality of the AIS (e.g. to verify the quality or the precision of the AIS signal or to check if AIS base stations are working correctly);
• Predict trajectories of floating objects (as the drift of an object, e.g. a container, lost at sea depends on its drift properties - this might be more challenging as it requires high-resolution satellite images to estimate different characteristics of the vessel);
• Adjust in real-time the preferred routes recommended to ships, based on the position and trajectory of non-collaborative objects.
The Earth Observation dataset collected within the frame of the NARAS activities consists of 47 EO SAR products, acquired over three selected Areas of Interest:
• Venice, North Adriatic Sea (Italy),
• Rotterdam, North Sea (the Netherlands),
• Wadden Islands, North Sea (the Netherlands),
using three different SAR missions:
• COSMO-SkyMed
• ICEYE
• TerraSAR-X
NARAS EO products were collected from commercial European data providers in the frame of ESA’s Earthnet Programme, which made selected datasets available to users subject to ESA’s Cat.1 policy for ESA Third Party Missions (TPM).
Additionally, data from the Cosmo-SkyMed mission over the Italian territory have been provided by the Italian Space Agency, in the framework of the MAP ITALY project.
In this contribution, the results from the analysis performed in the framework of the ESA NARAS project are presented, demonstrating the improvements in traffic-monitoring security that derive from adjusting the preferred routes assigned to ships in an area, based on the evolution of the traffic conditions and on the detection of non-AIS objects in the same area by means of EO data.
The world is still a hotbed of hostility and violence in many places. Due to a lack of reporting and the absence of independent institutions in the past, crimes often came to light only years later. In recent years, crimes are not only reported independently via online and social media; the possibility of verifying these reports in an unbiased way, using the abundance of available imagery data and automated processing algorithms, has also increased rapidly.
Digital globalization, with its possibility of almost unlimited information access and dissemination, simultaneously creates uncertainty about reliability. This is particularly the case for sensitive issues, where events may be reported contradictorily depending on the source of information and its interests; it is then up to the reader to interpret them. An objective verification, evaluation and weighting of information seems inevitable before introducing it into analysis or decision-making processes. This becomes even more important in remote areas with reduced accessibility (e.g., due to conflicts), where the possibility of determining the validity of information spread by different sources (ranging from eyewitnesses and local media up to international NGOs and media) is limited.
Here, Earth Observation (EO) using satellite image data provides an appropriate and objective instrument for information retrieval and verification in the security and fragility context. The combination with online news and access to social media platforms lifts the EO potential to the next level of information retrieval, as analysis processes are streamlined and EO data can be used to reveal fake information or fake news.
Through the implementation of different customer projects, we have developed an Earth Observation and Media Analysis Service (EOMAS) capability to streamline the process of information retrieval and to augment information from online and social media, combining both information sources.
This service focuses on event localization and information retrieval from online news and social media, followed by Earth Observation analysis for verification purposes. The first step is realized through a context-driven content and location search engine based on a vast database of online news articles. Keywords related to an event or category (e.g., fire, assault, kidnapping) and locations (e.g., countries or regions) where an event is reported or assumed are entered in a web-based dashboard user interface. The search result provides potentially relevant articles with additional information that allows an analyst to decide on their relevance. A summary of the retrieved article information is provided, as well as the geo-localization of locations mentioned in the articles, whereby both features are extracted by applying Natural Language Processing (NLP) methods. Comprehensive information filters allow the amount of less relevant articles to be reduced, while map visualization, export options for locations and the automatic provision of available HR satellite imagery (e.g. Sentinel, Landsat) accompany the process and support the workflow.
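For illustration, the following minimal sketch shows the kind of NLP step described above, extracting place names from an article with named-entity recognition and geocoding them; the use of spaCy and geopy/Nominatim, the model name and the sample text are assumptions, and the operational EOMAS pipeline differs.

```python
# Minimal sketch of extracting and geo-locating place names from a news article.
# Assumptions: the spaCy model, the sample text and the use of geopy/Nominatim are
# illustrative choices and do not reproduce the operational EOMAS components.
import spacy
from geopy.geocoders import Nominatim

nlp = spacy.load("en_core_web_sm")
geocoder = Nominatim(user_agent="eomas_sketch")

article = ("Residents reported that several villages near Maiduguri, Borno State, "
           "were set on fire during the night.")

doc = nlp(article)
places = {ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")}
for place in places:
    location = geocoder.geocode(place)
    if location is not None:
        print(f"{place}: {location.latitude:.4f}, {location.longitude:.4f}")
```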
After narrowing down the region of interest to potential locations of interest and gaining a first overview of the thematic context of the articles, a holistic approach combining information extraction and Earth Observation methods is used for the analysis part of the service.
We examined the opportunities of GEOINT techniques for the verification of OSINT/SOSINT data like Online News or Twitter. Thereby, we were able to demonstrate the strength of Earth Observation data as an independent source of information and an instrument to verify reports or to reveal fake news. While complex interrelations are commonly investigated by analysts, legacy remote sensing and state-of-the-art AI approaches are used for automated information extraction (of topographic and thematic nature) and the analysis of temporal changes based on EO image data.
To address various use cases in the context of law enforcement and security, we have built up a collection of legacy remote sensing techniques and state-of-the-art Deep Learning approaches. Furthermore, we provide first insights into state-of-the-art Deep Learning based supervised change detection methods that tackle the challenge of a limited amount of training data.
The Historical PowerTrack Enterprise API from Twitter provides an additional extensive information source and can be analysed with social network analysis and NLP methods (e.g., sentiment analysis). To be able to deal with the large amount of data, a high-performance NLP processing hardware and software stack has been implemented.
Two examples showcase the successful service implementation. In the first example, online news was scanned for violent actions in the north-east of Nigeria, where various groups have been marauding the region and setting villages and settlements on fire. Reports based on eyewitnesses existed, but the real spatial extent of affected villages was unclear. By analyzing the online news and possible locations of events, followed by satellite image analyses, the reports could be confirmed and further affected villages in the surroundings were revealed.
The second example focuses on the integration of Twitter data into the analysis process. It shows the riots in the wake of the murder of George Floyd in Minneapolis, USA. We have shown that the combination and verification of Twitter data with satellite images gave a more comprehensive understanding of the event, while satellite data served as an independent source of verification.
Part of the development regarding online news presented here received funding from the European Space Agency (ESA) within the EO Science for Society (EOEP5 Block 4) – Expanding Demand for EO Derived Information, EOLAW project led by GMV Skysoft. Within the project, IABG was in charge of the services developed in the Crimes Against Humanity thematic domain.
The pipeline for the Twitter data analysis and EO based information verification was developed by IABG within the SPARTA project led by UniBw München.
With funding from the European Space Agency’s Earth Observation programme, Earth-i has been exploring the use of very high resolution (VHR) satellite videos and “fast revisit” intra-day satellite imaging for civilian security applications, working with the European Union Satellite Centre (EU SatCen) as the end user. Results have been extremely promising and Earth-i is now actively working to commercialise the capability. In this presentation Earth-i will present the key findings from the project, with examples of the fast-revisit images and videos acquired, the analytics applied, and the products generated for civilian security end users to exploit.
Rapid revisit VHR imaging from satellites allows security services to better assess and understand the activity taking place in an area of interest. With a single image, it is possible to detect features and gain an understanding of the situation, but often difficult to infer what activity might be taking place. With a time-series of images taken in quick succession, it is possible to gain a deeper understanding of activity and interactions taking place. It also maximises the possibility of having cloud-free images over the area of interest, not only because clouds may move during the time between the different acquisitions, but also because the different images may be acquired from different viewing angles.
Very high resolution (VHR) satellite videos have similar advantages to the rapid-revisit “still” imagery, but video enables additional capabilities. For example, videos enable the speed and direction of moving objects to be more accurately ascertained. The different acquisition angles in the video can also help to reveal obscured elements that might not have been visible in a single still image, and enable the generation of high-resolution 3D models.
With the application of machine learning techniques, it is possible to set up automated processing chains to detect specific features in the rapid-revisit images and videos, and to detect movement – either in a single video, or between successive still images. Such automated detection helps to provide an additional level of information to the civilian security services, for example knowing that a vessel has entered or left a port, or that a vehicle has arrived at or departed from its parking bay.
In this project, Earth-i made use of high resolution videos and intra-day VHR imagery from Planet’s SkySat constellation, as well as Pleiades VHR data from Airbus and the high resolution SAR data from ICEYE. Earth-i implemented AI algorithms for automated detection of vehicles, ships, aircraft and other objects including temporary structures such as tents, in both the still and video EO data. Video data was further used to demonstrate the construction of 3D terrain models, and to explore the possibility of detecting and tracking moving objects.
By the end of the project, a very promising set of results had been demonstrated in terms of showing the value of video and rapid revisit Earth Observation imaging from satellites for civilian security applications. Further work was needed in terms of operationalising the service and making the outputs ready for operational use by the SatCen. Further evolutions were also identified to improve the reliability and consistency of the output results. Following conclusion of the project, Earth-i has continued to invest in the development of the demonstrated capabilities, as well as in promoting the capabilities and securing new business based on showcasing the work conducted.
Earth-i will also showcase the interactive, online, cloud-based portal that it has developed for the delivery of such data and products to end users, and will highlight some of the exciting new contracts it is pursuing and has secured in markets outside Europe to exploit the demonstrated technologies.
The European Marine Strategy Framework Directive (MSFD) defines in Article 3 Good Environmental Status (GES) as: “The environmental status of marine waters where these provide ecologically diverse and dynamic oceans and seas which are clean, healthy and productive within their intrinsic conditions, and the use of the marine environment is at a level that is sustainable, thus safeguarding the potential for uses and activities by current and future generations” [1].
To help Member States interpret what GES means in practice, MSFD sets out eleven qualitative descriptors which describe what the environment will look like when GES has been achieved. Descriptor 5 implies that eutrophication is minimized, especially adverse effects thereof, such as losses in biodiversity, ecosystem degradation, harmful algae blooms and oxygen deficiency in bottom waters. Most eutrophication assessment methods recognize that the immediate biological response is increased primary production, reflected as increased chlorophyll a (chl-a) and/or macroalgal abundance. Therefore, Chl-a concentrations (biomass) in micrograms per litre (μg/l), in the water column, may be used as a primary criterion, with respect to descriptor 5 [2].
The GES service is currently being developed by Deimos under the framework of the Atlantic Cities: Smart, Sustainable and Secure Ports and Protecting the Ocean project (ARIA3 [3]), which aims to develop and deliver to the end-user community a number of customised EO-based information services based on three specific areas: Climate Resilience; Atlantic Cities and Ports; and Protecting the Ocean. As the project is developed within the European Space Agency (ESA) Atlantic Regional Initiative, one of its main goals is to increase the uptake of Earth Observation (EO) applications and services within stakeholder communities that represent the different Atlantic regions.
The GES service provides a single access point to a long time series of key parameters: Chl-a, primary productivity and Sea Surface Temperature (SST), customizable according to user needs to comply with the monitoring requirements of the MSFD. It performs a statistical analysis of the time series, as retrieved from the Copernicus Marine Environment Monitoring Service (CMEMS), for an Area of Interest (AOI) defined by the user, generating maps of monthly and annual means for a reference period that represents the typical conditions in that area (e.g. 2000-2010). The user is able to define which years are included in that reference period. After that, and over the monitoring time interval defined by the user, the service provides daily statistical indicators (mean and 90th percentile), as well as an anomaly with respect to the calculated long-term indicators. If the anomalies cross pre-defined thresholds, an alarm function is triggered and an alert email is sent to the user.
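A minimal sketch of this statistical step is given below, assuming a daily chlorophyll-a series handled with pandas; the synthetic series, the reference years and the alert threshold are hypothetical, and the operational CMEMS-based service chain is not reproduced.

```python
# Minimal sketch of the statistical step described above: a monthly reference
# climatology from user-defined reference years, daily mean indicators, anomalies
# and a threshold-based alert. The synthetic chl-a series, the reference period and
# the threshold are illustrative only.
import numpy as np
import pandas as pd

dates = pd.date_range("2000-01-01", "2021-12-31", freq="D")
rng = np.random.default_rng(2)
chl = pd.Series(2 + np.sin(2 * np.pi * dates.dayofyear / 365) + rng.normal(0, 0.3, dates.size),
                index=dates, name="chl_a")                       # hypothetical µg/l values

reference = chl["2000":"2010"]                                   # user-defined reference years
monthly_clim = reference.groupby(reference.index.month).mean()   # monthly reference means

monitoring = chl["2021"]
daily_mean = monitoring.resample("D").mean()
daily_p90 = monitoring.resample("D").quantile(0.9)
anomaly = daily_mean - monthly_clim.reindex(daily_mean.index.month).to_numpy()

ALERT_THRESHOLD = 1.0                                            # µg/l above reference
for day, value in anomaly[anomaly > ALERT_THRESHOLD].items():
    print(f"ALERT {day.date()}: chl-a anomaly {value:.2f} µg/l exceeds threshold")
```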
The GES service chain is anchored in the capabilities provided by services4EO, the Exploitation Platform developed by Deimos, which offers a comprehensive solution for the quick generation and deployment of EO-based applications. The CMEMS collections identified as input for GES are ingested (archived and catalogued) in the reference platform. The service is made accessible to the user through a Service Dashboard in a web portal, where the user can select the AOI and time interval and visualize the results generated by the service.
A demonstration exercise of the GES service is planned to be executed within ARIA3 with the Agricultural Research and Rural Extension Company of Santa Catarina – EPAGRI (Empresa de Pesquisa Agropecuária e Extensão Rural de Santa Catarina), a public company linked to the Government of the State of Santa Catarina, Brazil, responsible for research and extension work in agriculture, aquaculture and fisheries. The pilot objective is to provide data to support a monitoring program covering more than 500 aquaculture areas distributed over more than 100 km of semi-enclosed coastal environments. By providing both historical and near-real-time information regarding water quality and combining it with EPAGRI’s extensive record on the presence of Harmful Algal Blooms (HAB), this pilot application shall increase the knowledge and facilitate the management of HAB-related risks in the area.
Bibliography
[1] European Commission, 2008. Directive 2008/56/EC of the European Parliament and of the Council of 17 June 2008 establishing a framework for Community actions in the field of marine environmental policy (Marine Strategy Framework Directive). Official Journal of the European Communities L164/19 25.06.2008.
[2] European Commission, 2017. Decision 2017/848 of the European Commission of 17 May 2017 establishing criteria and methodological standards for evaluation of good environmental status. Official Journal of the European Communities L125/43 18.05.2017.
[3] Wyniawskyj, N. S., Ribeiro, P., Ferretti, S., Petit, D., Grosso, N., Podder, P., & Aparicio, S. (2021, July). Supporting Atlantic Cities and Ports Through Earth Observation. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 7553-7556). IEEE.
It is estimated that 80% of all floating debris in the ocean is marine plastic. These plastics are eventually ingested by marine species or act as traps, causing severe injuries and death to marine life. Marine plastic pollution is a substantial threat to wildlife and human health.
The ability to detect floating plastic debris shortly after it enters the ocean is a mitigating factor, as it avoids the degradation of macroplastics into smaller pieces (i.e., microplastics) and the consequent threat to the food chain. Leveraging the breadth of experience of the "distributed AI system for marine plastic debris monitoring" (SMART) consortium in deep learning, at its first stage the SMART proposal aims at identifying different types of floating debris from satellite data. However, the project is not limited to the already challenging task of identifying floating plastic debris from satellite images. SMART is an intelligent framework based on physics-informed learning, which combines automatic identification and classification of floating plastic debris from satellite images, spatiotemporal modelling of plastic accumulations with high-resolution numerical ocean modelling, and physics-guided machine learning. A distributed system of sensors mounted on low-cost marine autonomous vehicles will be deployed for long-term validation of the model results. This unique combination will allow us to bypass the need to run full ocean numerical models on small-scale simulation grids, which brings numerical instabilities and uncertainties to the spatiotemporal predictions. Instead, the outcome for the end-user will be a probability-of-plastic-occurrence map at any time step required, in the past and the future. This map will allow authorities to devise strategies for ocean clean-up while making decisions under uncertainty.
The project focuses on two pilot sites in the North Atlantic, and the results obtained will be validated in situ using low-cost marine autonomous vehicles, which will collect samples at key sensitive regions predicted by the model.
Over recent years, there has been an increasing need to better quantify the exposure of cities, ports, people and infrastructures to extreme events, such as those associated with sea level rise and storm surges/floods. This is especially true for locations that are strategic, such as ports, and also geographically isolated, such as the Azores, which reduces their resilience and increases their exposure. Towards this effort, here we report the first results of a framework aiming at quantifying the economic impacts of historical storms and sea level rise applied to the Port of Ponta Delgada, Azores, Portugal. The Azores is an archipelago composed of nine volcanic islands in the North Atlantic Ocean, about 1,400 km (870 mi) west of Lisbon. Ponta Delgada is the largest municipality and economic capital of the Autonomous Region of the Azores. It is located on Sao Miguel Island, the largest and most populous island in the archipelago, and is the principal port of entry for goods and people arriving in the Azores. Here, we present the results of a framework that, building on EO data (e.g., Copernicus products, Sentinel, etc.), in-situ buoy measurements, GIS products and data from the Ports of the Azores related to marine traffic and infrastructure, provides estimates of the impact of an extreme event on port activities and on the potentially exposed infrastructures (e.g., port areas, containers). The method relies on the use of random forests to assess the potential impact of the selected events on port activities and uses an ad-hoc developed GIS dataset to estimate the exposure of infrastructures. During the presentation, we will highlight the current results and the possibility of expanding them towards a near-real-time service for other areas. We will also discuss the major limitations encountered, the importance of a strong collaboration with stakeholders and the potential next steps to overcome such limitations.
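For illustration, the following is a minimal sketch of the random-forest step mentioned above, predicting a hypothetical port-impact indicator (hours of disrupted operations) from storm and sea-state features; the feature set, synthetic data and target are illustrative only.

```python
# Minimal sketch of a random-forest impact model: predicting a port-impact indicator
# (e.g. hours of disrupted operations) from storm and sea-state features.
# The feature names, synthetic data and target are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([
    rng.gamma(2.0, 2.0, n),          # significant wave height (m), hypothetical
    rng.normal(15, 6, n),            # wind speed (m/s), hypothetical
    rng.normal(0.5, 0.3, n),         # storm-surge level (m), hypothetical
])
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 6.0 * X[:, 2] + rng.normal(0, 2, n)   # disruption hours

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
print("R^2 on held-out events:", round(model.score(X_test, y_test), 2))
print("Feature importances (Hs, wind, surge):", model.feature_importances_.round(2))
```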
Space agencies such as the European Space Agency possess a deeply embedded, long-term perspective that can help address global grand challenges and achieve sustainable development. However, this perspective is under-utilised. For example, data arising from long-term investment by ESA can be used to support the suite of new EU-based marine strategies, plans and management regulations, which have opened significant opportunities for Earth Observation (EO) in the Blue Economy arena. The challenge for the space sector, including governance and space agencies, is how best to support and enhance knowledge transfer, innovation and commercial opportunities in such a fast-paced and rapidly expanding arena.
Technological Innovation Clusters are vehicles to rapidly identify and address potential bottlenecks in the innovation ecosystem, and realise EO-maritime solutions. They can form a bridge, which links long-term sustainability vision and strategy, with short-term commerce and opportunity. The ESA-funded "Blue Economy: Innovation Clusters, Atlantic Natural Resources Management and Maritime Spatial Planning" project is building on its suite of three demonstration activities, to chart a way forward for knowledge transfer from EO into a sustainable blue economy. The demonstrations are examples of the far greater potential that EO-based services provide. With this in mind, the consortium is working with a wide range of governance, private industry, education, and Atlantic observation stakeholders to develop a suite of recommendations. These would enable ESA to strategically target future animation efforts, marrying short term opportunities with long-term vision.
Structured discussions and stakeholder data analysis have identified exciting innovation avenues, cutting-edge knowledge transfer opportunities, and a series of restraints on innovation and commercial development which targeted investment and coordination with Innovation Clusters could overcome. These perspectives will be delivered as a roadmap to ESA in the last quarter of 2022. This will provide guidance and recommendations on how agencies such as ESA can strategically harness the potential of Innovation Clusters to maximise EO-derived data opportunities and uptake by the Atlantic maritime sector.
This presentation gives a brief outline of the approach used in the Blue Economy project with regard to the Innovation Clusters road-mapping, and aspects which could be integrated as good practice in other initiatives going forward. It then explores the range of draft recommendations produced by the 1st stage of stakeholder engagement, presenting them under four themes: (i) opportunities for EO sector innovation, (ii) opportunities for inter-disciplinary knowledge transfer, (iii) opportunities for inter-sectoral synergies, and (iv) identified barriers to unlock our maritime EO potential.
Plastic pollution is a global threat and affects ecosystem services and tourism. The UN has raised awareness of the problem and asks for global monitoring through SDG indicator 14.1.1b (plastic debris density). Two levels of information are necessary: a global dataset for Earth observations and modelling, and local and national data for more detailed reporting through the regional seas programme. Along the same lines, the EU has proposed the criterion D10C1 through the Marine Strategy Framework Directive (MSFD), which examines the quantities and properties of marine litter in the coastal environment.
The traditional reporting protocol is organized through individual transects on the beach, recording the presence of litter. This is a widely used method, adopted by several regional sea conventions, and provides an overview of plastic pollution on the coastline. However, manually transecting, collecting and discriminating plastics requires extensive fieldwork and is limited in scale. By contrast, state-of-the-art technologies such as drone imagery and artificial intelligence can locate the plastics and create litter density maps of the coastline. In addition, citizens can help to enlarge the observations through a dedicated drone acquisition protocol, with a web platform used for data upload and automatic processing.
The current technology can detect plastics, discriminate them into seven basic categories, position them in a geographic grid and automatically create litter density maps. The technology has been developed and tested for Mediterranean beaches. With the current work, we aim to test the developed technology for the Atlantic region, specifically on Portuguese coastal areas. Several reports indicate the problem of marine litter on Portuguese beaches and in ports, where the maritime economic sector and its associated services are very relevant to the national economy. Local measurements indicate large numbers of plastics in the coastal area, requiring cleaning efforts to be guided towards areas with a significant amount of litter.
The method results in marine litter accumulation maps of the coastal area, using drone imagery and deep learning algorithms. The aerial images are collected through a dedicated protocol that allows non-experienced citizens to acquire drone imagery with commercial drones. Datasets are uploaded to a web platform where all the preprocessing (quality check, georeferencing, usefulness assessment) and analysis (deep learning algorithm) take place. Marine litter is classified into seven categories, and marine litter density maps are created. The density maps are automatically reported to a spatial data infrastructure, ideal for time series analysis.
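A minimal sketch of the final aggregation step, turning geolocated litter detections into a gridded density map, is given below; the detection coordinates, beach extent and cell size are hypothetical.

```python
# Minimal sketch of turning geolocated litter detections into a litter-density grid.
# Assumptions: detections are given in local metric coordinates along a 200 m x 40 m
# beach, and the 5 m cell size is illustrative only.
import numpy as np

rng = np.random.default_rng(4)
detections_x = rng.uniform(0, 200, 350)                 # hypothetical detection positions (m)
detections_y = rng.uniform(0, 40, 350)

cell = 5.0                                              # grid cell size in metres
x_edges = np.arange(0, 200 + cell, cell)
y_edges = np.arange(0, 40 + cell, cell)

counts, _, _ = np.histogram2d(detections_x, detections_y, bins=[x_edges, y_edges])
density = counts / (cell * cell)                        # items per square metre
print("Max litter density (items/m^2):", density.max())
```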
This session aims at showcasing the ongoing work of the Atlantic Regional Initiative project office on stakeholder engagement, promotion and communication actions linked to the ESA developments and follow-on workshops. Along with the present work, an overview of the different ways of getting involved with the Atlantic Regional Initiative will be presented, together with the Atlantic Data Handbook, an interactive e-book aiming to give an overview of the Regional Initiative, providing insights into workflows and procedures as well as a first introduction to data, tools and applications in the Atlantic context for new users.
The Atlantic Ocean borders ten ESA Member States (Portugal, Spain, France, Ireland, the United Kingdom, Belgium, the Netherlands, Germany, Denmark and Norway) and one ESA Cooperating State (Canada). It is therefore a key interest for ESA to focus its space applications portfolio on the Atlantic region, including established privileged European relations with Brazil and South Africa, and potentially strengthening partnerships with the U.S.A. and other Atlantic countries (e.g. Iceland, Morocco, Nigeria, Caribbean States, Mexico, Argentina). It is also to be highlighted that challenges faced by the Atlantic Ocean indirectly impact the other European seas (Arctic, Baltic, Mediterranean and Black Sea) and the Member States bordering them.
The Atlantic Regional Initiative is therefore of global nature per se and it requires proper engagement of the relevant stakeholders and adequate strategies to address the mid to long term developments of various economic activities. There are several active fora in place to support regional approaches for managing Atlantic related activities. These fora work to address the various challenges within the regions, including sustainable management of the environment (e.g. water quality, river sediment, marine pollution, toxic waste, land degradation), urban development, economic development, resource management and cultural heritage.
The need to embed and exploit satellite EO within these regional-level activities is a priority that requires a specific focus on customised processing of the EO data, fusion of diverse datasets, modelling capabilities, outreach of EO capabilities to regional actors, etc.
It is also recognized that recent ICT developments are enabling a step-change in the generation of EO-based information services and the fusion of EO derived information with other datasets, using various models and analytic tools. At present, the Copernicus Data and Information Access Systems (DIAS) and the ESA-funded Thematic Exploitation Platforms (TEPs) are supporting wider use of this technology, encouraging also private actors to make extensive use of them in the frame of the Atlantic Regional Initiative, in addition to other commercial and institutional/academic innovative capabilities.
This session will showcase the ESA Atlantic Regional Initiative roadmap and main activities.
Satellite-based Earth Observation (EO) has the potential to provide unique data to support the design and operation of offshore wind farms. While wind resource data is fundamental to assess the energy production potential of an offshore wind farm, other environmental conditions (rain, waves and currents) are also fundamental parameters to consider during the farm design and installation and maintenance operations planning. Aimed at demonstrating the potential applications of EO to support offshore wind projects and increase their uptake by the sector, the Atlantic Regional Initiative Applications - Topic 2 - Offshore Wind Energy (ARIA2) is an ambitious project funded by the European Space Agency (ESA).
With activities initiated in September 2020, the ARIA2 consortium is led by Deimos in collaboration with DTU Wind Energy, WAVEC Offshore Renewables, the EDP Centre for New Energy Technologies (EDP CNET), the Portuguese Sea and Atmosphere Institute (IPMA), and the Atlantic International Research Centre (AIR Centre). Its main technical objectives are the development, integration and delivery to the end-user community of a set of EO-based services to support decision-making processes by the wind energy sector in the design and operations planning of offshore infrastructures in the Atlantic Region. A comprehensive user impact assessment will follow a set of demonstration exercises to evaluate how the information brought by these services can be integrated into end users' current activities.
ARIA2 services cover the following elements: a) Climate and weather-related information to support wind farm design and operations; b) Assessment of wind resource for energy production; c) Assessment and minimization of Wind Turbine Wake Effect; d) Assessment of Rain Erosion of Wind Turbine Blades. The services are integrated by leveraging the capabilities of services4EO, the Exploitation Platform developed by Deimos, which offers a comprehensive solution for the quick generation and deployment of EO-based applications.
The designed services are mostly based on the fusion of large volumes of EO data from the Sentinel missions and other European EO missions as well as other Copernicus and ECMWF datasets and encompass two different levels:
- Data services, generating long-term EO-based met-ocean data series (wind, precipitation, waves and currents) and short-term weather forecasts (wind and waves) that will be the basis for all implemented downstream services.
- Downstream Service chains that use the data services to analyze operations and maintenance scenarios using industry-standard indicators: Wind Resource Assessment; Wake Effect Assessment; Rain Erosion Assessment; Long Term Operations Windows; Short Term Operations Risk.
The first stage of the project was the collection and consolidation of technical requirements in a co-design process with a wide range of key stakeholders, covering different points in the value chain of the Atlantic Region wind energy sector, namely: a) EDP Renewables, a global leader in the renewable energy sector and the world’s fourth-largest wind energy producer, currently present in 14 markets; b) Ocean Winds, the result of a 2019 joint venture by EDP Renewables (EDPR) and ENGIE; c) London Offshore Consultants Ltd. (LOC Group), a premier international marine and engineering consulting firm consisting of LOC, Longitude, Innosea and JLA (John LeBourhis) operating in the shipping, oil & gas and renewables sectors; d) CATHIE Associates, also a consulting firm that delivers geological, geospatial, geophysical and geotechnical engineering solutions for a wide range of offshore and nearshore industries, including wind, oil and gas, marine energy, ports and subsea cables, and e) Plataforma Oceánica de Canarias (PLOCAN), a joint initiative between the Spanish and the Canary Islands governments, with the support of the European Regional Development Fund, with an extensive experience in the fields of Ocean and Wind Energy research and development activities.
A first round of co-design workshops with offshore wind energy stakeholders – EDP and Ocean Winds, through its Moray East and WindFloat operations teams, CATHIE Associates, LOC Group and PLOCAN – provided a total of 66 formal user requirements describing a wide range of high-level needs, which the project team used to drive the services development. Key requirements coming from stakeholders included: a) the capability to express wind and wake effect assessment outputs using standard industry indicators such as Annual Energy Production (AEP), Operational Expenditure (OPEX), Capital Expenditure (CAPEX) or Levelized Cost of Electricity (LCOE); b) a complete and reliable dataset providing wind information over a large geographic area, updated at least once or twice per year; c) wind assessments with a 100 m spatial resolution; d) time series covering a period similar to the lifespan of the wind farm (20-25 years); e) the benefit of receiving reports, rather than raw data, or conveying that information through a geoportal; f) in the definition of maintenance operation windows, EO data that supplements short-range forecasts to improve results at the even shorter time scales that would be useful.
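For illustration, a strongly simplified version of one such indicator, the levelized cost of electricity, is sketched below as discounted lifetime costs divided by discounted lifetime energy production; the discount rate and the example figures are placeholders, not project values.

def lcoe(capex_eur, annual_opex_eur, annual_energy_mwh, lifetime_years=25, discount_rate=0.07):
    # Discounted lifetime costs divided by discounted lifetime energy production (EUR/MWh).
    discount = [(1.0 + discount_rate) ** -year for year in range(1, lifetime_years + 1)]
    total_cost = capex_eur + sum(annual_opex_eur * d for d in discount)
    total_energy = sum(annual_energy_mwh * d for d in discount)
    return total_cost / total_energy

# Placeholder figures only, e.g. lcoe(100e6, 2.5e6, 90_000) returns an LCOE in EUR/MWh.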
After the development and integration of the defined services, the project will deliver their outputs, starting in January 2022, to the Resource Assessment and Operations teams of the WindFloat Atlantic (Portugal) and Moray East (Scotland) offshore wind farms. These demonstrations shall allow users to evaluate the services and provide feedback to inform their further evolution.
The WindFloat Atlantic project was the main driver of the collected user requirements and will be the main test and validation area for ARIA2 services. The project is managed by the Windplus consortium, which is jointly owned by EDP Renewables (54.4%), ENGIE (25%), Repsol (19.4%) and Principle Power Inc. (1.2%). WindFloat Atlantic demonstrated an innovative technology for exploiting wind potential at sea, at depths of more than 40 m, using the experience of the oil and gas industry to support multi-MW wind turbines in offshore applications. The three turbines that comprise the Windplus consortium's wind farm were mounted on floating platforms anchored to the seabed and collectively deliver a total installed capacity of 25 MW (8.3 MW each). Additionally, this technology has great advantages that make it more accessible and affordable, including its assembly by standard onshore cranes on dry land (at the port) and the use of common maritime transportation methods, such as tugboats, instead of expensive offshore installation vessels.
WindFloat Atlantic currently relies on in situ sensors that collect data from the platforms and the wind turbines; these lack spatial coverage since they only reflect point conditions, which the user considers one of the important aspects of data collection and analysis performance, along with response and update times, accuracy and availability. In addition, meteorological forecasts from a private provider are used to manage daily maintenance operations. This presents an opportunity for satellite-derived information, as daily operation management was indicated as the main driver defining the requirements for collecting and analysing environmental data. While the WindFloat team is satisfied with the results of its current data practices, considering them able to address its operational requirements (even though not completely), it pointed out the need for software configuration every time a new interface is required as a major limitation on its current access to external data and on data provision from external entities.
While the project is now in the operation and maintenance phase and mostly needs site-specific data (i.e. at the wind farm location), the WindFloat team is interested in the full range of ARIA2 services, including those for site assessment, which will be compared with its current methods. Delivery of the information through a web-based geoportal and in geographic information analysis formats was preferred for the solution implementation. Co-design has also indicated that ARIA2's regional approach could greatly improve users' ability to discover and analyse EO information in an agile and uncomplicated way.
A second pilot demonstration shall be performed within the Moray East offshore wind farm, which comprises a total of 100 turbines for a total installed capacity of 950 MW, located some 22 km off Aberdeenshire in northern Scotland. The design and construction of Moray East, which had its last turbine installed in September 2021, represents a step change in both cost (around EUR 67/MWh) and scale, as it will be the largest wind farm in Scotland (and second largest in the UK) when it starts operations in 2022. In this case only services related to the operational phase will be demonstrated and evaluated by the Moray East team: Long Term Operations Windows and Short Term Operations Risk.
During the first half of 2022, ARIA2 will deliver its services operationally to both the WindFloat Atlantic and Moray East wind farm projects. A detailed assessment of service quality and uptake will be carried out against the user requirements provided by the offshore wind energy sector, and the ability of the regional approach to EO-based services to meet those requirements will be evaluated based on this first set of results.
An influx of wastewater or stormwater can have harmful effects on coastal ecosystems. It also poses a risk to human health if the effluent or runoff enters the ocean near beaches or other popular recreational areas [1]. Identifying and mapping the location and extent of such plumes are key aspects of monitoring the pollution of coastlines. Until recently, water quality was mainly monitored by in situ sampling, which is infrequent, strenuous and expensive. Therefore, remote sensing could provide a means to fill the temporal and spatial gaps left by field measurements [2].
When using remote sensing techniques to detect wastewater or stormwater plumes, imagery from satellite-borne Synthetic Aperture Radar (SAR) or optical sensors has proved to be most useful [1,5,7].
VV-polarized SAR imagery provides high-resolution (< 100 m) active microwave observations of sea-surface roughness (SSR). SSR is influenced by wind, interactions of waves and currents, and the presence of surfactants on the ocean surface. Surfactant films, such as oil slicks and algae in nutrient plumes, dampen capillary and small gravity waves and thus reduce radar backscatter. Observed variability in SSR can differentiate nutrient plumes from the surrounding ocean because of their high surfactant content [3]. Nutrient plumes should therefore appear darker on SAR imagery than the clean, open ocean. However, at very low wind speeds (< 2-3 m/s) there is very little backscatter, so surfactants cannot be distinguished from the smooth ocean and the SAR imagery looks uniformly dark. At high wind speeds (> 7 m/s) the surfactants are dispersed and mixed into the upper ocean, leading to large bright areas on the SAR imagery [1,6].
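A minimal sketch of how these wind-speed constraints could be applied when screening VV backscatter for plume candidates is given below; the wind-speed limits follow the ranges quoted above, while the darkness threshold of -22 dB is an illustrative assumption rather than a value taken from the cited studies.

import numpy as np

def plume_candidates(sigma0_db, wind_speed_ms, dark_threshold_db=-22.0, wind_min=3.0, wind_max=7.0):
    # Candidate surfactant/plume pixels: dark VV backscatter, but only where the
    # wind speed allows surfactant signatures to be distinguished from the open ocean.
    valid_wind = (wind_speed_ms >= wind_min) & (wind_speed_ms <= wind_max)
    dark = sigma0_db < dark_threshold_db
    return valid_wind & dark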
Satellites with optical sensors image the Earth in the visible, near-infrared (NIR) and short-wave infrared (SWIR) range. Upwelling wastewater plumes entrain colder bottom water to the surface, resulting in a lower sea surface temperature (SST) than that of the ambient ocean. Nutrient plumes include suspended solids (SS), coloured dissolved organic matter (CDOM) and nutrients, which are measurable by optical sensors as they alter the inherent optical properties (IOPs) of the water. Chlorophyll-a (Chl-a) is used as a proxy for phytoplankton biomass in the surface ocean and is strongly affected by the amount of nutrients in the water. Thus, SST, SS, CDOM and Chl-a, measured by optical sensors, give an indication of the plume's location and extent [2, 4].
The Atlantic Regional Initiative is one of ESA’s Regional Initiatives under development in the Earth Observation Programmes Directorate. Its objective is to work with end-user communities, in cooperation with national and regional authorities, to develop beneficial Earth observation (EO) applications and services within three priority domains: Blue Economy, Renewable Energy (Offshore Wind) and Atlantic Cities and Ports. In the project “Atlantic cities and ports”, Deimos is analysing, with the University of Portsmouth, the possibility of using Sentinel-2 and Planet-derived indicators with machine learning to detect wastewater plumes in the Solent region (south of England), correlating this information with wastewater discharge events (generally due to storms). These indicators include chlorophyll-a concentration, suspended matter concentration and turbidity, which together help to identify the presence, extent, duration and behaviour of sewage plumes in intertidal areas. The objective is to demonstrate that the platform supports the analysis of past discharge events for the touristic periods (April-August) of the years 2017-2019. A definition of triggering thresholds for near-real-time alarm generation on new events will also be attempted. Such an alarm would be helpful to warn swimmers, fishers, or actors involved in aquaculture activities; it is however challenging because of the current delays between satellite data acquisition and the issue of the warning.
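As a rough sketch of the kind of indicator computation involved, the example below derives a chlorophyll-a proxy (a normalized difference of the red-edge and red bands) and a simple red-band turbidity proxy from Sentinel-2 surface reflectance arrays, and flags possible plume pixels; the band choices and thresholds are illustrative assumptions, not the algorithm implemented in the project.

import numpy as np

def chl_proxy(b05, b04):
    # Normalized difference chlorophyll index from red-edge (B05) and red (B04) reflectance.
    return (b05 - b04) / (b05 + b04 + 1e-6)

def plume_flag(b05, b04, chl_threshold=0.1, turbidity_threshold=0.03):
    # Flag pixels with an elevated chlorophyll proxy or elevated red reflectance (turbidity proxy).
    # Thresholds are illustrative placeholders, to be tuned against known discharge events.
    return (chl_proxy(b05, b04) > chl_threshold) | (b04 > turbidity_threshold)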
References:
[1] Holt, Benjamin, Trinh, Rebecca, & Gierach, Michelle. (2017). Stormwater runoff plumes in the Southern California Bight: A comparison study with SAR and MODIS imagery. Marine pollution bulletin, 118(1-2), 141–154. https://doi.org/10.1016/j.marpolbul.2017.02.040
[2] Gierach, Michelle & Holt, Benjamin & Trinh, Rebecca & Pan, B. Jack & Rains, Christine. (2016). Satellite detection of wastewater diversion plumes in Southern California. Estuarine, Coastal and Shelf Science. 186. 10.1016/j.ecss.2016.10.012.
[3] Svejkovsky, Jan & Jones, Burton. (2001). Satellite imagery detects coastal stormwater and sewage runoff. Eos, Transactions American Geophysical Union. 82. 621-621. 10.1029/01EO00357.
[4] Trinh, Rebecca & Fichot, Cedric & Gierach, Michelle & Holt, Benjamin & Malakar, Nabin & Hulley, Glynn & Smith, Jayme. (2017). Application of Landsat 8 for Monitoring Impacts of Wastewater Discharge on Coastal Water Quality. Frontiers in Marine Science. 4. 329. 10.3389/fmars.2017.00329.
[5] Gholizadeh, M., Melesse, A., & Reddi, L. (2016). A Comprehensive Review on Water Quality Parameters Estimation Using Remote Sensing Techniques. Sensors (Basel, Switzerland), 16.
[6] Digiacomo, P. M., Washburn, L., Holt, B., & Jones, B. H. (2004). Coastal pollution hazards in southern California observed by SAR imagery: stormwater plumes, wastewater plumes, and natural hydrocarbon seeps. Marine pollution bulletin, 49(11-12), 1013–1024. https://doi.org/10.1016/j.marpolbul.2004.07.016
[7] Gancheva, I., Peneva., E., Slabakova, V. (2021). Detecting the surface signature of riverine and effluent plumes along the Bulgarian Black Sea coast using satellite data, Remote Sens. 2021, 13(20), 4094; https://doi.org/10.3390/rs13204094
Through initiatives such as the European Commission and European Space Agency’s Copernicus program, open access tools, and widespread access to computing facilities such as cloud computing platforms, Earth Observation (EO) is made available as an analysis and decision-making tool to broad user communities in science, society, and industry. Through these developments the circle of users is expanding at a fast pace, and thereby the need for knowledge transfer is growing rapidly.
With the objective of promoting global networking in EO education, the EO Connect project (funded by the German Ministry of Education and Research) addresses this demand around the United Nations’ Sustainable Development Goal 2 (SDG#2): Zero Hunger. EO Connect currently brings together around 20 different stakeholders from universities, research institutions, space agencies, and international organizations in a joint effort towards a Zero Hunger MOOC on the contribution of Earth Observation to the mitigation of world hunger.
The Zero Hunger MOOC will transfer the expert knowledge of stakeholders from relevant fields to users through the dedicated online EO education platform EO College. Involving theoretical background knowledge and practical examples, as well as hands-on tutorials, the MOOC covers a wide range of topics, methods and resources around SDG#2 and addresses challenges arising from the different food supply practices of agriculture, livestock, forestry and fishery. However, knowledge about their EO-based solutions and about the role of EO in achieving zero hunger must be communicated to a wide variety of learners with different skills and objectives. Therefore, modern learning concepts such as gamification and adaptive learning with micro contents and drip feeding are implemented. This way, the MOOC content is delivered continuously in small chunks and learners are enabled to follow individual learning paths suiting their own needs. Thus, the Zero Hunger MOOC achieves effective knowledge transfer in spite of the complex setting of diverse topics and learner backgrounds.
With the launch of a sequence of Massive Open Online Courses (MOOCs) published in the course of the ESA-financed 'MOOC4Land' project, the EO College provided innovative learning resources to master a variety of land surface remote sensing applications and to provide essential theoretical knowledge of Earth observation principles. The project aims at supporting capacity-building activities in developing countries by focusing practical courses on the respective areas of interest. In cooperation with our partners from institutions in Slovenia, the Netherlands, the United Kingdom, and Germany, we created a two-component course outline providing the remote sensing knowledge necessary for modern land surface-based real-life scenarios.
In a first, theory-based MOOC we provided fundamental eLearning materials for understanding the underlying principles of optical and microwave remote sensing. There, we presented and explained topics ranging from the essentials of image acquisition through data (pre-)processing to image analysis techniques, enabling users to make full use of freely available remote sensing data products. Secondly, we published a series of courses that cover thematic applications of land surface remote sensing, from urban land cover through land degradation to the mapping of wetlands. These less extensive but more practical MOOCs cover a variety of environmentally relevant topics and can thus provide substantial support for capacity-building in countries where this knowledge is needed, through an open-data and open-knowledge policy. These practically focussed courses were supported by external experts from various institutions to provide the best possible hands-on experience for students.
While offering an overview of open-source algorithms and working with freely available data sets, the MOOCs published in the course of the MOOC4Land project provide a substantial basis for students, public authorities, and other interested parties to work on relevant environmental challenges over land surfaces. Through the use of innovative features such as high-quality tutorial videos, interactive animations and explorable H5P elements, in combination with peer learning and forum-based support functionalities, these courses provide an important addition to remote sensing knowledge transfer on a global level. With our focus on African countries, we aimed at delivering remote sensing-based solutions to global problems at a local level. The results of the course sequence will be statistically analyzed to provide more insight into user needs and to further improve the quality of Earth observation-based eLearning.
About every six days, most areas on Earth can be imaged by the European Space Agency's Sentinel-1 SAR mission. In the coming years, such systems may reveal tiny changes on every patch of earth daily. Unlike optical technology, which produces the best images on sunny days, Sentinel-1 takes its snapshots actively using radar, penetrating clouds and working at night. Comparing SAR images acquired from the same position at different times can reveal surface movements with millimetre accuracy. The technique is known as interferometric SAR, or InSAR, and it has many real-world applications. The launch of the twin Sentinel-1 satellites, in 2014 and 2016, provided regular, freely available data to observe long-term changes. For example, Norway used Sentinel-1 data to create a deformation map of the entire country in order to identify rockslide hazards; in doing so, it was discovered that parts of the train station in Oslo were sinking. Private companies are also investing in SAR technology. Iceye, a Finnish space start-up, can capture snapshots of most places on Earth several times per day and produce images with 50-centimetre or better resolution.
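As a rough illustration of the measurement principle, the sketch below converts unwrapped interferometric phase to line-of-sight displacement using the standard repeat-pass relation d = -λφ/(4π); the wavelength used is Sentinel-1's C-band value (about 5.55 cm) and the sign convention is an assumption.

import numpy as np

SENTINEL1_WAVELENGTH_M = 0.0555  # Sentinel-1 C-band wavelength, approximately 5.55 cm

def los_displacement_m(unwrapped_phase_rad):
    # Repeat-pass InSAR: one radian of unwrapped phase corresponds to roughly 4.4 mm
    # of line-of-sight motion at C-band; the sign convention is an assumption.
    return -SENTINEL1_WAVELENGTH_M * unwrapped_phase_rad / (4.0 * np.pi)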
The more people who know SAR, the more real-world applications can be conceived. Although many SAR learning resources are available, the technique is difficult to understand, particularly for newcomers. There is a strong need for an interactive social forum where they can ask for help and discuss everything in the field. As an instructor of the IGARSS 2020 and 2021 tutorials on Radar Interferometry, I received many recommendations from young researchers for creating such an interactive place. To meet this need, I created an open interactive group on Facebook: https://www.facebook.com/groups/radarinterferometry. Of course, I cannot solve all the tasks and problems alone. Many InSARists have joined to develop this initiative since July 2021, for example by providing reasonable replies to questions. I would be more than happy if you joined and contributed actively after reading this abstract, for example by sharing your practical lessons and experiences. Finally, for demonstration, I extracted part of my lecture on “Persistent Scatterer InSAR time series”, made it available on my YouTube channel (https://www.youtube.com/DinhHoTongMinh), and shared it with the group.
Educating the public about the benefits and potential of remote sensing is becoming more important, as few people are aware of how this technology already affects their everyday lives and how it is going to transform the way we see our planet as more and better data from space are collected. There are many ways to enhance general knowledge of remote sensing principles, including the use of video resources that can be adapted to a variety of teaching scenarios.
Learning videos can be concise learning packages that offer a wide range of possibilities when it comes to presenting topics in an up-to-date way and disseminating them in a variety of learning environments. Videos are already a primary source of information in our society; however, few examples convey earth observation topics. Over recent decades, numerous studies established guidelines on how educational videos can be successful. We created a workflow that makes use of these guidelines in the creation of learning videos covering the basic concepts of earth observation. The target groups for these videos are secondary students as well as anyone interested in learning about earth observation. Two of those videos, one on earth observation in general and one on the basics of the electromagnetic spectrum, were used in this research. To test whether or not the workflow leads to effective learning videos and to compare them to traditional text and illustration material derived from those videos, a pre-test/post-test study was undertaken focusing on German pupils in their final year at secondary school as well as first-semester university students. Due to the special circumstances faced during the COVID-19 crisis, this experimental setup used a combination of online questionnaire tools and a web environment. The results show that both methods were effective resources that led to a significant increase in knowledge—raising the test results by 21% for the video and 13% for the text and illustration group. The presentation will illustrate the workflow and show the results of the online study.
In order to bring satellite remote sensing data, such as the ESA Copernicus data and services, permanently and widely into practical use, it is necessary to provide inexperienced users with basic knowledge. This enables them to understand the potential of the data and thus to develop acceptance for new technologies and methods. Within the framework of the knowledge transfer project SAPIENS (Satellite Data for Planning, Industry, Energy and Nature Conservation), funded by the Helmholtz Association, target group-oriented training courses are being developed. In close exchange with users, training needs are identified, and basic knowledge is made available in the form of interactive, live digital training formats. Through an engaging teaching style, practical exercises and direct feedback and support during training sessions, SAPIENS aims to enable inexperienced users to work with the data in the long term and to develop fields of application independently.
Satellite remote sensing data are more than just beautiful pictures of our Earth. They are full of information and enable us to better understand the Earth's surface and its processes. This makes satellite data interesting not only for science, but also for users from a wide range of fields such as agriculture and forestry, nature conservation, planning and public authorities. And thanks to the increasingly free data policies of the space agencies, highly up-to-date spatial environmental information, e.g. from the Copernicus satellite program of the European Space Agency (ESA), is available free of charge for everyone.
Although the amount of freely available remote sensing data is growing, these data are still poorly integrated into everyday work and decision-making processes. Skepticism and a lack of understanding of the potential and actual applications of satellite remote sensing data still seem to be quite common. Furthermore, the barrier to using satellite data and integrating it into work processes is high for many inexperienced users: neither access to the data nor its use is self-explanatory. Existing training materials are mostly at a scientific level and are usually written in English. The situation is similar for digital training opportunities. The number of digital training courses in the field of satellite remote sensing and Earth observation is steadily increasing, a development pushed by the Covid-19 pandemic. However, the content is often prepared by researchers for experts and motivated by research topics, but not necessarily by user needs.
To close this gap between scientists on the one hand and end users on the other, the Helmholtz Association is funding the knowledge transfer project SAPIENS. Within the framework of the project, target group-oriented German-language training courses are being developed. In close exchange with users, training needs are identified and the knowledge is made available in the form of interactive digital training formats. An agile and modular workshop format was developed for this purpose. Within a workshop module, the learning content is conveyed in small learning blocks, so knowledge is imparted in manageable portions and the participants are not overloaded with new content. The core of the trainings is hands-on exercises in which the participants experience the practical handling of the data. In order to ensure intensive supervision during these exercises, the training courses take place in small groups (~15 participants) with a minimum of 3 supervisors.
So far, four SAPIENS modules have been developed:
- Basics & Use Cases: Module 1 explains the basics of multispectral remote sensing (using the example of Sentinel-2) and the potential of satellite data is illustrated by real application examples using expert interviews.
- Visualization and Analyses: Module 2 is about the elementary use of a GIS for the visualization and analysis of multispectral satellite data (Sentinel-2).
- Data Access: Module 3 introduces data portals and explains how to search for, download and open Sentinel-2 data.
- Change Analyses: Module 4 shows which data are suitable for change analyses and which tools can be used to make changes visible over different time spans.
For each module, a detailed manual and video material are offered to facilitate independent learning beyond the live online trainings. The live online trainings and all materials are free of charge and available to all interested persons.
At the time of submitting this abstract (mid-November 2021), we have conducted modules 1, 2 and 3, and the demand for the trainings is high. The first training series in November 2021 is fully booked (> 200 registrations), and further training courses will be offered in spring next year. First statistics show that mainly people from environmental authorities and nature conservation foundations (24%) participate in the trainings, but also many actors from forestry (20%) and agriculture (13%). In addition, the user group 'Other' (25%) will be interesting to examine in more detail once the trainings have been completed.
From our point of view, the great demand for trainings in the context of satellite remote sensing shows an increasing interest in innovative methods and at the same time practical know-how, which must be offered in a target group-specific way.
In addition to SAPIENS, there are other remote sensing knowledge transfer projects at the GFZ with a focus on digital training formats for external target groups. More information about our projects and education initiatives can be found on the FERN.Lern web platform (https://fernlern.gfz-potsdam.de/).
Nowadays, satellite monitoring and geospatial intelligence are drivers of digital transformation and economic development all over the world. At the same time, in Ukraine there are no higher education programs dealing with Earth observation data science or machine and deep learning on remote sensing data. In 2019, the Space Research Institute, in cooperation with the Department of Mathematical Modeling and Data Analysis (MMDA department) of the National Technical University of Ukraine “Kyiv Polytechnic Institute” (NTUU “KPI”), joined the Copernicus Academy network for deeper involvement in educational activities related to the Copernicus program. As a Copernicus Academy laboratory, we contribute to international scientific and innovation programs and provide trainings and master classes for students, regional administrations and teachers. Most of our projects deal with machine learning on satellite and auxiliary data and with satellite monitoring applications, and require deep knowledge of mathematics, machine learning and data analysis.
To facilitate the involvement of students in our projects, in 2021 the MMDA department established a certificate program, “Models and methods of intellectual analysis of heterogeneous data”, for master students of the Applied Mathematics specialty (https://mmda.ipt.kpi.ua/en/certificate-program-models-and-methods-of-intellectual-analysis-of-heterogeneous-data/). It includes big geospatial data analysis, geospatial information technologies and deep learning for satellite and heterogeneous data. It allows students to dive into the Earth observation domain and bridge the gap between applied mathematics and satellite monitoring. Students do their master's research within international projects, in particular the Horizon 2020 e-shape project or the NASA project “High-Impact Hot Spots of Land Cover Land Use Change: Ukraine and Neighboring Countries”. They develop machine learning models for different applications based on Copernicus data and implement them on different cloud platforms, such as GEE, CREODIAS and AWS. Some of them develop startup projects based on this research.
For the further development of our program and better motivation of our students, we are interested in collaborating with similar programs on academic mobility of students and professors, and we are looking for innovative educational forms and resources.
Science lessons should not only convey learning content in an understandable and well-structured way, but also motivate and arouse interest, create relevance through references to authentic contexts, initiate an adequate understanding of science and, last but not least, promote young scientists. Yet, as a number of studies in recent years have shown, this does not always succeed in the desired way. Extracurricular science labs at universities and research institutions are intended to complement and enrich school lessons in this regard. These facilities have become quite common in German-speaking countries and have been shown to promote interest in and understanding of the natural sciences. In this context, the authenticity that can be experienced there in a special way is often mentioned as an important characteristic influencing the success of these extracurricular learning locations. However, there is a need for research into which design aspects of authentic learning settings - especially extracurricular science labs - are beneficial for the stated goals.
The quasi-experimental intervention study described below had a 2x2-factorial pre-post-follow-up design with a control group. The learning setting took place at different authentic learning locations using different authentic laboratory equipment. For this learning setting, a workshop on optical environmental remote sensing was developed and carried out with a total of 166 students in grade ten. An extracurricular science lab of the German Aerospace Center (DLR) was chosen as the authentic learning location, and the students' school as the less authentic one. On the one hand, high-end laboratory equipment, generally only available to research institutions, was used at both locations. On the other hand - as a less authentic alternative - simplified, low-cost devices were used that are also available to schools. These were, in particular, newly developed sensors for remote sensing of vegetation that are suitable for teaching purposes. Using questionnaires, the effects of the learning settings on situational interest, perceived authenticity and content relevance, physics-related job expectations and specific specialist knowledge were examined.
Regardless of the place of learning (extracurricular science lab or school), working with the high-end laboratory devices led to a significantly higher perception of authenticity, with a large effect size. In contrast, content relevance was perceived more strongly in the extracurricular science lab than at school, regardless of the laboratory equipment (high-end or low-cost); the difference was statistically significant with a medium effect size. For situational epistemic interest, both factors, learning location and laboratory equipment, proved statistically significant. This interest component, which among other things reflects the desire to learn more about a topic, was greatest in the group that worked with high-end laboratory equipment at the extracurricular science lab. After the intervention, regardless of the place of learning, a significant increase in physics-related job expectations was observed in the groups with the high-end laboratory equipment, although the effect was small; only in the group at the extracurricular science lab did it remain statistically significant after 6-8 weeks. In all treatment groups, a significant increase in specialist knowledge was demonstrated with a large effect size, which largely persisted even after 6-8 weeks. Overall, this increase was independent of the place of learning and the laboratory equipment. With a view to subject knowledge applicable in simple contexts, however, significant differences with a medium effect size became apparent: this knowledge was more pronounced in the groups with the less authentic low-cost laboratory devices than in the group at the extracurricular science lab with the high-end laboratory devices.
The results support the claim that the particular authenticity of the learning setting plays an important role in the success of extracurricular science labs. The situational epistemic interest and the perceived content relevance are seen in close connection with the development of a long-term individual interest. Accordingly, a promotion of these variables is an indication of a particularly sustainable and long-term effect, which is apparently more to be expected in a highly authentic learning setting than in a less authentic one. The physics-related job expectation, which is usually rather weak among German students, can also benefit from a high level of authenticity, especially of the laboratory equipment used. In order to increase learning success, however, simpler instruments do not necessarily have to be a disadvantage; they can even represent an advantage in terms of the applicable specialist knowledge learned. The newly developed low-cost remote sensing sensors used in the study can also be used in schools without any problems, so the visit to the extracurricular science lab can be well embedded in long-term teaching projects. The sensors were calibrated with a field spectrometer and the data were then validated using highly accurate satellite data; the data agree to a high level of accuracy and are suitable for the intended educational purpose. In the context of the present work, tools for data analysis and teaching materials in the form of tutorials and workbooks were also developed. Together with the camera sensors, they make remote sensing of vegetation based on the normalized difference vegetation index (NDVI) accessible in physics lessons in one's own environment for the first time.
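For reference, the index the low-cost sensors make accessible is the normalized difference of near-infrared and red reflectance; a minimal computation is sketched below, with the input arrays standing in as placeholders for the two camera channels.

import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red), computed per pixel; the small epsilon avoids division by zero.
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-6)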
EOCare: a Cloud-based solution to support EO training activities
Brice Mora1, Eric Guzzonato1, Sylvie Remondière2, Francesco Palazzo2
1 CS GROUP, France
2 Serco, Italy
The relevance of information derived from Earth observation (EO) data has been demonstrated for a wide scope of applications, such as those supporting United Nations conventions or some of the Sustainable Development Goals. In parallel, the need for an EO-skilled workforce has steadily grown. However, training people in the field of EO requires a powerful and up-to-date processing environment and software suite. Such needs have become more stringent with the advent of the Sentinel constellations, which have broadened the range of applications thanks to the wealth of data they provide. Concomitantly, new artificial intelligence algorithms have demonstrated their potential to extract meaningful information from large time-series datasets over large areas. Furthermore, new datasets and methods become available at an unprecedented pace. As a result, developing and maintaining up-to-date EO training curricula and facilities has become more complex.
In this context, CS GROUP and its partner Serco Italy propose the EOCare offer, which consists of the provision of resources to perform training activities in the field of EO image processing. The EOCare service is operated by a consortium with proven experience in EO training, thanks to its experts and efficient ICT environments. The EOCare offer is twofold. The first option is support to external training activities: the customer is responsible for organizing the training event, while EOCare provides only the virtual environments and ICT support. The second option is an end-to-end solution for either face-to-face or remote training events: EOCare provides not only the ICT resources and support but also handles the curriculum of the session and provides trainers.
CS GROUP has a strong and recognized expertise in big data and distributed cloud computing technologies, developing its own suite of tools based on open-source technologies. This experience also encompasses Helpdesk management. Serco Italy has extensive experience in supporting EO community scientists and pre-operational users to transfer EO expertise and knowledge through innovative training programmes. Both partners have been working together on operating the Copernicus Research and User Support (RUS) Service on behalf of the European Commission and the European Space Agency.
Given the health crisis the world is currently facing, the stakes for training and capacity development are high. Across Europe, thousands of people from the academic, research and teaching world found themselves suddenly confined from one day to the next, without always having access to the technological and scientific means that would allow them to continue their work, teaching and other distance learning activities. In this context, the various services offered by EOCare have already proved to be effective alternatives to overcome this isolation and allow everyone to continue exploring the Copernicus space programme.
The objective of this communication is to present the different aspects of the EOCare service, particularly the training offer (range of EO datasets and applications covered), the Cloud-based virtual environments that can be hosted on any DIAS or major Cloud provider, and how such environments can be tailored to the needs of the training events (software and tool suite, processing resources, working environments etc.).
In today’s rapidly developing earth observation ecosystem, new data sources, software and platforms mean that students and practitioners have to continually learn how to use them. Google Earth Engine is one such geospatial cloud-based platform which has attracted much user interest for research, nonprofit and commercial applications. However, users may have to first learn how to code in JavaScript or Python, as well as learn how to modify familiar workflows for a client-server and parallel computing paradigm. Learning communities where users share knowledge, mentor each other and help each other troubleshoot are useful for enabling learning-while-doing, especially when a steep learning curve is involved.
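As a small, hypothetical illustration of the client-server paradigm mentioned above, the Python sketch below only builds a server-side computation graph (a Sentinel-2 median composite and an NDVI band) and triggers computation when a value is explicitly requested; the coordinates, dates and collection choice are placeholders.

import ee

ee.Initialize()  # assumes the user has already authenticated

point = ee.Geometry.Point([8.55, 47.37])
composite = (ee.ImageCollection('COPERNICUS/S2_SR')
             .filterBounds(point)
             .filterDate('2021-06-01', '2021-09-01')
             .median())                                   # deferred: nothing is computed yet
ndvi = composite.normalizedDifference(['B8', 'B4'])

# Computation runs on the server only when a result is requested:
print(ndvi.reduceRegion(ee.Reducer.mean(), point.buffer(500), 10).getInfo())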
This talk introduces two examples of fostering user-led learning communities around Earth Engine. The first example is an in-person user group at Yale University called EE@Yale (https://eeyale.github.io), which consisted of weekly to biweekly meetups where Earth Engine users from different departments shared talks and helped each other with learning how to use the platform. For example, some sessions at EE@Yale were focused on new developments in Earth Engine such as the ability to require script modules and create applications with a user interface. After the talks, users were able to try out these new features together while troubleshooting as a group. These meetups ran between 2017 and 2020, moving into a virtual format in 2020.
The second example is a series of Earth Engine Virtual Meetups (https://sabrinaszeto.com/earth-engine-virtual-meetup-calendar) held during the pandemic which saw an international group of users come together for an hour once a month for user-led talks and open knowledge sharing. After the guest speaker has shared their presentation, an open question-and-answer session is facilitated by the moderator. The sessions are also livestreamed, recorded and uploaded to a YouTube channel. The virtual meetups have between 10 and 100 attendees each time and have also played a role in allowing for networking and sharing work during the pandemic.
In addition to introducing these two learning communities and how they were formed, this talk will also share best practices and lessons learned regarding stakeholder engagement and co-creation, motivating users and fostering a culture of learning from each other. Several templates for meetup events will also be described, including lightning talks (short talks that last up to 5 minutes), co-working sessions, mini-hackathons as well as the sessions with a more traditional “lecture followed by question and answer” format. A behind-the-scenes look at useful tools and resources such as graphic design software and polling websites will also be shared. The lessons learned from these user communities can be applied to facilitating working groups or meetups around other earth observation tools or data.
InnEO'Space PhD: Preparing Young Researchers for a successful career on Earth Observation applications
Josiane Mothe1, Aurélie Baker2, Valentina Castello3, Valentina Ciaccio3, Fabio Del Frate4, Davide De Santis4, Mihai Ivanovici5, Johan Leduc1, Daniela Necsoi5, Aude Nzeh Ndong2, Nathalie Neptune1, Maude Perier-Camby2, Antoaneta Petrache1, Marco Recchioni3, Zia Ullah1, Mihaela Voinea5
1 Université de Toulouse, IRIT CNRS UMR5505, Toulouse, France
2 Aérospace Valley, Toulouse, France
3 ReMedia Group, Rome, Italy
4 University of Rome “Tor Vergata”, Rome, Italy
5 Transilvania University of Brasov, Romania
E-mail: Josiane.Mothe@irit.fr
Abstract. Thanks to both data availability and advances in Artificial Intelligence (AI) and Machine Learning (ML), the Earth Observation (EO) data market has grown significantly over the past few years. Many markets and industries can now benefit from it and therefore need qualified employees able to manage EO data. The InnEO’Space PhD programme aims to prepare young researchers for a successful career in the EO field through innovative learning methods. The main idea is to develop modernised and transferable PhD courses that enhance cross-domain skills, both technical (AI for EO) and soft skills (entrepreneurship, project management, team building and effective communication), and to raise awareness about employment needs and opportunities in both academia and industry. The means for achieving these objectives are SPOCs (Small Private Online Courses), a specific type of asynchronous e-learning course that allows educational activities to follow the digitalisation requirements (emphasised by the COVID-19 crisis) and to remain accessible whenever and wherever students need them. The instructional strategy is based on the flipped classroom approach, a type of blended learning that combines online educational materials with the physical presence of both teacher and student, focusing on student engagement and fostering interactions. The InnEO’Space PhD SPOCs will be divided into different modules containing micro-units (10-15 minutes each) and will give access to ECTS (European Credit Transfer and Accumulation System) credits for students who complete them.
1/ Introduction
The Copernicus programme1, led by the European Commission and implemented together with the European Space Agency (ESA), has created a major disruption in EO services. It has made it possible to create new services in various downstream markets and industries, such as agriculture, change detection, meteorology, pollution, environment, natural resources and climate change.
The massive data sets coming from those satellites can be analysed and processed efficiently thanks to Artificial Intelligence and Deep Learning. These are part of the reasons why the EO data market is growing and is expected to support more than 12,000 jobs per year2.
Three top European universities in the EO domain (Université Jean Jaurès in Toulouse (FR), through the IRIT laboratory, Universitatea Transilvania din Brasov (RO) and Università degli Studi di Roma “Tor Vergata” (IT)) are gathered within the InnEO’Space PhD European project3, with the objective of helping students improve their employability. The team is completed by a cluster that gathers the main local aerospace economic actors (Aerospace Valley (FR)) and an international company expert in communication and e-learning products in EO (Remedia Italia (IT)).
InnEO’Space PhD aims to develop innovative contents that take employers’ needs into account. FabSpace4 is a recent experience that taught valuable lessons. The Small Private Online Course, being asynchronous, allows educational activities to follow the digitalisation requirements emphasised by the COVID-19 crisis and to remain accessible whenever and wherever students need them. The SPOCs follow the flipped classroom approach, combining online educational materials with the physical presence of both teachers and students, fostering interactions.
1https://www.copernicus.eu/en/about-copernicus/
2Copernicus Market report Nov 2016 (Copernicus.eu/)
3 Grant agreement ID: 101006275 https://inneospace.eu/
4 FabSpace, Grant agreement ID: 101006275, is an innovative network based on EO data analysis within universities https://www.irit.fr/FabSpace/
2/ Skills to be developed
2.1 Entrepreneurship and soft skills
Entrepreneurship and soft skills were addressed first through the InnEO Startech and the InnEO Summer School.
Within InnEO’Space PhD, the Startech pilot is aimed at European PhD students. The Startech concept was first launched by WSL in Belgium in 2012 to help students from engineering schools enhance their entrepreneurial skills, and it was adapted to the Earth Observation case within the H2020 FabSpace 2.0 project. In the InnEO Startech, students work in groups on an innovation idea (e.g., a new application that uses EO data to help monitor gas pollution at sea). Led by several business coaches, participants attended 8 collective coaching sessions online (due to the COVID-19 crisis) on topics such as value proposition, market segments, pitch preparation and customer relationships. At the end of the training, the students pitched their projects in front of a jury of experts (videos are available on the FabSpace YouTube channel).
The InnEO Summer School was designed as a full week comprising both high-level scientific lectures (see below) and sessions developing soft and transferable skills. The target group was PhD students, to help them prepare their future careers in academia or industry.
2.2 AI for EO
At the global scale, climate change is the issue of our time, and we are at a decisive moment when crucial decisions need to be taken about how to modify the various processes causing it. For this topic, Earth Observation can be especially important thanks to the possibility of monitoring the impact of anthropogenic activities on the environment.
EO has gone through a significant evolution in recent decades, due to major advances in sensor and digital technology.
The result is a wealth of information that can be difficult to manage, which makes techniques based on AI more and more important. With these comes an urgent need for experts able to navigate this landscape and provide solutions to the issues connected with the application of AI and ML.
The “AI for EO” skills were provided to the students during the InnEO Summer School. Examples of topics are the latest AI models (ML, Deep Learning) and Remote Sensing with digital EO applications.
3/ Toward SPOCs
The digitalisation of training and educational activities triggered by the digital revolution has accelerated during the pandemic, especially in universities, schools and research centres. It was therefore necessary to analyse and adapt the various existing e-learning methods and tools. This is what brought the research group to focus on the SPOC (Small Private Online Course) method, a particular type of asynchronous e-learning course that is available to students at any time and place, online or offline.
The main criticism levelled against asynchronous e-learning relates to the non-centrality of the student in the learning process; this is particularly clear in MOOCs, although SPOCs can be an exception. The small size of the group, and the fact that the participants belong to the same organisation or follow the same study programme, make it possible to integrate the course into blended learning, with both the collaborative and the cooperative dimensions of learning, and to include moments of individual interaction with the teacher in the learning process.
The InnEO project contains two SPOC courses covering the EO and ML domains, but also soft skills such as team management and project research. The total duration of each SPOC is estimated at around 25 hours, allowing PhD students to acquire 1 ECTS (European Credit Transfer and Accumulation System) credit per course.
Moreover, the instructional strategy follows the flipped classroom approach, a type of blended learning that aims to increase student engagement. The focus here is on the students rather than the teacher, emphasising practice and exchanges during the live classes.
4/ Digital material description
There are 9 modules, each divided into several micro-units. The SPOCs and complementary videos will make up two training courses of 20-25 hours each.
The modules deal with Earth Observation related topics:
• Introduction to earth observation
• Understanding SAR data
• Retrieval of geo-physical parameters using EO data
• Practical work on RGB satellite images
• Visualization of hyperspectral images
• Fractal models for EO image analysis
• Building successful teams
• Effective communication
• Open science courses
5/ Conclusion
EO, AI and ML developments give companies huge opportunities, allowing the creation of new services across a broad range of markets. The survey carried out by InnEO’Space PhD confirmed the need for cross-domain skills between EO and AI. The programme aims to develop e-learning tools and processes such as the SPOCs, and to support students at the start of their careers by teaching them both technical and soft skills to improve their employability.
The Earth Observation Satellite System Design (EOSSD) training course was developed in the frame of the ESA Academy's Training and Learning Programme in collaboration with ARES, the Association of Retired ESA Staff. The course covers the end-to-end design and development process of satellite Earth observation systems and is aimed at M.Sc. and Ph.D. level students in science and engineering. It encompasses system requirements definition, general system architecture, the design engineering process, remote sensing instrumentation design, satellite design, ground segment design, operations concept elaboration, system assembly/integration and verification, the launch campaign, in-orbit validation and an overview of Earth observation data applications. It was first delivered in 2018 at the ESA Academy's Training and Learning Facility in Transinne (Belgium) [1], and was improved and converted into an online format for the 2021 edition due to the Covid-19 pandemic.
Besides a number of improvements brought in as a result of the lessons learnt from the 2018 edition, the major challenge faced by the course experts and instructional designers was to ensure effective assimilation of the lecture material by the students and to enable ample interaction among all participants in an online environment throughout the course. The following two measures were implemented during its elaboration:
(1) Reduction of the course delivery pace by spreading the lectures and student project sessions over 10 half-days as opposed to the 2018 edition which was held in 4.5 days;
(2) Use of online conferencing tools with break-out sessions and chat rooms with different topical areas for questions and answers, with the latter made available throughout the whole duration of the course (12 days including a weekend).
As a result of these measures and improvements, the 2021 online edition received higher appreciation scores from the students and achieved a more effective learning experience, reflected in the higher quality of the student project outputs compared with the 2018 edition.
The presentation will concentrate on the approach and logic adopted by the instructional team for elaborating the online edition. Lessons learnt from the experience of online delivery, both positive and negative, will be discussed. Prompted by the overall positive experience, a hybrid (partly online and partly in-person) course format may be considered for the next edition in 2023, as well as a remote e-learning course in the future.
Reference:
[1] C.C. Lin et al., “Sentinel-3 Next Generation Strawman Mission Design by ESA Academy Students,” Living Planet Symposium 2019, Milan, Italy, May 2019.
As part of a coordinated knowledge exchange activity, data visualisation techniques have been combined with computer graphics and engaging storytelling to communicate the work of ESA’s Climate Change Initiative (CCI) to the public, policy makers and students. The CCI is developing key datasets, based on the best available Earth observation technologies, for use in understanding changes to the Earth’s climate. Twenty-two essential climate variables (ECVs) are being developed and made available to climate modelers, with data stretching back in some cases more than forty years.
As well as producing clear and easy-to-understand data visualisations in the form of 2D maps, 3D computer graphics and linear animations, a key goal of the CCI knowledge exchange activity is to present the very long time-sequences of ECV data to the public on an interactive virtual globe. This allows the user to explore the climate data at their own pace, compare related climate variables, and discover for themselves patterns, relationships, climate events and trends. In the first phase of the project a prototype visualisation tool was developed for desktop computers and a digital book app was completed for iPad and Android tablets. In the current phase a web app has been developed to simplify deployment and update, and to reach a wider audience. The app’s interactive data viewer is used alongside engaging text and photos to tell compelling stories about Earth’s climate and the way it is changing.
Previous work in this field has proved to be useful in education, and it is a goal of the current activity to tailor the visualisation products for use in formal educational settings. A curriculum analysis has therefore been conducted across Europe and across levels, from primary to tertiary education, and used to guide the development of the project’s story and animation content. The project team includes education specialists from ITC and Museon in the Netherlands and the National Centre for Earth Observation in the UK, with broader user requirements coordinated by the German Meteorological Service.
After matching a shortlist of 30 story suggestions against climate-related curriculum topics, 12 stories were developed for the Climate from Space web app. Similarly, five 3-minute animations were produced from a shortlist of 10. A narrative approach was taken for the stories in the app, rooted in human experience, with the focus not so much on the data, but on telling “Earth system stories”. The Earth observation data are related to components of the Earth system, such as the carbon cycle and the water cycle, and to the challenges facing society due to climate change. In both the interactive data viewer and the stand-alone linear animations, care is taken to follow best practice for scientific data visualisation and science communication. In the animations, visualisations of the CCI's global climate data products are supplemented by computer graphic representations of microscopic processes, such as aerosols seeding cloud formation, and by conceptual illustrations of, for example, the carbon budget and the volume of ice Earth loses each year.
The web app and animations are included in lesson plans developed for primary and secondary education, and in a specially-developed MOOC. They are also presented as exhibits in museums and science centres. The time-sequence maps are made available outside the web app for use on custom display hardware such as touch-tables and spherical displays. ESA and the UK Space Agency have used the material in this way in their own exhibition spaces and in public events such as the annual UN climate summits. The animations are published on ESA’s website and social media channels and made available for use by broadcasters. Future work will look at tighter integration between the data viewer and the stories, improved performance over the web, and making the app, stories and animations more suitable for use in the museum sector.
Planet operates approximately 200 satellites in low Earth orbit, capturing imagery over the whole Earth landmass, coral atolls, and nearshore coastal environment on a near-daily basis. Our constellations capture approximately 25 TB of imagery per day. Across the scientific community, from universities to ESA, NASA, and the German Aerospace Centre, students and researchers have used Planet data to publish over 1,500 journal and conference papers.
Our PlanetScope constellation collects near-daily coverage of Earth’s landmass at 3–5 m resolution in 4 or 8 VNIR bands, while our 21-satellite SkySat constellation acquires 50 cm imagery across 5 VNIR bands via a traditional tasking model. The archive for these two constellations extends back to 2014. Through 2019, we also operated the 6.5 m resolution RapidEye constellation, with an archive stretching back to 2009.
To facilitate Earth science research, Planet offers multiple data access pathways for the European community:
1. Planet’s Education and Research (E&R) Program: Through our Education and Research Program, Planet provides university access to PlanetScope and RapidEye imagery (up to 5,000 square kilometers per month) for non-commercial research applications, upon application. Our E&R Program currently hosts over 10,000 users across 70 countries and more than 1,000 universities. Any university-affiliated student, faculty member or researcher may apply to the E&R Program for limited, non-commercial access. A university email address is required, and nonprofit and government employees are not eligible. To access this data source, users can apply at go.planet.com/research.
2. ESA Earthnet Program: Any researcher, including nonprofit researchers and those at government institutions or non-commercial early adopters, may apply for access to PlanetScope, RapidEye and SkySat imagery through the European Space Agency Category 1 Portal.
3. RapidEye Science Archive (RESA): Any German researcher, including nonprofit researchers and those at government institutions, may apply for access to PlanetScope, RapidEye and SkySat imagery through RESA. More information is available (in German) at: www.planet.com/resa/resa-registrierung/
Young generations represent a juncture between understanding the potentially hazardous impact of climate change on society and local communities. In this frame, STEAM education in schools has proved its ability to nurture students’ curiosity and cognitive resources, providing them with the right tools to understand the world’s complexity and to face the challenges that the current times are posing, climate change among many others. However, STEAM subjects are not always part of educational curricula: according to the OECD Programme for International Student Assessment (PISA) report 2018, more than 20% of pupils in the European Union have insufficient proficiency in reading, mathematics or science. Such a lack of diversity in the educational offer may decrease pupils’ motivation to pursue STEAM academic paths, often perceived as highly theoretical and complex.
The improvement of STEAM education in secondary schools is the core objective of the Erasmus+ funded project “GIS4Schools”, which aims at promoting a new innovative approach to foster the teaching of STEAM subjects in secondary schools across four European countries: Italy, Portugal, Romania and Spain. The project intends to introduce the teaching of GIS and satellite technologies for Earth Observation - rarely adopted in secondary schools - and to apply them to the thematic area of Climate Change. GIS4Schools will combine Inquiry-Based Science Education (IBSE) and Problem-Based Learning (PBL) approaches with an interdisciplinary contextualisation of the science topic. Pupils will actively contribute to the co-creation of new knowledge by assessing with GIS tools the impacts of specific climate challenges affecting their local community, thanks to Copernicus products, Sentinel satellite-derived information and other ancillary data.
The paper will illustrate the genesis of the project and the process leading to the development of the training packages for secondary school teachers and pupils. Furthermore, the paper will explore the methodology and pedagogic approach adopted to transfer new knowledge from teachers to pupils and how this can impact their perception of STEAM subjects. The paper will also include an overview of the case studies developed by each school in view of the conclusion of the second year of the project.
Specific attention will also be dedicated to the description of the innovative tool developed and applied for monitoring and evaluating the learning curve of both students and teachers.
Lastly, the paper will briefly assess the outreach of the project with a forecast of future activities with external partners across Europe and beyond.
Earth observation imaging spectroscopy data are a valuable source of accurate and quantitative information about terrestrial and aquatic ecosystems of the Earth required in various application fields. While the current availability of imaging spectroscopy data is still limited, it can be expected that data availability will substantially increase in the near future with the rising number of imaging spectrometers deployed on airborne and spaceborne platforms. In view of these developments, a strongly increasing interest in hyperspectral data analysis in research and education is expected in the next few years. The overall need for Earth Observation (EO) education and training activities is currently also reflected by a growing number of online learning platforms and Massive Open Online Courses (MOOCs) for various EO sensors and application fields. However, hyperspectral remote sensing is not yet well represented in the online learning system. Therefore, in 2019 the development of HYPERedu was started as part of the EnMAP science program within the German EO programme by the DLR Space Agency.
HYPERedu is an online learning initiative for hyperspectral remote sensing. On the one hand, it provides online learning resources on principles, methods and applications of imaging spectroscopy at master’s level addressing students as well as professionals in research, business, and public institutions. The resources comprise annotated slide collections and hands-on tutorials (based on the EnMAP-Box software) that are continuously extended and increasingly used in training courses as well as university teaching.
On the other hand, a series of MOOCs is being developed in HYPERedu as part of the outreach and dissemination activities of the EnMAP science programme. A first MOOC on the basics of imaging spectroscopy, titled “Beyond the Visible: Introduction to Hyperspectral Remote Sensing” and to our knowledge the first of its kind, was successfully launched in November 2021. It teaches the principles of imaging spectroscopy, sensor technologies and data acquisition techniques as well as data sources and software, using state-of-the-art eLearning approaches. The course is structured in three thematic lessons and offers plenty of opportunities for activity and interaction, such as interactive graphics, quizzes and expert-led hands-on training exercises. It is designed to take about five hours to complete at one’s own pace. After successful completion, participants receive a certificate. User feedback from the integrated survey and discussion forum will be evaluated in detail and used to revise the basic MOOC, which will be offered in an extended version in spring 2022. The insights gained will be further used in the development of shorter follow-up MOOCs from 2022 onwards. These shorter MOOCs will complement the basic MOOC by focusing on specific hyperspectral data application fields such as agriculture, inland and coastal waters, soil and geology, and urban environments.
All resources and courses are hosted on the EO-College platform (https://eo-college.org/) and are provided free of charge under a Creative Commons 4.0 International License (CC-BY), except where noted otherwise. EO-College is a Learning Management System (LMS), content repository, discussion forum and information hub for open educational resources and online courses. The EO-College platform is funded by the German EO programme of the DLR Space Agency and was specifically developed for learners from the EO community. It also serves as the German contribution to the eLearning activities of the Working Group on Capacity Building and Data Democracy (WGCapD) of the Committee on Earth Observation Satellites (CEOS). The CEOS WGCapD supports coordination and partnerships among CEOS space agencies and other capacity-building networks offering EO education and training, especially to increase the capacity of less developed countries to make effective use of EO data for the benefit of society and sustainable development.
The development of HYPERedu is coordinated by the German Research Centre for Geosciences (GFZ) Potsdam in cooperation with the University of Jena, the Humboldt Universität zu Berlin, the Ludwig Maximilians Universität München and the Alfred-Wegener-Institute (AWI), with contributions from the EnMAP Science Team and several partners in the hyperspectral EO community. Even though HYPERedu was initiated as part of the EnMAP mission science programme, it is regarded as an initiative by and for the whole hyperspectral community: on the one hand, an increasing number of groups are already contributing to the development of HYPERedu; on the other, all resources are provided free of charge for use in training courses, university teaching or individual learning, to increase the number of users employing hyperspectral data in the future.
Even today, teaching Geography and understanding the relationship between physical and human elements is traditionally done using print atlases, individual maps, dry statistical data and textbooks. Teachers regularly prefer these analogue resources over digital ones because of the fear of unfiltered or unsuitable geo-information, unawareness of open resources, the difficulty of navigating large or scattered repositories, and ultimately cost and training constraints. Nevertheless, studying Geographic Information Systems (GIS) is mandatory in the revised Geography curriculum in the United Kingdom.
Geographic Information Systems (GIS) have been proven to be an asset in assisting Geography lessons in secondary school (Year 7-11) in both teaching and learning processes [1]. Visualising, manipulating and analysing geospatial and remote sensing data equip children and young students with new skills. Through the nature of these techniques, they acquire enhanced numerical or analytical competencies. Moreover, it contributes to the development of spatial thinking, which is the ability to identify, analyse and understand the location, scale, patterns and trends of the geographical and temporal relations between data, phenomena and issues.
To bridge curriculum requirements and equip teachers with new skills in the field of GIS and Earth Observation (EO) and offer pupils a new perspective for career progression in an innovative sector, the Geospatial & Earth Observation for Schools (GEO4Schools) program was created.
In this presentation, we will describe the development of the program in the context of the COVID-19 pandemic between January and May 2021 and the subsequent delivery results. Three Geography teachers, together with 50 pupils from Ellesmere Port Catholic High School, Cheshire, United Kingdom, participated in a pilot GEO4Schools program, learning about the use of Geographic Information Systems (GIS) and discovering Earth Observation data and technology, digital maps and innovative developments in the geospatial sector. The program aimed to promote digital geospatial data and satellite imagery in the classroom and to monitor the development of complex competencies in pupils. Practical activities developed as part of the program addressed the urbanisation and change detection of Liverpool between 1975 and 2020, basic interpretation of thermal radiation and radar data, and looked at water scarcity and plastic pollution as societal challenges. The program concluded with the GEO-Involve project, which highlighted the creativity and problem-solving skills children often excel in. At this stage, pupils were asked to assess their community's issues through an opportunity lens and successfully applied the newly learned techniques and methodology to better understand and visualise the impact of geospatial data on societal development.
Albeit a pilot, the project has successfully shown that engaging children and young students in geospatial activities aids their cognitive development and helps create a new generation far more aware of the natural and human interconnections.
References:
[1] Ali Demirci et al., “Implementation and Effectiveness of GIS-Based Projects in Secondary Schools”, 2013, accessed on 26 November 2021.
She Space International is a first-of-its-kind educational program designed to inspire young girls (14-18 years old) to study science, technology, engineering, arts and mathematics (STEAM) subjects. The basic premise of the program is that exposure to female role models and advanced scientific disciplines, especially in an active research context, encourages young women to continue studying and engaging with science throughout their educational and professional careers. She Space aims at increasing the participants’ comfort with science and their perceptions of their science abilities by decoupling science from preconceptions and existing gender stereotypes. In last year’s program, teams from Brazil, Germany, Israel, Spain, Peru, Togo, and the United States participated. During the project, teams of students from each country produced a joint climate change study based on remote sensing and advanced image analysis techniques.
The German team comprised six students (ages 15 and 16) from three different schools in the area of Munich. Against the background of the increasing number and intensity of forest fires in a changing climate, the goal of the German project was to analyze the connection between climate change and forest fires and to study the 2018 forest fire in Paradise, California. For the analysis of the Paradise forest fire, remote sensing methods in ESA’s LEOWorks and the open-source GIS program QGIS were used. These included, as a first step, a visual analysis consisting of a true-color and a false-color image analysis of images before and after the fire. Additionally, indices calculated from Sentinel-2 satellite data were used to analyze the impact of the fire (dNBR) and its effects on vegetation (NDVI). Data from DLR’s (German Aerospace Center) FireBIRD mission were included in a hotspot analysis. The students visualized their results in thematic maps using QGIS. To summarize their results, the participants produced presentations that were delivered in the internal She Space context, as well as a final report.
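For readers unfamiliar with these indices, the sketch below shows how NDVI and dNBR can be computed from Sentinel-2 reflectance bands with standard Python tooling; the band file names and paths are illustrative assumptions and not part of the students' actual workflow.

```python
# Minimal sketch: NDVI and dNBR from Sentinel-2 L2A bands (hypothetical file names).
# NDVI = (NIR - Red) / (NIR + Red); NBR = (NIR - SWIR) / (NIR + SWIR); dNBR = NBR_pre - NBR_post.
# Bands are assumed to be resampled to a common grid beforehand.
import numpy as np
import rasterio

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32") / 10000.0  # Sentinel-2 L2A reflectance scaling

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-6)

def nbr(nir, swir):
    return (nir - swir) / (nir + swir + 1e-6)

# Hypothetical band files for the pre- and post-fire acquisitions (B04 = red, B08 = NIR, B12 = SWIR).
pre_nir, pre_swir = read_band("pre_B08.tif"), read_band("pre_B12.tif")
post_nir, post_swir = read_band("post_B08.tif"), read_band("post_B12.tif")
post_red = read_band("post_B04.tif")

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)   # burn severity proxy
post_ndvi = ndvi(post_red, post_nir)                        # vegetation state after the fire
print("mean dNBR:", float(np.nanmean(dnbr)), "mean NDVI:", float(np.nanmean(post_ndvi)))
```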
The student group was guided and mentored by faculty and two students from the DLR School_Lab. The project was accompanied by international meetings, where the project was introduced, connections across teams were initiated and the teams presented their progress and final results. The German educational and organizational concept was heavily impacted by the Covid-19 pandemic, leading to an almost all-online meeting approach. Background information on remote sensing and its applications was provided in the meetings, as well as information about climate change and forest fires. During the meetings, active learning was also encouraged through elements of a flipped classroom and autonomous learning; worksheets and breakout rooms were used, as well as self-coordinated meetings among the participants. To deepen the theoretical knowledge gained on remote sensing and image analysis, a field campaign was conducted and its results were used for the overall analysis.
In order to best highlight the impact that the She Space International program had on the members of the German Team, the participants themselves will recount their project, their experiences in working on a scientific project and the impact it had on their perception of science and their educational future.
1. Introduction
The era of the fourth industrial revolution is a time in which the comprehensive integration of IT in all spheres of life is entwined with global problems that become more acute every year and threaten the future of humanity. Although both processes continue to intensify, IT can play a crucial role in improving the environmental situation; education should therefore prepare the young generation for life under these new circumstances and in this new reality.
The use of space imagery in the educational process makes it possible to obtain full, simultaneous coverage of a huge area at any time, thanks to the high frequency of space surveys and the long operation of spacecraft. Currently there are many artificial satellites equipped with devices that observe the Earth in different spectral ranges, from ultraviolet to thermal.
Remote Sensing (RS) and Geographic Information Systems (GIS) belong to the fields of science education that have been actively promoted in the education systems of different countries, in particular in Ukraine. This is driven by the current need to introduce satellite imagery analysis not only in research but also in education, since students already use, directly or indirectly, the results of satellite imagery analysis and GIS-based image processing in their everyday lives.
The Junior Academy of Sciences of Ukraine (JASU) is a state-funded extracurricular educational system that develops and implements methods of science education. JASU has more than 250,000 students working in 64 scientific areas. In 2018, the Junior Academy of Sciences of Ukraine received the status of a Category 2 Science Education Center under the auspices of UNESCO and joined the network of Copernicus Academies. All activities of JASU are aimed at enhancing youngsters’ climate literacy, skills and competences, developing pupils’ core values and attitudes in STEM disciplines, and inspiring and motivating them to pursue studies and careers in the STEM sector.
GIS and RS have been among the fields of active development in JASU for the last 10 years. In 2019, an RS and GIS laboratory was established within JASU. The main aims of the laboratory are capacity building in GIS and Remote Sensing and the development of critical thinking and climate literacy among young generations around the world. The laboratory is reaching these goals through a set of actions and projects.
Its main activities are the development of educational materials (workbooks, manuals, methodological instructions) in both Ukrainian and English, and the organization of competitions and courses for children and educators at the national and international levels.
Currently there are several recurrent events, among them the International and All-Ukrainian schools on Remote Sensing for school students, the All-Ukrainian and International courses for Educators, and the Ecoview (“Ecopohlyad”) all-Ukrainian competition, where school students present their scientific works conducted with the use of Earth Observation data.
In 2021 the RS and GIS laboratory organized two international events: the International Summer School on Remote Sensing in June and the Remote Sensing Course for Educators in November. Both were carried out online, which made it possible to engage participants from various countries despite the Covid-19 pandemic and travel limitations around the world.
2. International Summer School on Remote Sensing
The International Summer School on Remote Sensing was attended by 36 school students aged 14 to 18 from Guatemala, Poland, Ukraine, Iran, Lebanon, India, the Philippines, Indonesia and Thailand. Moreover, they encouraged teachers from their schools to join the course to gain new skills and experience. The course was held online (via Zoom) from June 21 to June 26, 2021. The curriculum consisted of the following blocks: lectures, hands-on training and consultations. On the last day of the course students presented mini-projects (for each group a research advisor was invited to consult the group on its project). Topics were chosen independently; in general, students chose problems of climate change in their region. Data from Sentinel-1 and Sentinel-2 and from the Landsat mission were the main sources of satellite imagery. Some of the selected topics were deforestation in Iran, water pollution monitoring in Ukraine, consequences of floods and soil erosion in the Philippines, crop monitoring in Poland, and many others. The main web platforms used during the school were the EO Browser from the European Space Agency and the Giovanni web service created by NASA.
Before the start of the course, a survey of students was conducted to determine what results they expected from participation, as well as their level of RS knowledge. Most students said they expected to learn how to monitor climate change in their region and country, so the curriculum was focused more on these issues. In the post-training questionnaire students noted that about 80% of all information they received during the course was completely clear to them. 98% of students declared that they would use the knowledge gained during the course in the educational process and their own research, and 2% said they would at least try to use it. One of the comments from a student from the Philippines in the final survey: “Before, I was really interested in research topics regarding biochemistry, and I was hesitant to explore topics regarding Earth science or physics. Now, after attending the summer school, I was able to realize that each branch of sciences - biology, chemistry, and physics - come hand in hand. I believe that Remote Sensing is not limited to physical science, but it can also be a lot of help in research regarding the life sciences. I am actually considering Earth science as my career path in the future after coming up with this realization.” The most significant result in terms of the efficiency and organization of the event was the 100% positive answer to the question “Do you want to participate in this training or other courses organized by our team next year?”
3. Remote Sensing Course for Educators
Inspired by the initiative of school students to invite their teachers to take part in the International Summer School on Remote Sensing, the RS and GIS laboratory organized the Remote Sensing Course for Educators. Teachers of various subjects such as Geography, Biology, Science, Physics and others were invited to take part. Around 60 applications were received from teachers of Biology, Geography, Mathematics and Statistics, Physics, Natural Sciences, Chemistry, Philosophy, Logic, Robotics and Computer Technology, Agriculture, Informatics and Social Sciences. Applicants were from 12 different countries and worked in educational institutions of different levels, from primary schools to universities. Finally, 20 participants successfully completed the course, which was held online from 22 to 26 November 2021.
The course curriculum included lectures and hands-on training. Two surveys were conducted, before and after the course. Among the participants, 55% had already heard of or tried to use RS, while 45% had not engaged with it before the beginning of the course. Less than 10% had already used RS during their lessons.
The main motivations of participants were gaining new skills and experience, collaboration opportunities with other participants, learning how to make maps using Earth Observation data, supporting decision making, and monitoring floods, water bodies, vegetation and urban areas. The evaluation of participants' knowledge of RS is shown in the figure below.
Figure 1 - Evaluation of RS knowledge before and after the course
In the post-course questionnaire, to the question “Would you like to participate in an Advanced Remote Sensing course in the future in order to deepen your knowledge?”, 80% of respondents answered “Yes”, 5% answered “No, because the knowledge acquired during this course is enough for my teaching activities” and 15% chose the “I’m not sure” option. All participants answered positively to the question “Will the knowledge and skills acquired during this course help you to teach your students using satellite imagery?”. A valuable point mentioned by participants in their answers was that six years is the age from which Remote Sensing could be introduced to school students.
4. Conclusion
The use of Earth Observation data in the educational process fosters school students' skills such as critical thinking, creativity, the ability to logically justify their position, problem solving, risk assessment in decision making, and the ability to communicate and cooperate in a team. Based on satellite image analysis and processing, students develop skills in monitoring processes and phenomena in space and time, which supports the ability to see a problem "comprehensively", integrating knowledge from various disciplines such as geography, biology, ecology, physics, computer science and more. This interdisciplinary approach to research is particularly effective in studying climate change, where satellite imagery is one of the primary sources of information.
Open schooling plays an essential role in the new approach to education. The open schooling concept leads to building a participatory learning community, which particularly invites the youth and reduces societal disparities by building points of access to science discovery. Conceptualizing ecosystems requires decision makers and citizens to understand the interactions of the various (independent or nested) subsystems, to provide solutions for environmental problems, to introduce new technologies and research methods and to demonstrate the cumulative effect of individual action and stewardship. Open schooling addresses societal challenges with solutions that simultaneously provide environmental, social and economic benefits, help to reverse social disparities and build resilience.
The project PULCHRA - Science in the City: Building Participatory Urban Learning Community Hubs through Research and Activation - explores the open schooling concept in the theme "Cities as urban ecosystems", in view of creating new partnerships in local communities to foster science education for all citizens (https://pulchra-schools.eu). Schools, in cooperation with other stakeholders, are agents of community well-being, considering that the theme encompasses the natural environment, the built environment and the socio-economic environment in cities. This is of great importance, considering that the urgency of approaching cities as urban ecosystems is underestimated and only loosely linked to science education for all citizens. PULCHRA aims at building a learning, exploring and activation network, which allows one to experience and understand the urban ecosystem as a living organism.
The methodology is based on pilot themes (termed City Challenges) which create know-how, build trust in the science approach based on one's own experience, facilitate the skilled use of tools and support community building, as they are based upon the identity of the communities in which they take place. City Challenges are situated in our own living environment. Engaging in environmental education has a direct impact upon the community and the personal lives of the participants. The benefit of international cooperation among the EU member states becomes obvious at the community level.
A City Challenges (technological) Platform is being developed to bridge partners, schools and stakeholders; mixed Science Teams and students acting as City Reporters will explore and disseminate the City Challenges, respectively. The Platform brings new scientific knowledge about the city as an urban ecosystem and facilitates the participation of citizens of all ages in scientific discovery, building trust in the method of science through the experience of participation. The pilot themes/projects explored within the project exhibit richness in science, as they draw on several scientific fields related to cities as urban ecosystems; they are supported by technology, bring in innovation and are directly linked to the SDGs and the European policies for cities.
One of the most important research methods and data sources for the projects developed under the City Challenges is Earth Observation/Remote Sensing. Open data and open Earth observation technologies enable wide use and implementation in the student projects. The achieved results and solutions are very attractive for education as well as for stakeholders.
The results of the PULCHRA project show that Earth observation data are understandable and acceptable for both students and teachers. Cloud-based technologies are able to process data efficiently and to calculate and visualize the results of the analysis very quickly and accurately. This seems to be a very important aspect for a wide and deep implementation of Earth Observation in education. For this reason, the City Challenges Platform includes a cloud-based platform for processing satellite data. The student pilot projects mostly use data from the Copernicus programme, and topics of soil moisture and vegetation health and change are the most popular and most often addressed by students.
Despite its proven capabilities and the increasing availability of regularly-sampled free-and-open data, the uptake of Synthetic Aperture Radar (SAR) resources to inform decision-making situations has remained slow. This has largely been due to the unfamiliar appearance and large data volumes associated with SAR images, as well as to a set of processing routines that often seem exotic to many non-SAR experts. Therefore, there remains a need for improved capacity building activities to advance the use of SAR by the research, practitioner, and decision-making communities.
The NASA-funded SAR Capacity Building Center (SAR-CBC) project has been addressing this issue by developing educational materials as well as webinars and on-site trainings that build capacity in the use of SAR in decision-making situations. Led by the University of Alaska Fairbanks, associated with the AmeriGEO initiative, and collaborating with SERVIR, a joint NASA and USAID program, our work focused specifically on Central and South America, and all training materials are tailored to the needs of this region.
Over the course of three years, the SAR-CBC project has employed various training modalities with the goal of adapting to the needs of end-users operating in different spaces (academic; decision-making; practitioners). To ensure the relevance of developed material, the SAR-CBC team closely partnered with in-region stakeholder organizations in Ecuador (IIGE; Central University of Ecuador), Colombia (IDEAM), and El Salvador (MARN). These stakeholders were instrumental in the assessment of in-region application and training needs. They also supported the identification of venues for in-person trainings, helped in communicating news and updates to our target communities, and provided information on relevant case studies for lab exercises.
Before the COVID-19 health crisis, in-person workshops were conducted in Ecuador and Colombia to audiences from the academic and decision-making communities. In 2020, a virtual week-long workshop was held for scientists and decision-makers in El Salvador, with two days devoted to theory for a wide audience and three days dedicated to training technical end users. An edX asynchronous online course on the application of SAR for Hazard Monitoring (https://www.edx.org/course/sar-hazards) was also developed in collaboration with the University of Alaska Fairbanks, and has had over 3000 students participate in the inaugural run of the course. Furthermore, cloud-based computational lab environments were developed to facilitate similar learning experiences in both in-person and virtual formats. Currently, a unique collaboration is in place with a network of universities, professors, and students in Ecuador, where the SAR-CBC team is acting in a mentorship role for existing SAR-related research projects.
In this presentation, we will outline the different training modalities that were employed by the SAR-CBC project. We will summarize our experiences with each approach and reflect on how a combination of these modalities can provide broader access to capacity building in the field of SAR to better serve a wider range of end-user needs.
The Nile Delta has historically been targeted for various types of human activities and life needs since the Pharaohs’ era. This is mainly due to the wealth of natural resources present in the Delta since its creation, including fresh water, flat fertile soils, recreational places, mineral resources, coastal lakes and fish farming. This has attracted an excessive increase in population and civilization, accompanied by developmental plans to accommodate such population growth. Geologically, the Nile Delta was formed by the progression of a complex system of deltaic fans throughout the Pleistocene, with the modern delta being formed from sediments supplied by at least ten distinct distributaries throughout the Holocene. The land subsidence of the northern delta, reaching up to 10 mm/year, has become a topic of major concern to the Egyptian population and government. The maximum estimated land subsidence levels are in the eastern part of the Nile Delta. Hence it has a major impact on the infrastructure and development in the area. Moreover, it worsens the impact of sea level rise due to climate change, since the cumulative subsidence together with the sea level rise scenarios will double the flooded coastal area. In the present work we use the SNAPPING service on the Geohazards Exploitation Platform (GEP) to monitor the spatial and temporal patterns, as well as the magnitudes, of urban land subsidence in the Nile Delta. We aim to explore and document the potential of Earth Observation platform-based solutions as an operational tool for measuring and monitoring land subsidence at millimetre-level accuracy over wide areas such as the whole Nile Delta.
Since December 2011 the Radiation Explorer in the Far-Infrared (REFIR) FTS has been operating in continuous acquisition mode at Concordia Station, on the Eastern Antarctic Plateau, providing spectrally resolved atmospheric downwelling radiances measured in the 100-1500 cm-1 (6.7-100 µm) range with an acquisition interval of about 10 minutes.
The East Antarctic Plateau plays a peculiar role in the planet's climate, being the most important radiative sink of the Earth system. Besides this, the terrain of the Plateau around Concordia Station, devoid of any significant orographic features, provides a simple, easy-to-model case study.
The time series of acquired data already covers a decade, and has been exploited, in the framework of the Dome C Tropospheric Observatory (DOCTOR) project, funded by the Italian Antarctic program, to perform a comprehensive characterization of the Antarctic troposphere.
The full dataset has been reanalysed in order to provide consistency and continuity in frequency and radiometric calibration, together with the application of a new correction algorithm addressing systematic effects such as optics contamination during the unattended operation periods.
Vertical profiles of tropospheric temperature and water vapor have been obtained from the downwelling radiances through the use of a retrieval code based on the LBLRTM forward model. These profiles, in turn, can be used to reconstruct the vertical structure of the radiative processes in the troposphere, both in terms of upwelling and downwelling fluxes, and cooling rates, through the use of the RADSUM code.
This kind of analysis is capable of providing a comprehensive description of the state of the atmosphere, but it is limited to clear-sky conditions, since the physical parameters describing clouds, in particular their vertical structure, cannot be extracted from the spectral radiances alone.
For this reason, since 2020 the REFIR FTS has been complemented with a pseudo-random noise (PRN) modulated lidar, based on a 1 W, 808 nm laser diode, used as a cloud profiler in order to provide the needed information on cloud vertical structure to the data analysis process.
The PRN lidar has a very small footprint both in terms of size/weight and power consumption (less than 10 W), which allowed the instrument to be installed inside the frame of the REFIR FTS, resulting in a compact package. The vertical resolution of the lidar is 10 m, while the range is about 3 km, which corresponds to the interval providing most of the information to the profile retrieval. The measurement repetition rate is about 2 minutes, in order to provide an accurate matching with the REFIR spectral radiances.
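To illustrate the PRN principle (a simplified sketch, not the instrument's actual processing chain), the backscatter profile can be recovered by cross-correlating the detected signal with the transmitted pseudo-random code; the numbers below (300 range gates of 10 m each, a random binary code, a synthetic cloud layer) are illustrative assumptions.

```python
# Minimal sketch of PRN lidar range retrieval: transmit a pseudo-random +/-1 code,
# model the return as the code convolved with the atmospheric backscatter profile,
# and recover the profile by circular cross-correlation. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_gates = 300                                   # 300 gates x 10 m = 3 km range
code = rng.choice([-1.0, 1.0], size=n_gates)    # pseudo-random modulation sequence

# Hypothetical backscatter profile: weak molecular return plus a cloud layer near 1.5 km.
profile = 0.01 * np.exp(-np.arange(n_gates) / 200.0)
profile[145:155] += 0.5

# Received signal = circular convolution of code with profile, plus detector noise.
received = np.real(np.fft.ifft(np.fft.fft(code) * np.fft.fft(profile)))
received += rng.normal(0.0, 0.02, size=n_gates)

# Cross-correlation with the known code concentrates each range gate's contribution
# at the corresponding lag; the code autocorrelation peak (n_gates) normalizes the result.
recovered = np.real(np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code)))) / n_gates

cloud_gate = int(np.argmax(recovered))
print(f"strongest return at gate {cloud_gate} (~{cloud_gate * 10} m)")
```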
Adding the vertical backscatter profiles as an auxiliary input to the vertical profile retrieval process not only improves the retrieval performance in cloudy-sky conditions, but also allows the calculation of fluxes and cooling rates in a much wider range of atmospheric conditions.
This is of particular interest considering the main strength of the continuous acquisition and fast measurement rate provided by the integrated lidar and spectroradiometer system, which is the ability to detect and characterize fast-evolving phenomena such as the diurnal cycle during the Antarctic summer, or the sudden warming events occurring during winter.
An example of the diurnal cycle is shown in the figure, where the nighttime temperature inversion layer near the ground is clearly visible in the vertical temperature profile map, together with the low clouds forming in the early morning (black lines) as detected by the PRN lidar.
The warming events are also a main target of the DOCTOR project, since they are one of the causes of the wintertime warming of the troposphere over the East Antarctic Plateau; the spectroradiometer-lidar synergy, capable of monitoring both long-term trends and fast-evolving phenomena, is the ideal tool to perform this kind of study.
Within the emerging research themes related to energy, the water-energy-food nexus is gaining increasing attention, and a big effort is required to gather consistent data from different sources. As reported by the Food and Agriculture Organization [1], "in order to assess nexus interactions, reliable, pertinent and timely data is needed. Satellite observations, combined with in-situ data, provide a unique source of consistent information about the natural environment, on which we rely to produce water, energy and food".
Given the relevance of the nexus to the sustainable development goals, priority importance is given to the competition between the energy use and the agricultural use of land.
A fascinating recent technological solution which fosters the coexistence of the two land uses is agrivoltaics, a hybrid agriculture-energy system in which agricultural crops are grown in the partial shade of the solar infrastructure. This combination fosters the development of photovoltaic plants with a low environmental impact without compromising agricultural land use.
Agrivoltaics could also reduce water consumption in agriculture, limiting the losses due to evapotranspiration (ET) thanks to the shading provided by the solar panels [2]. Thus, evapotranspiration is a key parameter for water-energy-food nexus studies. In particular, evapotranspiration can be a proxy for identifying areas at risk of water and climate stress, or agricultural land under high potential risk of abandonment, where the combined value of energy and agricultural production could make recovery economically sustainable [3].
The evaluation of effective evapotranspiration constitutes the central element of the hydrological balance, both because it is the largest of the terms into which the meteoric influx is divided, and because its estimate is affected by high uncertainty [4].
The European Space Agency (ESA) studied and funded a new approach within the Sentinels for Evapotranspiration (Sen-ET) project [5]. The project developed a methodology for estimating ET based on ERA5 data from the European Centre for Medium-Range Weather Forecasts (ECMWF), Sentinel-2 images and Sentinel-3 Land Surface Temperature (LST). The project aims to develop an open-source software application for accurately modelling instantaneous evapotranspiration at high and medium spatial resolutions for agricultural applications. The Sen-ET model has been validated against in-field measurements for agricultural use [6]. The procedure is provided as a dedicated plugin, called Sen-ET, inside the SNAP software developed by ESA. The process is composed of around twenty steps, which users can perform through the SNAP graphical interface or command-line scripts. In both cases, the user has to set some parameters and, at each step, manually select the requested input products, which are mainly the products generated by the procedure in the previous steps. Thus, the current procedure requires multiple steps with manual selections for each daily map.
The aim of this work is to facilitate the use of the Sen-ET plug-in with the future ambition to exploit it for continuous monitoring of evapotranspiration and land subject to water and climate stress.
The process has been revised and automated to provide the possibility of computing monthly maps of different areas with less user effort, supporting in this way also the comparison of Sen-ET results with evapotranspiration maps from the literature or with hydrological model results. In particular, an automatic procedure has been created to connect all the different steps provided by the plug-in.
The procedure reduces the number of steps required by Sen-ET to only one.
The user only has to indicate the input parameters and download the input data. The selection and download of the input data are left to the user because their automation could lead to errors related to the variety of aspects involved (e.g., cloud coverage over the particular area of interest, or the time lag before images actually become available for download).
The final outputs of the automation are daily maps for each daily input image saved as GeoTIFF, which can be further processed in common GIS software.
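As a rough illustration of how such a one-step driver can be organised (a sketch with hypothetical script names and arguments, not the actual Sen-ET commands), each processing step is run in sequence and the product written by one step becomes the input of the next:

```python
# Minimal sketch of a driver that chains processing steps automatically.
# Step names, scripts and arguments are hypothetical placeholders; the real
# Sen-ET chain uses its own SNAP graphs and scripts.
import subprocess
from pathlib import Path

def run_chain(workdir: str, date: str) -> Path:
    work = Path(workdir)
    work.mkdir(parents=True, exist_ok=True)

    # (hypothetical) ordered list of step scripts; each writes a product reused downstream
    steps = [
        ("prepare_s2_biophysical", ["--date", date]),
        ("sharpen_s3_lst", ["--date", date]),
        ("compute_energy_fluxes", ["--date", date]),
        ("daily_evapotranspiration", ["--date", date]),
    ]

    previous_output = None
    for name, args in steps:
        output = work / f"{name}_{date}.tif"
        cmd = ["python", f"{name}.py", *args, "--output", str(output)]
        if previous_output is not None:
            cmd += ["--input", str(previous_output)]  # feed the previous product forward
        subprocess.run(cmd, check=True)               # stop the chain on any failure
        previous_output = output

    return previous_output  # daily ET GeoTIFF for the requested date

# Example: run_chain("./senet_run", "2020-07-15")
```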
Within this work, the whole procedure has been tested on satellite data acquired in different regions of Italy in different months and years. To this aim, areas with different climate, land and geographical conditions have been chosen for the test. The tests covered different seasons, also considering rainy months, when the contribution of irrigation is less significant.
As an example of results comparison, we made reference to the BIGBANG model developed by ISPRA (the Italian institute for environmental protection and research) [4] and to the GlobWat model, developed by the Food and Agriculture Organization of the United Nations (FAO).
Both models produce monthly maps as outputs. BIGBANG maps refer to specific years from 1951 to 2019. GlobWat model outputs are monthly averages, to be considered valid for the year 2004, since the "average of the years for which cropping calendar data are available is 2004" [7]. In contrast, the Sen-ET final outputs are daily values of energy variables and daily evapotranspiration, which have to be converted into monthly maps for the comparison.
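The daily-to-monthly conversion can be done with standard raster tooling; the sketch below aggregates the daily ET GeoTIFFs of one month into a monthly total, assuming hypothetical file names and co-registered rasters (it is not the project's actual script).

```python
# Minimal sketch: aggregate daily ET GeoTIFFs (mm/day) into a monthly map (mm/month).
# File naming and nodata handling are assumptions; rasters are assumed co-registered.
import glob
import numpy as np
import rasterio

def monthly_et(pattern: str, out_path: str) -> None:
    paths = sorted(glob.glob(pattern))          # e.g. "senet_run/daily_ET_2020-07-*.tif"
    if not paths:
        raise FileNotFoundError(pattern)

    with rasterio.open(paths[0]) as src:
        profile = src.profile
        total = np.zeros((src.height, src.width), dtype="float64")
        count = np.zeros_like(total)

    for path in paths:
        with rasterio.open(path) as src:
            daily = src.read(1, masked=True).astype("float64")
        total += daily.filled(0.0)              # accumulate valid daily values
        count += (~daily.mask).astype("float64")

    # Scale the mean daily ET by the number of days covered to get a monthly estimate
    with np.errstate(invalid="ignore", divide="ignore"):
        monthly = np.where(count > 0, total / count * len(paths), np.nan)

    profile.update(dtype="float32", count=1, nodata=np.nan)
    with rasterio.open(out_path, "w", **profile) as dst:
        dst.write(monthly.astype("float32"), 1)

# Example: monthly_et("senet_run/daily_ET_2020-07-*.tif", "ET_2020-07_monthly.tif")
```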
The comparison between Sen-ET evapotranspiration maps and the two discussed hydrological models has shown promising results for continuous monitoring of land subject to water stress. The complete discussion of the results will be illustrated in the final presentation.
[1] FAO, 2014. The water-energy-food nexus. http://www.fao.org/3/bl496e/bl496e.pdf.
[2] Barron-Gafford, G., Pavao-Zuckerman, M., Minor, R., Sutter, L., Barnett-Moreno, I., Blackett, D., Thompson, M., Dimond, K., Gerlak, A., Nabhan, G., Macknick, J., 2019. Agrivoltaics provide mutual benefits across the food–energy–water nexus in drylands. Nature Sustainability, 2(9), 848–855.
[3] Dinesh, H. and Pearce, J. M., 2016. The potential of agrivoltaic systems. Renewable and Sustainable Energy Reviews, 54, 299-308.
[4] Braca, G., Bussettini, M., Lastoria, B., Mariani, S. and Piva, F., 2021. Elaborazioni modello BIGBANG versione 4.0, Istituto Superiore per la Protezione e la Ricerca Ambientale - ISPRA. http://groupware.sinanet.isprambiente.it/bigbang-data/library/bigbang40
[5] Sentinels for evapotranspiration (sen-et) project. https://www.esa-sen4et.org/
[6] Radoslaw Guzinski, Héctor Nieto, Evaluating the feasibility of using Sentinel-2 and Sentinel-3 satellites for high-resolution evapotranspiration estimations, Remote Sensing of Environment, Volume 221, 2019, Pages 157-172, https://doi.org/10.1016/j.rse.2018.11.019.
[7] Hoogeveen, J., Faurès, J.-M., Peiser, L., Burke, J. and van de Giesen, N., 2015. GlobWat - A global water balance model to assess water use in irrigated agriculture. Hydrology and Earth System Sciences, 19, 3829-3844. doi:10.5194/hess-19-3829-2015.
Several global population grids have been developed over the past decades, partly owing to the increased availability of Earth Observation satellite data. These grids, in raster format, contain estimates of the number of people residing within - or the population density of - each section of the Earth's surface. Prominent examples of population grids include the Global Human Settlement Layer (GHSL), the Gridded Population of the World (GPW), and the Global Urban Footprint (GUF). Such grids are constructed using different principles, and consequently each has applications for which it is best suited.
Raster datasets are very useful for a wide range of geospatial analyses; however, many applications manage population information not on a per-pixel basis but on a per-settlement basis, in vector format. In such cases, the footprint of a settlement is approximated by a geometric object, often simply a single point. This is the case for the open dataset GeoNames, as well as for commercial data products such as the World Cities Database from SimpleMaps. The ability to transfer information from rasters to vector datasets is thus an important capability for many end users. In particular, our motivating use case for this work is near-real-time loss assessment (number of injuries and fatalities) after earthquakes worldwide, using software operating on a vector dataset of settlements.
We evaluate techniques for combining population grids with vector datasets in order to obtain population estimates per settlement, assuming the following typical situation: for a given country, the total population is known, as well as the population of the largest settlements. In terms of geospatial data, we have approximate locations for nearly all settlements, given as single latitude/longitude coordinates, and we know the boundary of the country. The objective is to obtain a population estimate per settlement.
When dealing with points, the issue is that the surface area of each settlement is not available, so it needs to be estimated. One option to obtain such an estimate is a Voronoi tessellation based on the settlement locations. Such a tessellation consists in drawing lines that are equidistant between neighbouring points and extending these lines until they intersect with other lines or with the boundary. This operation yields a set of polygons that collectively cover the whole area of interest in a non-overlapping fashion. The area of these polygons can be used as a rough approximation of the area of the corresponding settlements, allowing a spatial aggregation to be performed on the raster layer.
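A minimal sketch of this aggregation is given below: assigning each populated raster cell to its nearest settlement point is equivalent to summing the gridded population over the Voronoi cells. The sketch uses planar distances on the grid coordinates (ignoring Earth curvature), and the raster file and settlement list are illustrative assumptions.

```python
# Minimal sketch: sum a gridded population raster over the Voronoi cells of a set of
# settlement points, by assigning each pixel centre to its nearest settlement.
# The raster path and settlement coordinates are placeholders.
import numpy as np
import rasterio
from rasterio.transform import xy
from scipy.spatial import cKDTree

def population_per_settlement(raster_path, settlements_lonlat):
    """settlements_lonlat: (N, 2) array of [lon, lat] settlement coordinates."""
    with rasterio.open(raster_path) as src:
        pop = src.read(1, masked=True).filled(0.0)     # people per pixel
        rows, cols = np.nonzero(pop > 0)               # only populated pixels matter
        xs, ys = xy(src.transform, rows, cols)         # pixel-centre coordinates

    pixels = np.column_stack([xs, ys])
    tree = cKDTree(np.asarray(settlements_lonlat))     # planar nearest-neighbour search
    _, nearest = tree.query(pixels)                    # index of the closest settlement

    totals = np.zeros(len(settlements_lonlat))
    np.add.at(totals, nearest, pop[rows, cols])        # accumulate pixel population
    return totals

# Example (hypothetical raster tile and three settlement points):
# totals = population_per_settlement("pop_grid_tile.tif", [[8.54, 47.37], [8.31, 47.05], [8.72, 47.50]])
```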
Alternatively, these polygons can be used as a starting point to an iterative algorithm which takes into account the population density as provided by the raster layer. For example, we can drive an optimization procedure to move the lines based on population gradient, so as to decrease the probability of cutting a settlement into multiple pieces.
We report on the performance and caveats of these algorithms based on their application to a selection of countries for which we know the actual settlement populations and administrative boundaries. We also examine the influence of population estimation methods on downstream analyses, by comparing earthquake loss assessment results for several earthquake scenarios.
There has been rapidly growing interest in the use of Global Navigation Satellite System Reflectometry (GNSS-R) to monitor a variety of geophysical parameters over the last two decades. However, within cryosphere and hydrosphere studies, few efforts have yet been dedicated to the retrieval of lake ice cover, which is an important physical feature playing a role in climate and affecting the economy and livelihood of northern regions. In this paper, the GNSS-R technique is employed for assessing the seasonal timing of annual ice cover (lake ice phenology) for Qinghai Lake, on the Tibetan Plateau. To this aim, the Signal-to-Noise Ratio (SNR) values obtained from the Cyclone GNSS (CYGNSS) constellation from December 2018 to February 2021 were used to examine how reflected GNSS signals are modified by changing lake ice surface properties and the freezing/thawing states of the lake. A moving t-test (MTT) algorithm applied to the SNR time series over three ice seasons allowed for the detection of lake ice at daily temporal resolution. A strong agreement was found between ice phenology records derived from CYGNSS and those obtained from the visual interpretation of Moderate Resolution Imaging Spectroradiometer (MODIS) color composite images. Over the three years of observations, the error for CYGNSS freeze-up timing ranged from 3 to 14 days with an average of 7 days. The error for breakup timing ranged from 4 to 19 days with an average of 11 days and showed the sensitivity of CYGNSS to the onset of spring melt before the transition to open water conditions. Moreover, all six t-score spikes appeared within the freeze-up and breakup periods visually obtained from MODIS images. In addition, the results showed a drop in SNR values in the presence of ice cover compared to those from open water. We find that this incongruity with previous GNSS-R studies over sea ice, which have shown a higher reflection power from the sea ice surface, is due to differences in the salinity and roughness of frozen lakes and oceans.
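A moving t-test flags a change point when the means of two adjacent windows of the time series differ significantly; the sketch below illustrates the idea on a synthetic daily SNR series. The window length, threshold and synthetic data are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch of a moving t-test (MTT) change-point detector for a daily SNR series.
# Window length and significance threshold are illustrative, not the study's settings.
import numpy as np
from scipy import stats

def moving_t_test(snr, window=15):
    """Return a t-score series; large |t| marks a shift in the mean (e.g. freeze-up/breakup)."""
    snr = np.asarray(snr, dtype=float)
    t_scores = np.full(snr.size, np.nan)
    for i in range(window, snr.size - window):
        before = snr[i - window:i]
        after = snr[i:i + window]
        t_scores[i], _ = stats.ttest_ind(before, after, equal_var=False)
    return t_scores

# Synthetic example: open water (higher SNR) -> ice (lower SNR) -> open water again.
rng = np.random.default_rng(1)
snr = np.concatenate([rng.normal(8, 1, 120), rng.normal(5, 1, 100), rng.normal(8, 1, 120)])
t = moving_t_test(snr)
changes = np.where(np.abs(t) > 6)[0]          # days where the mean shifts strongly
print("candidate freeze-up/breakup days:", changes.min(), changes.max())
```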
The “Federated Satellite Systems 3Cat-5” (FSSCat) mission was the winner of the 2017 ESA S^3 (Sentinel Small Satellite) Challenge and overall winner of the Copernicus Masters competition. FSSCat consists of two 6-unit CubeSats designed, developed and operated by Tyvak International. They carry on board UPC’s Flexible Microwave Payload – 2 (FMPL-2), an L-band microwave radiometer and GNSS-Reflectometer implemented in a software-defined radio, and Cosine’s HyperScout-2 visible, near-infrared and thermal-infrared hyperspectral imager, enhanced with PhiSat-1, an on-board artificial intelligence experiment for cloud detection. Both spacecraft include an optical inter-satellite link demonstrator, provided by Golbriak, and a proof-of-concept of satellite federations developed by UPC.
The mission and the main scientific results obtained during the three months of mission operations will be presented, including FMPL-2’s sea ice extent (SIE), concentration (SIC) and thickness (SIT), sea surface salinity (SSS), and downscaled soil moisture (SM) products over the Northern Hemisphere, and a summary of the HyperScout-2 commissioning and first operations activities. FMPL-2 microwave radiometry and GNSS-R data have been processed using artificial neural networks, estimating SIC maps with errors < 7 % in the Arctic and < 5.2 % in the Antarctic, SIE with an accuracy > 94 %, and SIT with a mean absolute error (MAE) of 6.5 cm (SIT < 60 cm) using MWR data, and an MAE of ~15 cm (SIT > 1.5 m) using combined data from FMPL-2 and CryoSat-2.
Deimos Engineering has developed the PDGS, and the data are openly available in the NextGEOSS Catalogue: https://catalogue.nextgeoss.eu/
HydroGNSS (Unwin et al. 2021) has been selected as the second ESA Scout small satellite science mission, with a planned launch date of 2024. It is aimed at monitoring hydrological parameters, closely linked to GCOS-defined Essential Climate Variables (ECVs). HydroGNSS will prove new concepts and offer timely observations that supplement and complement existing satellite missions with high scientific priorities in ESA’s Earth Observation directorate. HydroGNSS aims at providing operational products of soil moisture (SMC), inundation or wetlands, freeze/thaw state and forest above ground biomass (AGB).
This study addresses the retrieval of SMC and AGB by proposing two retrieval concepts based on Artificial Neural Networks (ANN) that have been developed and validated using NASA’s Cyclone GNSS (CyGNSS – Ruf et al. 2015) land observations in the framework of the ESA Ecology project and the ESA consolidation study for HydroGNSS.
Along with one year of CyGNSS global land v3.0 data, reference data from the International Soil Moisture Network (Dorigo et al. 2011) and the SMAP Level 3 SM global daily product (Entekhabi et al. 2010) have been considered for implementing and validating the SMC algorithm, while the AGB pantropical dataset (Avitabile et al. 2016) was considered for implementing and validating the AGB algorithm.
Two global datasets of daily CyGNSS observables, together with ancillary data and target SM, have been implemented by aggregating data at different resolutions. The first dataset, at 36 km resolution (on the EASE-Grid), is used for evaluating the CyGNSS sensitivity to SMAP products (SM, VWC and roughness) and to pantropical AGB, and for training the ANN SM algorithm. The second dataset, at 0.05° (≈5 km), is used for validating the SM algorithm against in-situ measurements (ISMN) and for training/testing the AGB algorithm.
At first, a sensitivity analysis of the CyGNSS observables to SM and forest AGB was carried out at two different spatial resolutions of 5 and 36 km, with the aim of understanding if, and to what extent, the synergistic use of other GNSS-R observables, besides the already assessed equivalent reflectivity (Santi et al. 2020), can improve the retrieval of both parameters.
The outcomes of the sensitivity analysis confirmed the possibility of estimating both SM and AGB from GNSS-R, while the high dispersion of the CyGNSS observations suggested the use of advanced algorithms to reduce the uncertainties and improve the retrievals.
The feasibility of retrieving SMC and AGB has therefore been investigated by implementing and testing retrieval algorithms based on Artificial Neural Networks (ANN). Taking advantage of the ANN capability to easily merge multiple inputs, several combinations of GNSS-R observables and ancillary data (e.g., topography and land use information) have been evaluated. The algorithms have been trained using the lower-resolution datasets and then applied to the higher-resolution dataset to generate global maps of the target parameters.
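As a purely illustrative sketch of this kind of ANN retrieval (placeholder random inputs stand in for the GNSS-R observables and ancillary data; this is not the actual algorithm or its trained configuration), a scikit-learn multilayer perceptron merging several inputs could look like this:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
# Placeholder columns standing in for GNSS-R observables (e.g. equivalent reflectivity,
# SNR, trailing-edge slope) plus ancillary elevation and land-cover class.
X = rng.normal(size=(n, 5))
y = 0.3 * X[:, 0] - 0.1 * X[:, 3] + 0.05 * rng.normal(size=n)   # synthetic SM target

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), activation="tanh",
                   max_iter=2000, random_state=0)
ann.fit(scaler.transform(X), y)
sm_est = ann.predict(scaler.transform(X))
print("training correlation R:", np.corrcoef(sm_est, y)[0, 1])

In a real application the trained network would then be applied to the higher-resolution observable dataset to produce the global maps mentioned above.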
The obtained outputs have been tested and validated against the reference datasets: in particular, the SMC algorithm has been tested at 36 km against SMAP data, obtaining an overall correlation R = 0.88, and validated at 5 km against data from ISMN stations with site-dependent but in general encouraging results. The AGB algorithm was validated at 5 km against subsets of the AGB map not involved in the ANN training, obtaining R ≈ 0.85 and RMSE < 75 t/ha.
The results confirmed the feasibility of using GNSS-R for SM and AGB global monitoring: the retrieval is feasible provided that advanced algorithms (e.g., ANN) are used. The inclusion of ancillary information (topography, land cover and so on) was also effective in improving the retrievals.
References
1. Ruf et al., 2015, DOI: 10.1175/BAMS-D-14-00218.1.
2. Avitabile et al., 2016, DOI: 10.1111/gcb.13139.
3. Dorigo et al., 2011, DOI: 10.5194/hess-15-1675-2011.
4. Entekhabi et al., 2010, DOI: 10.1109/JPROC.2010.2043918.
5. Santi et al., 2020, DOI: 10.1109/JSTARS.2020.2982993.
6. Unwin et al., 2021, DOI: 10.1109/JSTARS.2021.3089550.
Wetlands play a key role in the carbon cycle, emitting large amounts of carbon dioxide and methane through processes that are strongly dependent on the duration and timing of inundation. Therefore, measurements of inundation extent establish a benchmark for the current status of wetlands, which are currently poorly quantified, particularly in tropical, boreal, and coastal regions. Indeed, many traditional sensors have limitations in their ability to fully capture the temporospatial dynamics of terrestrial surface water extent in vegetated environments, since the vegetation canopy can obscure the presence of standing water from detection. Several recent studies with CYGNSS data have highlighted the ability of Global Navigation Satellite System Reflectometry (GNSS-R) to estimate inundation extent, even beneath vegetation, due to the presence of a strong coherent component in the forward reflections.
In this work, we quantify the sensitivity of L-band GNSS-R to inundation conditions with respect to SAR inundation signatures. Several products derived from both Level 1 and Raw IF CYGNSS data are compared with C-band (Sentinel-1) and L-band (NASA UAVSAR/ISRO ASAR) SAR measurements. The test site chosen for the comparative analysis is Yucatan Lake, an oxbow lake located along the Louisiana-Mississippi border (a UAVSAR-NISAR cal/val site), which is affected by seasonal inundations due to the overflow of the Mississippi River during heavy rainfall events. Yucatan Lake is well suited to our investigation being a land area with heterogeneous surface cover, ranging from open water to water underneath vegetation to only vegetation. The sensitivity of the L-band GNSS reflected signal to inundated areas appears more prominent than the sensitivity of SAR polarimetric backscatter to water under vegetation, particularly for C-band. The results obtained from the comparative analysis pave the way for possible synergies between polarimetric SAR and passive GNSS reflectometry observations for wetland monitoring, which will be discussed in the context of existing and forthcoming space-borne missions.
Here, we present two recent research directions of GNSS Reflectometry (GNSS-R) studies that meet each other – one provides the physical insight, and the other offers the tool for higher quality, and eventually, more varied data products.
Rain splash, by altering the ocean surface state, leaves signatures detectable by GNSS-R over oceans at low winds (≲ 6 m/s). Recently, the potential has been investigated by combining Left and Right Hand Circular Polarized (LHCP and RHCP) GNSS-R observations, such as those foreseen in future missions like HydroGNSS. The feasibility of observing the modifications in the Sea Surface Salinity (SSS) due to the accumulation of freshwater is physically discussed. According to the theory, the SSS drop due to precipitation increases the reflected power of the LHCP signals, whereas it decreases the power scattered in RHCP. The potential is also characterized using measurements from a coastal experiment. It is also shown how the signal power decreases due to the increased roughness caused by rain splash – e.g., the average LHCP power drops by ≈ 5 dB at an elevation angle of 45°.
Artificial Intelligence for GNSS-R (AI4GNSSR) refers to a series of interdisciplinary studies implementing deep learning in this novel remote sensing domain. Recently, the capability of this data-scientific approach for operational wind speed retrieval from the measured Delay-Doppler Maps (DDMs) has been characterized. CyGNSSnet was developed based on convolutional layers for direct feature extraction from Bistatic Radar Cross Section (BRCS) DDMs, along with fully connected layers for processing ancillary technical and higher-level input parameters. CyGNSSnet-derived winds are evaluated over a temporally blind data set. They lead to an RMSE of 1.36 m/s, a significant improvement of 28% over the officially operational retrieval algorithm. We discuss the advantages and disadvantages of CyGNSSnet, and finally pose the question of whether deep learning can assist in debiasing the discussed rain effects in wind speed products and possibly in detecting rain using GNSS-R measurements.
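To make the two-branch idea concrete, a minimal Keras sketch in the spirit of CyGNSSnet is given below; the layer sizes and the eight ancillary inputs are assumptions for illustration, not the published architecture.

import numpy as np
from tensorflow.keras import layers, Model

# DDM branch: CyGNSS Level-1 BRCS DDMs have 17 delay x 11 Doppler bins.
ddm_in = layers.Input(shape=(17, 11, 1), name="brcs_ddm")
x = layers.Conv2D(16, 3, activation="relu", padding="same")(ddm_in)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.GlobalAveragePooling2D()(x)

# Ancillary branch: e.g. incidence angle, antenna gain, range corrections (8 assumed here).
aux_in = layers.Input(shape=(8,), name="ancillary")
a = layers.Dense(16, activation="relu")(aux_in)

merged = layers.concatenate([x, a])
merged = layers.Dense(32, activation="relu")(merged)
wind = layers.Dense(1, name="wind_speed")(merged)

model = Model(inputs=[ddm_in, aux_in], outputs=wind)
model.compile(optimizer="adam", loss="mse")

# Fit on random placeholder data just to show the expected input/output shapes.
ddms = np.random.rand(64, 17, 11, 1).astype("float32")
aux = np.random.rand(64, 8).astype("float32")
ws = np.random.rand(64, 1).astype("float32")
model.fit([ddms, aux], ws, epochs=1, batch_size=16, verbose=0)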
Spire Global, Inc., a leading global provider of space-based data, analytics, and space services, designs, builds, and operates a growing constellation of Earth observation nanosatellites in low-Earth orbit (LEO). Spire satellites carry a state-of-the-art GNSS receiver to perform a variety of Earth observations: GNSS radio occultation (GNSS-RO) for atmospheric sounding, ionospheric slant total electron content and scintillation for space weather monitoring, GNSS reflectometry (GNSS-R) for soil moisture, sea ice, and ocean wind remote sensing, as well as supplying satellite state vectors (i.e., precise orbit determination (POD)) data that can be used to derive information about the thermosphere and solid Earth.
The Spire GNSS-RO nanosatellite constellation is currently the largest source of RO data in the world and operationally collects over 20,000 RO events per day. A portion of these atmospheric soundings are delivered to NOAA and EUMETSAT for near real-time data assimilation into weather forecasting models. Additionally, Spire provides these datasets, from Level 0 raw data to Level 2 derived atmospheric profiles, to NASA under the Commercial Smallsat Data Acquisition Program (CSDA) after a 30-day delay and to ESA researchers upon request through the ESA Earth Online Program. Recent internal and external studies from a variety of organizations have shown that Spire RO datasets are of similar quality to ones derived from larger RO instruments. The quality of Spire’s RO measurements, combined with the unmatched quantity and diverse geographical distribution of profiles derived from Spire’s nanosatellite constellation, yields a demonstrable positive impact on NWP forecast metrics.
In this presentation, we will overview the status and growth of Spire’s nanosatellite constellation in the context of RO collection and processing. We will present key statistics of Spire’s RO dataset from both qualitative and quantitative perspectives and also highlight results from third-party evaluations and operational assimilation of Spire RO. Finally, data products derived from Spire’s RO processing chain and available to users under the aforementioned data acquisition programs will be summarized for interested users.
At ECMWF we have performed ensemble data assimilation (EDA) experiments and Observing System Experiments (OSEs) using an extensive set of GNSS-RO observations, including Spire and COSMIC-2. This is done to study the spread-skill relationship and compare to previously performed theoretical studies. First results show that adding Spire or COSMIC-2 reduces the spread for temperature by about 9% at 10hPa in the southern hemisphere, whereas adding Spire and COSMIC-2 reduces the spread by 14%. In the tropics the addition of COSMIC-2 has the largest effect on reducing the spread by about 13% at 10hPa, whereas Spire reduces the spread by 5%. In general, the spread in temperature reduces with more GNSS-RO data being added, with the larger reductions happening in the stratosphere. When we compare this reduction in ensemble spread by adding new GNSS-RO data with the change in T+12h forecast error statistics in the corresponding OSEs, it can be seen that both measures are qualitatively consistent. Also, results show that this is partially true when ensemble spread is evaluated against radiosonde observations. The challenges when studying ensemble spread values and comparing them with forecast error statistics or observations are numerous. For example, one must be fully aware that for the EDA experiments the variability of the perturbations does not grow sufficiently through the forecast (under-dispersive) in some regions and height levels. This means the EDA can underestimate the impact of the addition of GNSS-RO data in these areas. Furthermore, the evaluation of forecast error statistics depends on the choice of analysis as a reference, which has limitations. Also, the model resolution of the experiments does matter for which scales can be captured at the various height levels. Nevertheless, in the tropics where most of the GNSS-RO data is located, a linear relationship between ensemble spread and variance in first guess (FG) departures can be seen at higher altitudes. Here, ensemble spread and variance in FG departures can be used to see the effect from adding GNSS-RO data which shows a reduction in their values.
For GNSS-R applications, knowledge of the scattering regime a particular measurement contains is often needed in order to optimally extract geophysical information. Most land and ocean surfaces give rise to diffusely scattered GNSS reflections. Nonetheless, several GNSS-R spaceborne observations have shown the presence of coherent reflections over non-marine environments, such as rivers and lakes, inundated areas (even when covered by vegetation) and flat agricultural regions. Coherent reflections are distinguished by their high reflected power, fine spatial resolution, and phase information. Their detection is thus essential for emerging applications, including global inland water mapping, flood detection, and wetland monitoring, where the scene inhomogeneity requires fine resolution. However, existing GNSS-R instruments have limitations in their ability to exploit the full potential of acquired measurements when the reflection from the surface is coherent. Indeed, their standard products, i.e., Delay-Doppler Maps, are generated regardless of whether the reflection measurements are coherent or not and contain contributions from large collection areas. To reveal the presence of water in a scene and its variability, it is therefore necessary for future spacecraft receivers to be able to detect coherence in real time and, once coherent reflections are identified, to treat them with different processing than the conventional one.
In this work, we revisit the entropy metric for measuring the coherence of the scattered GNSS-R signal, in view of its real-time implementation. The metric, introduced in [1], relies on the generalized eigendecomposition (GED) of the correlation matrix of the complex zero-Doppler delay waveforms that are obtained after correlation of the complex GNSS-R signal with a replica of the PRN sequence. The computational complexity associated with the eigenvalue determination limits the possibility of a real-time implementation of entropy. Fast entropy is then proposed, which overcomes this limitation since it no longer requires GED. In the new approach, the computation is simplified to a binary case of only two eigenvalues of the correlation matrix: the largest eigenvalue, which is computed using the power method, and the second eigenvalue, which is defined as the average of the remaining eigenvalues, derived from the matrix trace and the first eigenvalue. The fast entropy metric turns out to be particularly advantageous because it is easy to implement on parallel architectures. In this work we also compare the performance of fast entropy against the original entropy metric and evaluate the impact of some tunable parameters of the algorithm, such as the number of waveforms and the number of samples of each waveform, on the computational complexity as well as on the ability to discriminate mixtures of coherent and incoherent reflections occurring in realistic cases.
[1] I. M. Russo, M. di Bisceglie, C. Galdi, M. Lavalle and C. Zuffada, "Entropy-Based Coherence Metric for Land Applications of GNSS-R," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3125858.
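As a rough numerical sketch of the idea described above (not the implementation in [1]; the normalisation details and the synthetic data are our assumptions), the dominant eigenvalue can be obtained with the power method and the remaining eigenvalues lumped into a single term derived from the trace:

import numpy as np

def fast_entropy(waveforms, n_iter=50):
    """waveforms: complex array (n_waveforms, n_samples) of zero-Doppler delay waveforms."""
    n, m = waveforms.shape
    R = waveforms.conj().T @ waveforms / n           # sample correlation matrix (m x m)
    v = np.random.default_rng(0).normal(size=m).astype(complex)
    for _ in range(n_iter):                          # power method for the dominant eigenvalue
        v = R @ v
        v /= np.linalg.norm(v)
    lam1 = np.real(np.conj(v) @ R @ v)
    rest = np.real(np.trace(R)) - lam1               # sum of the remaining eigenvalues
    p = np.array([lam1, rest]) / np.real(np.trace(R))
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))            # low entropy => coherent reflection

# Synthetic test: a nearly rank-one (coherent) stack of waveforms gives low entropy.
rng = np.random.default_rng(1)
base = rng.normal(size=64) + 1j * rng.normal(size=64)
stack = np.outer(rng.normal(size=200) + 2.0, base) \
        + 0.05 * (rng.normal(size=(200, 64)) + 1j * rng.normal(size=(200, 64)))
print(fast_entropy(stack))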
Currently, two operational soil freeze and thaw (FT) products are available on a global scale and on a daily basis: the SMOS and SMAP FT products. They both rely on passive microwave observations at L-band frequencies. The L-band is highly useful for several reasons. First, a narrow frequency band (1400 – 1427 MHz) is reserved for radioastronomy by the International Telecommunication Union (ITU); this band should be free of man-made radio-frequency interference (RFI). Second, at L-band the permittivity contrast between free liquid water and ice is very large, resulting in large variations in observed emissivity due to soil freezing and thawing. Third, at L-band the wavelength is also relatively long (21 cm at 1400 MHz), which enables gathering information from beneath the soil skin, from a surface layer of approximately 5-10 cm thickness.
Both SMOS and SMAP satellites have already exceeded their three-year nominal mission lifetime (SMOS operations started in 2010 and SMAP in 2015). SMOS-HR, a candidate follow-up mission for SMOS, is currently in a Phase A study. ESA will be launching a multi-frequency radiometer (i.e., CIMR) in the frame of the Copernicus programme. CIMR will include an L-band radiometer, however with slightly poorer spatial resolution and a fixed incidence angle (compared to SMOS). In addition, CIMR will not be launched and operational before 2026.
Recently, there have been several studies on using observations exploiting signals of opportunity for detection of soil freezing and thawing. Comite et al. (2020) and Rautiainen et al. (2021) demonstrated the capability of using TechnoDemoSat-1 (TDS-1) measurements for global scale freeze and thaw observations. Carreno-Luengo and Ruf (2021) showed the potential of CYGNSS observations for monitoring the annual freeze and thaw transition in the Andes Mountains.
The HydroGNSS satellite was selected as the second mission in ESA’s new line of research missions, called Scout missions. Scout missions are intended to demonstrate the capability of small satellites to deliver value-added science. HydroGNSS is a scientific demonstrator of GNSS Reflectometry (GNSS-R) for land applications. It uses the global navigation satellites as bistatic radar transmitters and continuously measures the scattered reflections from the Earth’s surface to perform geophysical measurements. The main objective of the HydroGNSS mission is to measure four land application parameters closely related to the essential climate variables (ECVs): Soil Moisture, Inundation, soil Freeze and Thaw, and Above Ground Biomass. In our presentation we concentrate on the soil freeze and thaw parameter.
We have analysed the TDS-1 Delay Doppler Maps (DDM), namely the calibrated peak power of the DDMs, to demonstrate the capability of using GNSS-R observations for soil freeze and thaw monitoring. The detection of freeze and thaw transitions is based on the high contrast between free liquid water and ice, as in L-band radiometry. The reflected power from frozen soil is lower than from thawed soil. Using the TDS-1 observations, a contrast of approximately 4 dB to almost 10 dB was found between frozen and thawed soils, depending on the target land cover type. Forested areas had significantly lower contrast than, e.g., areas with low vegetation. A simple threshold-based method with predetermined frozen and thawed ground references was applied to one year of TDS-1 data. Due to the limited amount of data provided by TDS-1 (a technology demonstration mission whose operations cycled between several payloads), spatial and temporal averaging was used – data were gridded to a 75x75 km EASE-2 grid (polar projection) and a 30-day running mean was introduced. The results were compared with SMOS F/T data and generally showed good consistency. Unfortunately, TDS-1 operations had some outages, which meant it was not possible to achieve good spatial coverage on a global scale during the soil freezing periods. The best coverage was during the melting period, which is more challenging for soil state sensing due to the overlying wet snow layer. HydroGNSS will not have such coverage limitations given its continuously operating GNSS-R payload. In our presentation, we will describe the methods for soil freeze and thaw monitoring applied to HydroGNSS and show the results obtained with TDS-1 data.
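As a minimal sketch of such a threshold-based flagging with a 30-day running mean (the synthetic series, the midpoint threshold and the reference values below are our assumptions, not the actual TDS-1 processing):

import numpy as np

def freeze_flags(power_db, frozen_ref, thaw_ref, window=30):
    """Flag days as frozen when the smoothed reflected power is closer to the frozen reference."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(power_db, kernel, mode="same")    # 30-day running mean
    threshold = 0.5 * (frozen_ref + thaw_ref)                # midpoint between the references
    return smoothed < threshold                              # frozen soil reflects less power

# Synthetic daily series with a ~6 dB frozen/thawed contrast plus noise.
rng = np.random.default_rng(0)
day = np.arange(365)
power = np.where((day < 90) | (day > 300), -16.0, -10.0) + rng.normal(0.0, 1.5, 365)
flags = freeze_flags(power, frozen_ref=-16.0, thaw_ref=-10.0)
print("days flagged as frozen:", int(flags.sum()))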
D. Comite, L. Cenci, A. Colliander, and N. Pierdicca, “Monitoring freeze-thaw state by means of GNSS reflectometry: An analysis of TechDemoSat-1 data,” IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens., vol. 13, pp. 2996-3005, May 2020.
K. Rautiainen, D. Comite, J. Cohen, E. Cardellach, M. Unwin and N. Pierdicca, "Freeze-Thaw Detection over High-Latitude Regions by Means of GNSS-R Data," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3125315.
H. Carreno-Luengo and C. S. Ruf, "Retrieving Freeze/Thaw Surface State from CYGNSS Measurements," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3120932.
N. J. Rodriguez-Fernandez, E. Anterrieu, F. Cabot, J. Boutin, G. Picard, T. Pellarin, et al., "A follow-up for the Soil Moisture and Ocean Salinity mission," in 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2021, pp. 7831-7834.
Various publications have shown that accurate monitoring of sea levels can contribute to the understanding of climate change. In the coastal environment, precision sensors include coastal tide gauges, laser altimeters and Global Navigation Satellite System (GNSS) buoys. Coastal ground-based GNSS-Reflectometry (GNSS-R) sensors can also be used to reach the accuracy allowed by the GNSS systems. This work addresses altimetry measurements using carrier-phase GNSS-R, a technique that has been shown to provide centimetric precision using a low-cost setup located on the coast. In this approach, the path difference between the direct and reflected signals sensed by a receiving system is calculated from the associated carrier-phase difference in order to derive the height between the reflecting water surface and the receiving system.
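For reference, for a flat horizontal reflecting surface and a satellite at elevation angle ε, the path and phase differences are commonly modelled as (our notation, added here only to make the linear dependence on height explicit; it is not taken from the abstract itself):

\Delta\rho = 2\,h\,\sin\varepsilon, \qquad
\Delta\phi = \frac{2\pi}{\lambda}\,\Delta\rho = \frac{4\pi h}{\lambda}\,\sin\varepsilon

where h is the height of the receiving antenna above the water surface, ε the satellite elevation angle and λ the carrier wavelength; the carrier-phase difference is thus linear in h for a given geometry.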
In the literature, we generally find three approaches for altimetry measurements using GNSS-R: interferometric GNSS-R (iGNSS-R), the Interference Pattern Technique (IPT) and conventional GNSS-R (cGNSS-R). In this work, the height between the reflecting surface and the receiving system is obtained, using the cGNSS-R approach, from the carrier-phase difference observations based on a maximum likelihood linear-circular regression estimator. Indeed, the phase difference is an angle which follows a linear model depending on the height of the receiving antenna. In the proposed estimator, the accuracy and temporal resolution of the height estimates can be improved by fusing the information obtained from the available satellite signals. Since the estimator is defined based on the maximum likelihood approach, the obtained accuracy is theoretically the highest that can be reached using GNSS-R phase data following the defined model for flat reflecting surfaces. A study of the theoretical accuracy of GNSS-R altimetry depending on the geometry of the satellite constellation and the signal-to-noise ratio has been conducted, also defining the amount of data required to reach millimetric accuracy. It is shown through a theoretical study that, for a classical GPS satellite constellation, it is possible to reach millimetric accuracy for a measurement rate of 100 Hz when the reflecting water surface is smooth.
A set of experiments has been conducted to validate the proposed approach using real GPS L1 data. The first experiment, carried out over several days, aimed at simulating a reflection occurring on a perfectly flat surface using two GNSS antennas. A second experiment took place in a harbour basin open to the sea, using an RHCP and an LHCP GNSS antenna. These experiments showed that millimetric accuracy can be achieved at a measurement rate of 50 Hz.
Global Navigation Satellite System-Reflectometry (GNSS-R) is an innovative and rapidly developing approach to Earth Observation that makes use of signals of opportunity from Global Navigation Satellite Systems, which have been reflected off the Earth’s surface. GNSS-R data collected over ocean have shown sensitivity to several surface parameters, including ocean wind speed, which is one of the primary objectives of the NASA CyGNSS mission, a constellation of eight small satellites launched in 2016.
Following a number of updates in the calibration strategy, the CyGNSS v3.0 products were released in 2020. Using CyGNSS data collected above the ocean surface and geophysical reference data from ECMWF ERA-5 reanalysis model, an assessment of CyGNSS Level-1 (L1) calibration performance is presented. L1 data collected by the individual CyGNSS units are shown to be well inter-calibrated and remarkably stable over time, a significant improvement over previous versions. However, prominent geographical biases of Normalised Bistatic Radar Cross Section (NBRCS) estimates are found in the analysis. These appear to be linked to a number of residual dependencies including the originating GPS transmitter and bistatic geometry.
An initial approach aimed at mitigation of the observed biases is presented here, and following the introduction of this additional calibration step, an investigation of signal sensitivity to ocean and atmosphere variables other than wind speed is shown.
A number of geophysical variables theorised to modulate the forward-scattered GNSS power were investigated over a range of ocean wind speeds. Thanks to its unprecedented catalogue size, the CyGNSS constellation allows for the first time a statistically robust analysis in which the record can be segmented along multiple dimensions and geophysical effects individually separated. Other than surface wind speed, strong sensitivity is found to both significant wave height (SWH) and total precipitation. Once these effects have been isolated and removed, an investigation of the sensitivity to sea surface temperature (SST) and sea surface salinity (SSS) is presented.
Finally, a strategy for the inversion of precipitation is presented, with promising preliminary results that are found to be in good agreement with reanalysis data from ECMWF ERA-5.
This work emphasizes that GNSS-R instrument calibration is still a work in progress and that, once such contaminating effects are mitigated, realistic estimates of additional surface and atmosphere variables other than wind speed can also be derived using GNSS-R sensors.
Sea level rise and sea state variability due to climate change and global warming are major research topics in the scientific community. Ocean weather conditions considerably impact coastal areas, and wind speed (WS) and significant wave height (SWH) are useful parameters for monitoring sea state threats to the coasts. GNSS reflectometry (GNSS-R) has shown considerable promise as a remote sensing technique for estimating ocean parameters. Multiple studies have been conducted successfully over the last two decades using GNSS-R ground-based, airborne and spaceborne data to retrieve geophysical properties of the ocean surface.
The focus of this study is to investigate the Doppler shift of the reflected signal as an observable to estimate the Doppler spread (DS) and determine its correlation with sea state changes, employing GNSS-R airborne data in coastal areas. An additional aim is to study the possibility of using the Doppler spread as a metric for coherent GNSS reflectometry for applications such as precise altimetry and precise total electron content (TEC) estimation. An experiment was conducted from the 12th to the 19th of July 2019 along the Opal Coast, between the cities of Calais and Boulogne-sur-Mer, France. The experiment consisted of multiple flights at an altitude of ~780 m (a.m.s.l.), and the direct and reflected signals were received by a dual-polarized (Right-Handed and Left-Handed Circular Polarization) antenna mounted on a gyrocopter.
A software receiver is used to process the direct and reflected signals from the right-hand channel. The resulting in-phase (I) and quadrature (Q) components (at a 50 Hz rate) of the reflected signals are analyzed in the spectral domain every ten seconds to obtain the relative Doppler shift and power estimates. Coherence is established by analyzing the phase observations obtained from I and Q. The sensitivity of the reflected signal parameters to the sea state is determined by correlating the Doppler spread with wind speed and significant wave height. The latter two were obtained from ERA5, the atmospheric, land and oceanic climate reanalysis provided by the European Centre for Medium-Range Weather Forecasts (ECMWF).
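As an illustrative sketch of this spectral-domain step (the block length and the moment-based definitions of the shift and spread below are our assumptions, not necessarily those used in the study):

import numpy as np

def doppler_shift_and_spread(i_samples, q_samples, fs=50.0, block_s=10.0):
    """First and second spectral moments of 10 s blocks of complex I/Q samples."""
    z = i_samples + 1j * q_samples
    n = int(fs * block_s)
    shifts, spreads = [], []
    for start in range(0, len(z) - n + 1, n):
        spec = np.abs(np.fft.fftshift(np.fft.fft(z[start:start + n])))**2
        freq = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs))
        p = spec / spec.sum()
        mean_f = np.sum(freq * p)                                # relative Doppler shift (Hz)
        spreads.append(np.sqrt(np.sum((freq - mean_f)**2 * p)))  # Doppler spread (Hz)
        shifts.append(mean_f)
    return np.array(shifts), np.array(spreads)

# Synthetic example: a 0.5 Hz tone plus noise, one minute of 50 Hz samples.
fs = 50.0
t = np.arange(0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
z = np.exp(2j * np.pi * 0.5 * t) + 0.3 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))
shift, spread = doppler_shift_and_spread(z.real, z.imag, fs)
print(shift.round(2), spread.round(2))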
Initial results have shown promising performance for a calm sea (WS: 2.9 m/s and SWH: 0.26 m) and grazing angles. Satellites at low elevations (E < 10°) present a Doppler spread of 0.3 Hz, and its Pearson correlations with respect to WS and SWH are 0.89 and 0.75, respectively. The performance is relatively poor for high-elevation events (E > 30°): the DS increases up to 2.1 Hz and the correlations decrease to 0.55 and 0.42, respectively. Coherence conditions are still under study; however, preliminary phase analysis reveals coherent observations for events with elevations below 15° and a sea state with a significant wave height of 0.26 m.
1. Introduction
Mobile laser scanning (MLS) systems capture three-dimensional (3D) point clouds with high flexibility and precision, and are thus widely used for various applications. In recent years, there have been many scientific contributions aiming to process mobile laser scanning point clouds from urban scenes, focusing on segmentation [1-6], road extraction and modelling [7-14], building extraction and reconstruction [15-19], pole-like object extraction and classification [4, 20-26], trees [27-31], vehicles [3, 32-35], and other small objects such as bikes, benches, and pedestrians [36-39]. Geometric features are the most widely used in vehicle detection, such as length, width, height, area and rectangularity. Yao, Hinz, and Stilla [34] adopted a rule-based hierarchical classifier to recognize vehicles using the five features of elongatedness, area, planarity, verticality and vertical range. Based on the same five features, Zhang et al. [33] proposed an object-based point cloud analysis algorithm (OBPCA) to extract vehicles, defining a rule to identify a vehicle object. Our method extracts vehicle training data using strict rules applied to segments obtained from different segmentation methods in MLS data.
2. Method
2.1 Meaningful object detection
First, we expect to obtain the maximum benefit of segment-based classification. The segmentation process is carried out using Euclidean distance segmentation.
2.2 Vehicle training data extraction
A vehicle is a human-designed, approximately rectangular object, characterized by ranges of width (w_t), length (l_t), and height (h_t) of the segments. In order to acquire candidate vehicle segments, we first extract the segments using width, length and height thresholds of the corresponding bounding box (Bbox); the thresholds are set between the possible minimum and maximum values for a vehicle (w_t = 1.8-4 m, l_t = 2.5-18 m, h_t = 1-4 m).
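A minimal sketch of this pre-filter is given below, assuming each candidate segment is an (N, 3) point array and using, for simplicity, an axis-aligned bounding box rather than the oriented MBR used later in the paper:

import numpy as np

# Width, length and height ranges (m) for candidate vehicle segments, from the text above.
W_T, L_T, H_T = (1.8, 4.0), (2.5, 18.0), (1.0, 4.0)

def is_candidate_vehicle(points):
    """points: (N, 3) array of one segment; uses an axis-aligned bounding box."""
    extent = points.max(axis=0) - points.min(axis=0)
    length, width = np.sort(extent[:2])[::-1]          # longer horizontal side = length
    height = extent[2]
    return (W_T[0] <= width <= W_T[1]
            and L_T[0] <= length <= L_T[1]
            and H_T[0] <= height <= H_T[1])

# Example: a synthetic car-sized segment passes the filter.
rng = np.random.default_rng(0)
car_like = rng.uniform([0.0, 0.0, 0.0], [4.5, 1.9, 1.5], size=(500, 3))
print(is_candidate_vehicle(car_like))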
1. Plane fitting
A vehicle is an irregular cuboid with several planes. The side and top surfaces of small vehicles can be scanned, but the top surface of some large vehicles cannot be fully scanned because their height may exceed the sensor position. Therefore, a small vehicle must include two planes (Figure 1): a top surface parallel to the x-y plane, higher than ½ of the Z-range and with an area > 1 m², and a side surface parallel to the x-z plane with a length larger than 1/5 of the MBR length. For large vehicles, one plane parallel to the x-z plane with a length larger than 2 m is a necessary condition. In addition, the plane point density, width and length should lie within certain ranges (ρ_plane > 0.1 * ρ_mean and A_plane > 0.25 m²).
Figure 1. Diagram of the plane extraction results: the red and blue rectangles represent the y-slice and z-slice, respectively; the red and blue points represent the vertical and horizontal planes extracted with RANSAC.
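A small self-contained RANSAC plane fit is sketched below as an illustration of this plane extraction step (the tolerance, iteration count and synthetic data are our assumptions, not the paper's settings):

import numpy as np

def ransac_plane(points, dist_tol=0.05, n_iter=200, seed=0):
    """Fit one plane to (N, 3) points; returns the unit normal and an inlier mask."""
    rng = np.random.default_rng(seed)
    best_normal = np.array([0.0, 0.0, 1.0])
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue
        normal = normal / norm
        inliers = np.abs((points - p1) @ normal) < dist_tol
        if inliers.sum() > best_inliers.sum():
            best_normal, best_inliers = normal, inliers
    return best_normal, best_inliers

# A plane is "horizontal" (top surface) if its normal is close to the z-axis,
# and "vertical" (side surface) if the normal is nearly horizontal.
rng = np.random.default_rng(1)
roof = np.column_stack([rng.uniform(0, 4, 1000), rng.uniform(0, 1.8, 1000), np.full(1000, 1.4)])
normal, inliers = ransac_plane(roof + rng.normal(0.0, 0.01, roof.shape))
print(normal.round(2), int(inliers.sum()))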
2. y-slice.
The points projected onto the x-z plane depict the side streamline shape and the symmetric character of the vehicle. Features are calculated for three types of y-slice: 0.1 m-thick y-slices, the side slice, and the whole slice. The side slice consists of the ¼ of y-slices closest to the x-z plane, while the whole slice is generated by projecting all segment points onto the x-z plane. We calculate the convex hull area and boundary area of the three slice types; the {H_max} and {H_min} of these slices are calculated at 0.1 m intervals of the X-value, with n_x being the number of intervals in each slice, as Figure 2 shows.
(1) Continuity analysis
There is a possibility that a candidate segment contains multiple objects, multiple parts, or only one part of an object, caused by under-segmentation, over-segmentation, or occlusion. A training segment must contain one integrated vehicle or part of a vehicle; thus we select integrated segments using a gap-based continuity analysis of the whole slice at 0.1 m intervals. We remove candidates with more than three consecutive empty intervals (a 0.3 m gap) in the whole slice, to make sure there is only one vehicle in the segment. For large vehicles, the allowed gap is set to 10 intervals (a 1 m gap).
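A compact sketch of this gap check (the binning and thresholds follow the values above; the synthetic example is ours):

import numpy as np

def is_continuous(x_values, interval=0.1, max_gap=3):
    """Reject a candidate if more than max_gap consecutive 0.1 m bins along x are empty."""
    edges = np.arange(x_values.min(), x_values.max() + interval, interval)
    counts, _ = np.histogram(x_values, edges)
    gap, longest = 0, 0
    for c in counts:
        gap = gap + 1 if c == 0 else 0
        longest = max(longest, gap)
    return longest <= max_gap

# Example: a 0.6 m empty gap along x suggests two separate objects in one segment.
rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0.0, 2.0, 300), rng.uniform(2.6, 4.5, 300)])
print(is_continuous(x))              # False under the small-vehicle limit (3 bins, 0.3 m)
print(is_continuous(x, max_gap=10))  # True under the large-vehicle limit (10 bins, 1 m)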
(2) Symmetric character:
Mobile lidar side scanning makes it difficult to accurately find the axis of symmetry. Thus, area-similarity checks are carried out on the side slice, the whole slice and each y-slice (Figure 2).
Figure 2. The slice similarity analysis using the side slice.
First, the side slice should completely depict the vehicle side surface as a closed convex polygon; thus the side slice boundary area must be larger than 0.8 of the convex hull area (Eq. 1). Meanwhile, the side slice and whole slice should be highly similar, satisfying Eq. 2. In addition, most y-slices are similar to the side slice, but some inner y-slices are unclosed polygons because of occlusion. Thus, the similarity between each y-slice and the side slice is evaluated using the difference of convex hull areas: when the fraction of y-slices whose area difference is smaller than ±0.25*A_convex^(side slice) exceeds 0.6 (Eq. 3), the candidate is retained.
(3) Streamline in side slice:
There should be a chassis character formed by {H_min} and a streamline character formed by {H_max} (Figure 2). The chassis character requires that {H_min}_(side slice) is lower than 1/5*h_Bbox over more than ¾*n_x intervals, and that the difference of {H_min}_(side slice) between adjacent intervals is smaller than 0.1 m over more than ¼ of n_x. For the streamline character, the difference of {H_max}_(side slice) between adjacent intervals must be smaller than 0.2 m over more than ½*n_x, and {H_max}_(side slice) must be larger than 1/5*h_Bbox over more than ¾*n_x. In addition, we add rules based on the windshield streamline shape to describe the difference between vehicles and low cube-shaped objects: there should be a rising trend of {H_max}_(side slice) from the front or the back, defined as diff({H_max}_(side slice)) keeping a positive or negative sign over more than 4 intervals (0.4 m).
3. z-slice
In the case of full scanning, the convex shapes of the different z-slices approximate rectangles of different sizes but with similar edge directions; meanwhile, there are only slight distance differences between the MBR edges of the z-slices that are close to the fully scanned vehicle surface. Thus we set the following rules for the z-slices.
(1) z-slice area
For fully scanned vehicles, the convex hull area is close to the MBR area: a vehicle segment should have more than 75% of slices in which the ratio between convex hull area and MBR area is larger than 75%. In addition, the convex hull area of the horizontal profile at ¼ height should be larger than the area at ¾ height because of the streamline structure.
(2) z-slice direction and distance difference:
The MBR direction difference is estimated by the angle between the MBR edge of each z-slice and the long side of the whole MBR, as Figure 3 shows. A z-slice with an angle smaller than 20° is regarded as having a similar direction to the whole MBR. A vehicle should have more than 75% of n_z z-slices with similar direction.
Figure 3. The diagram of z-slice analysis.
Based on the extracted z-slices with similar direction, the two edges within a 20° angle of the whole MBR long side can be determined. The edge distance is calculated from the vertices of these two edges and the fully scanned long side. A vehicle should have more than 75% of n_z z-slices whose minimum edge distance is smaller than 1/4 of the MBR.
(3) L-shape detection
The vehicular chassis character is commonly indicated as an L-shape, using the lowest ¼ of z-slices closest to the x-y plane; these z-slices are regarded as chassis slices. The L-shape consists of two sides: one side is along the x-axis, and the other is perpendicular to the x-axis, close to one of the MBR width sides. In this paper, the minimum distance {d_min}_(L-shape) between the chassis-slice point cloud and each L-shape side is calculated at 0.1 m intervals along each L-side; n_l1 and n_l2 denote the number of intervals along the L-shape long and short sides, respectively. Meanwhile, the distance differences between adjacent intervals are calculated to analyse how the distance changes. For the L-shape long side, the number of 0.1 m intervals in which the distance difference stays within 0.1 m and {d_min}_(L-shape) < 1/5 of the width must be larger than 75% of the interval number. For the L-shape short side, the number of intervals in which the distance difference stays within 0.1 m and {d_min}_(L-shape) < 1 m must be larger than 50% of the interval number.
Figure 4. Diagram of the top view, showing the green points in the side slice, the red points in one of the y-slices, and the red edges used for L-shape detection.
2.3 Classification using PointNet++
In this study, we use PointNet++ [75] to classify the whole area using the training data extracted by the strict rules. PointNet++ is an advanced version of PointNet that incorporates hierarchical feature learning by extracting features at multiple contextual scales.
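To illustrate the hierarchical feature learning idea, one "set abstraction" level can be written as follows; this is only a conceptual NumPy sketch with random placeholder weights, not a trained PointNet++ implementation:

import numpy as np

def farthest_point_sampling(xyz, n_samples, rng=np.random.default_rng(0)):
    """Greedily pick n_samples points that are maximally spread out."""
    idx = [int(rng.integers(len(xyz)))]
    dist = np.linalg.norm(xyz - xyz[idx[0]], axis=1)
    for _ in range(n_samples - 1):
        idx.append(int(dist.argmax()))
        dist = np.minimum(dist, np.linalg.norm(xyz - xyz[idx[-1]], axis=1))
    return np.array(idx)

def set_abstraction(xyz, feats, n_centroids=64, radius=0.5, out_dim=32):
    """Sample centroids, group neighbours in a ball, apply a shared point-wise 'MLP', max-pool."""
    rng = np.random.default_rng(1)
    W = rng.normal(scale=0.1, size=(feats.shape[1] + 3, out_dim))  # placeholder shared weights
    centroids = xyz[farthest_point_sampling(xyz, n_centroids)]
    new_feats = np.zeros((n_centroids, out_dim))
    for i, c in enumerate(centroids):
        mask = np.linalg.norm(xyz - c, axis=1) < radius            # ball-query grouping
        local = np.hstack([xyz[mask] - c, feats[mask]])            # relative coords + features
        new_feats[i] = np.maximum(local @ W, 0).max(axis=0)        # point-wise MLP + max pool
    return centroids, new_feats

pts = np.random.default_rng(2).uniform(0, 4, (2048, 3))
centroids, feats = set_abstraction(pts, np.ones((2048, 1)))
print(centroids.shape, feats.shape)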
3. Results and conclusion
We conclude that the presented method can well distinguish the vehicle class from the point clouds of complex urban environments. The prior knowledge further helps us to correct certain mislabeled classes and improves the labeling accuracy at the expense of the proportion of the training samples.
1- Handling the Auxiliary Data Files (ADF) of open-source Level-1 and Level-2 EO data processors based on the inversion of the observations through a radiative transfer model is already an issue when handling confidential data such as national VHR DEMs/DTMs, defence intelligence maps, surveys performed by research labs before they are publicly published, commercial information, etc., which has led us to introduce Trusted Execution Environments (TEE).
The update of L2 EO data processors based on Machine Learning (ML) with training data sets, such as the outcome of yearly agriculture campaigns and information on crops, manure and watering, is also an issue when the output is of national economic value; in some countries it is classified Confidentiel Défense. When one uses data from crowdsourcing, there is a need to conform to the GDPR. It is all the more complex when Federated Learning (FL) is involved, because sharing the individual data is prohibited in these cases.
2- It is understood that ESA would like to advance satellite Earth Observation (EO) analytics with ML by integrating EO data sets with large non-EO data sets made of confidential information such as census data, social survey data, Internet-of-Things (IoT) traffic, density of 5G mobile phone users, etc., for the delivery of Level-4 products, with the non-EO data adding context to EO L1, L2 and L3 products. Privacy-enhancing technologies (PETs) are called upon to fulfil this requirement.
3- Teaming with the French company COSMIAN (https://cosmian.com/) and with the support of the Estonian company Cybernetica (https://cyber.ee/), we reviewed cryptographic techniques such as: asymmetric/symmetric encryption, which provides end-to-end security at rest and in transit; Functional Encryption (FE), which authorizes computations on encrypted data and only reveals the results of these computations; Fully Homomorphic Encryption (FHE), which processes encrypted data and provides encrypted results; Secure Multi-Party Computation (MPC), which shares computation without revealing the input data; and Secure Enclaves, which protect data and code at runtime against software attacks, using a TEE.
These techniques, which make it possible to manipulate confidential data, compute insights and execute algorithms without leaking the source data, are currently being tested in relevant scenarios of protected non-EO data collection and processing together with EO data, on two platforms: CipherCompute by COSMIAN, and Sharemind, an MPC platform for secure client-server applications licensed by Cybernetica, using the open-source privacy-enhanced business process mapping software Pleak (Privacy LEAKage) developed by the University of Tartu under a US DARPA (Defense Advanced Research Projects Agency) programme. Note: an alternative to software-based techniques is the Intel Software Guard Extensions (SGX) TEE, an extension of the instruction set of Intel processors which enables developing secure applications even when the host operating system is not trusted; in Sharemind it is used in an extension called Hardware Isolation (HI). The CipherCompute platform has four cryptographic cores, developed in the Rust language: FE, FHE, MPC, and Secure Enclaves (SE) similar to SGX.
4- This is a work in progress, and at LPS2022 we will review the general needs and specifications, as well as the expectations in terms of performance, with the advantages and drawbacks of each of the PETs. To illustrate this, preliminary results will be shown for a use case, i.e. maritime surveillance / fisheries monitoring, based on an ML processor for the detection of anomalous behaviours.
On average, 10.1 named storms occur during the hurricane season from June to November. These storms often develop into hurricanes, which are among the most destructive natural disasters. Damaged houses, businesses, and infrastructure are among the most common consequences. Hurricane Katrina in 2005 resulted in 82% of people along the Gulf Coast losing access to energy supply for several days. Roughly half of the outages occurring during hurricanes can be traced back to trees falling into power lines. The severity of such events is likely to increase, resulting in a higher degree of damage.
Infrastructure networks such as railways and power lines are extremely vulnerable to storms, and utilities’ risk management is a major challenge. To manage natural hazards, diminish vulnerability, and reduce the pressure on the infrastructure network, reliable, condition-based vegetation management is essential.
A comprehensive dataset of the condition of vegetation adjacent to the networks is available with satellite data from a growing number of operators. Satellites image the Earth every few days and generate vast amounts of data. LiveEO has developed a data-agnostic, automated solution that ingests data from different earth observation satellite constellations, with different temporal, spatial, and spectral resolutions, and combines multiple data sources for vegetation analysis on a country scale.
Our proprietary machine learning algorithms identify vegetation close to overhead lines and its height using stereoscopic satellite imagery. The system determines the three-dimensional distance of vegetation from the conductor, its species, and its vitality. These factors populate a risk model that gives utilities the possibility to have a complete overview of the vegetation risk along their network and prioritize maintenance tasks which are automatically generated based on the risk assessment.
The satellite insights are delivered via web and mobile applications. Integrated work planning functionality in the web app enables vegetation managers to assign work orders to field workers, who access them via the mobile app. Through this end-to-end approach, satellite data directly triggers cutback and inspection tasks along the network. It allows utilities to implement condition-based risk management and thus helps them to make their grid more resilient against the increasingly severe weather conditions that come with climate change.
In our presentation, we would like to explain how we developed a highly scalable, data-agnostic solution and overcame challenges in scalable data acquisition and processing. We would also like to show how our close inter-industry collaboration with utilities around the globe has been essential to developing a true end-to-end solution that generates business-ready insights that are actionable without geospatial expertise.
The Arctic region is a very remote and vulnerable ecosystem but also rich in natural resources, which have been exploited for many decades. Activity includes the extraction of oil and gas, and mineral resources such as bauxite, phosphate, copper, iron ore, gold, nickel, and diamonds. These ecosystems are particularly vulnerable to any industrial accident. The lack of infrastructure and remoteness of the region means it can take a considerable time to respond to a spill. The Arctic has short summers, low temperatures, and limited sunlight, so it can take decades for Arctic ecosystems to recover from anthropogenic pollution. Examples of the potential hazards when exploiting natural resources in such fragile environments and the detrimental impact on the polar ecosystem and communities are all too frequent. In the case of the oil and gas industry, spills caused by the failure of old pipelines are a very regular occurrence.
Regular monitoring of these activities is critical to ensure any incident is quickly identified and the impact is swiftly contained. Given the geographical isolation of these activities, particularly inaccessible Arctic and sub-Arctic areas, remote sensing is an obvious technology to underpin any effective monitoring solution. Increasing availability in the public domain, together with recent advances in resolution, suggest satellite imagery can play a key role in effectively monitoring oil spills and is the focus for this study.
The remote sensing of polar regions and the detection of terrestrial oil spills have both been studied previously, however, there has been little work to investigate the two in combination. The challenge is how to detect an oil spill if it is from an unknown incident or illegal activity such as discharge. Oil spill detection by applying image processing techniques to Earth Observation (EO) data has historically focused on marine pollution. Satellite-based Synthetic Aperture Radar (SAR), with its day/night and all-weather capability and wide coverage, has proven to be effective. Oil spill detection with remote sensing in terrestrial environments has received less attention due to the typically smaller regional scale of terrestrial oil spill contamination together with the overlapping spectral signatures of the impacted vegetation and soils. SAR has not proven to be very effective onshore because of the false positives and consequent ambiguities associated with interpretation, reflecting the complexity of land cover.
A number of studies have highlighted the potential of airborne hyperspectral sensors for oil spill detection directly on bare sites, exploiting distinctive spectral signatures of hydrocarbon-bearing surfaces, with absorption bands identified in the short-wave infrared (SWIR) range at 1730 and 2300 nm. However, unlike spaceborne sensors, these devices do not provide regular coverage over broad areas. A limited number of hyperspectral satellites have been launched to date, but they have technical and practical constraints, particularly issues with the Signal-to-Noise Ratio (SNR) and the time and cost of data processing. The medium spatial resolution and long revisit times of most hyperspectral instruments to date also limit their use for identifying smaller incidents that often occur with high unpredictability.
No single sensor currently has all the characteristics required to detect the extent, impact and recovery from onshore oil spills. Combining technologies, both passive and active, and sensor platforms, from satellites to drones, will offer the most effective approach to monitoring oil contamination, particularly in vegetated areas. This study will look at the potential of combining medium spatial resolution imagery (Sentinel-2) for initial screening with high spatial (WorldView-3) and high spectral (PRISMA) resolution data, both covering the key SWIR bands, for site-specific analysis. The potential for deriving additional insight from SAR imagery (Sentinel-1 / Iceye) will also be examined for sites with clearly defined contamination issues.
The ESA feasibility study EO4Infrastructures was conceptualised for monitoring infrastructure by adapting innovative industrial capacity to the needs of railway operations in the context of the European railway network. In the case of Germany, GAF AG, as an established EO service provider, completed engineering, product development and production tasks to provide interferometric deformation information tailored to the needs of DB Netze, the operating entity for railway infrastructure in Germany.
DB Netz AG, as the rail infrastructure company of Deutsche Bahn AG, is responsible for the majority of the rail network in Germany. It operates Europe’s largest rail infrastructure and maintains, modernises and advances the rail network. Earth observation techniques besides other forms of observations (e.g. drones, and helicopters) are used and further developed at DB in various research projects. Requirements for future use cases will be defined and validated.
All user requirements were derived by a multidisciplinary team of DB Netze composed of infrastructure managers, planners and life cycle managers to reflect a maximum of practical expertise in real-world operations of DB Netze.
The following main categories were evaluated, specified and further conceptualised as use cases for the project:
o Monitoring of sound-proof walls
o Bridge monitoring
o Slope and embankment monitoring
o Monitoring of slow-driving points induced by ground movements
o Geologically induced movements: sub-erosion in karst areas, salt domes, mining sites and tectonically active areas
o Ground movements due to groundwater fluctuations
o Long-term monitoring of climate changes
The presented project differs from other infrastructure projects in Germany, which focused mainly on the utilisation of the national ground motion service provided as the baseline service in the framework of Copernicus. It addresses the utilisation of services at different levels of detail and resolution, integrating additional very high resolution information sources and their synergistic evaluation.
Medium-resolution ground motion maps generated from Sentinel-1 PSI data stacks from 2016 to 2021 over North-West Germany (250 x 150 km) were analysed with the support of validation layers such as Sentinel-2 images, Copernicus Land Cover (CLC), and geological layers containing salt structures and tectonics, among others.
High-resolution ground motion maps processed from TerraSAR-X interferometric stacks from 2018 to 2021 were used for the detailed analysis of those infrastructure elements which require a higher point density and spatial resolution. Two VHR Pleiades pan-sharpened image mosaics from July 2018 and April 2020, with a spatial resolution of 0.5 m, served as a validation source for the Hamburg-Altenwerder region covering an area of 53 km².
All results and data products were integrated by GAF AG and e-GEOS in a relevant portal and provided to experts of DB Netze.
In the overall context of the project the German case was validated according to a validation protocol and methodology established in the project consortium. The underlying approach and methodology will be discussed. The validation exercise itself was independently performed by the user DB Netz AG. Their final and concise validation of the provided information in correspondence to the initially derived user requirements will be discussed and presented.
Finally the usability and the benefit of the integration of the service and products into business processes of DB Netz AG will be discussed, regarding not only technical viability, but also the comparison with alternative solutions.
Even though the project is highly focused on railway infrastructure, it shows that our approach is suited to being applied to various infrastructure elements. A perspective is given and potential solutions will be presented.
The most important infrastructural elements for the geodetic application of Interferometric Synthetic Aperture Radar (InSAR) are integrated benchmarks, which combine satellite technologies with traditional geodetic technologies and thus serve as benchmarks for both Global Navigation Satellite System (GNSS) and InSAR measurements.
The Satellite Geodetic Observatory (SGO) already built a network of passive corner reflectors in 2009 (SENGA) near the Hungarian GPS Geokinematic Reference Network. This infrastructure is complemented by an electronic corner reflector (transponder), which is the first Sentinel-1 compatible active device in Hungary. We tested the transponder from July 2020 to October 2021. In our work we focused on the detection of the intensity of the emitted radar signal by the Sentinel-1 C-band VV-polarisation sensor using the GAMMA Remote Sensing software, with the 6-day repeat cycle availability of satellite images in ascending and descending passes. Hence, we could monitor and compare the pixel intensity (on a decibel scale) before and after the installation. The pixel value increased by around 15-20 dB, and we were able to compute the Radar Cross Section (RCS = 31 dBm²) and compare it with existing research. During the testing period the ECR was temporarily placed on the rooftop of the SGO, but in November 2021 the device was relocated and set up on a stable permanent stand with exact calibration positions and different mounting points (simulating manual ground deformation) to operate as an InSAR Persistent Scatterer (PS).
Therefore, we are able to evaluate and validate the sub-pixel displacements, the main lobe of the phase, the signal-to-clutter ratio (SCR) and the impulse response function (IPR) in a coregistered, deramped and oversampled Sentinel-1 data stack, knowing the precise phase centres and peak positions of the ECR. Furthermore, we have installed and precisely aligned two double back-flip corner reflectors as reference points to extend the passive remote sensing network. We demonstrate this concept through a practical case study in Hungary.
This presentation will summarize the results of a Spanish project called Prometeo. In this project, we propose to perform a continuous deformation monitoring of infrastructures using Persistent Scatterer Interferometry (PSI). This technique offers three important characteristics. First, it is sensitive to small deformations, in the order of a few millimeters. Second, it can cover wide areas, in the order of several thousands of square kilometers. Third, it provides systematic monitoring over time. These three characteristics are key to providing a proactive deformation monitoring of several infrastructures at the same time.
Persistent Scatterer Interferometry provides deformation velocities and time series. An important aspect is represented by the quality indices associated with such products. In this work, we propose two types of indices. The first one is a “traffic-light” label to be associated with single time series. The second one is a more detailed index, which is associated with each date (i.e. SAR acquisition date) of a given PSI time series. These indices are derived using a redundant set of interferograms and are computed by checking the consistency of the unwrapped interferometric phases. These indices reflect the occurrence of phase unwrapping errors related to a given PSI point. They represent valuable input for the analysis and exploitation of the PSI results.
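As a rough sketch of how such a phase-consistency check can work (the triplet-closure formulation and the traffic-light thresholds below are our assumptions, not the indices actually defined in Prometeo):

import itertools
import numpy as np

def triplet_misclosures(phases):
    """phases: dict {(i, j): unwrapped interferometric phase at one PS point}."""
    dates = sorted({d for pair in phases for d in pair})
    res = []
    for i, j, k in itertools.combinations(dates, 3):
        if (i, j) in phases and (j, k) in phases and (i, k) in phases:
            # For consistent unwrapping, phi_ij + phi_jk - phi_ik should be near zero.
            res.append(phases[(i, j)] + phases[(j, k)] - phases[(i, k)])
    return np.array(res)

def traffic_light(misclosures, tol=np.pi):
    """Simple per-time-series quality label from the fraction of bad closures."""
    frac_bad = np.mean(np.abs(misclosures) > tol) if len(misclosures) else np.nan
    return "green" if frac_bad < 0.05 else "orange" if frac_bad < 0.2 else "red"

# Toy example: one unwrapping error of 2*pi on the interferogram between dates 0 and 2.
phases = {(0, 1): 1.0, (1, 2): 0.8, (0, 2): 1.8 + 2 * np.pi,
          (2, 3): -0.5, (1, 3): 0.3, (0, 3): 1.3}
m = triplet_misclosures(phases)
print(m.round(2), traffic_light(m))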
For many years, remote sensing techniques have been a useful tool to provide non-invasive information of the land surface. Differential SAR Interferometry (DInSAR) is a mature and fully commercially operative technique widely used to provide centimetric-millimetric surface ground displacement measurements, usually with a high density of measurement points over a wide area of study. Over the last few decades, it has become a particularly relevant technique in infrastructure and asset management for the study of surface deformations across the body of an infrastructure or asset, and/or the effects of a geohazard on the behaviour of the asset. DInSAR is the only cost-effective method that permits the analysis of most individual infrastructure assets across an entire city at the same time as well as the possibility to measure historical deformations from available archive imagery. In construction projects, the technique is useful to provide information on ground stability before construction, during the construction of the infrastructure to monitor its effects and in the post-construction stage to evaluate the ground stability once the works ended.
Atlas, Sixense’s interferometric processing chain, has been developed around the core software GAMMA to successfully detect and monitor ground motions such as subsidence, heave, infrastructure/asset stability, among others. It has been applied in geotechnical and structural monitoring projects linked to urban construction activities, with particular focus on tunnelling monitoring and to support infrastructure management in all stages of their life cycle. Taking advantage of Sixense’s experience in geotechnical and automatic surveying, it has been extensively used for measuring vertical ground and structure movements. Moreover, it is in continuous development, both for processing and post-processing activities, to efficiently extract characterized information of maximum benefit to end users by implementing different algorithms and AI methodologies over InSAR Big Data results to provide ready-to-use, actionable information.
With the forthcoming launch of a new generation of satellites allowing for a shorter revisit and, simultaneously, the development of computing power for the automation of data processing and machine learning techniques, survey monitoring is on the eve of a fundamental transformation. Atlas, and all different DInSAR techniques, can nowadays be routinely applied, using globally available SAR datasets from satellites like ESA’s Copernicus Sentinel-1, and from other commercially available higher resolution SAR data. The scale and extent of the infrastructure and the deformation to be measured will determine the most suitable sensor to be used. In general, X-band sensors provide higher spatial resolution and higher geolocation accuracy compared to other SAR bands. These factors make X-band highly suitable for infrastructure applications. C-band sensors can be more useful for the study of linear infrastructures and those in non-urban or semi-urban areas as they better penetrate light vegetation cover.
In this presentation, ATLAS InSAR will be briefly presented, focusing on the technical challenges and opportunities presented by the unprecedented spatial and temporal volume of InSAR measurements, which is only going to increase with new sensors to come, in the field of asset management. Different ATLAS InSAR real case studies derived in the framework of several engineering projects around the world will be presented. In this regard, mid-resolution C-band Sentinel-1 data and high-resolution X-band TerraSAR-X data will be compared, and the technical advantages and disadvantages of both datasets for infrastructure management will be pointed out. In particular, high resolution TerraSAR-X data will be analysed over structures such as bridges and viaducts, airport premises, cities and single buildings, where high resolution sensors provide the spatial resolution and measurement accuracy commensurate with commonly required engineering scales, i.e., several measurement points over any element of infrastructure, ensuring the ability to analyse the behaviour of deformations spatially across an asset, as well as in time. Other examples of large infrastructures and large-scale project analyses will be shown using the Sentinel-1 C-band sensor, where it gives the best results. Finally, some extra developments performed using different algorithms, including AI algorithms, will be presented to extract the maximum actionable information from Atlas InSAR results. The paper will conclude with an analysis of how a global approach can drastically reduce the risk and cost of urban construction and asset management.
With considerable impacts due to climate change in the Arctic, a consistent record of the expanding local infrastructure is essential in order to assess environmental impacts as well as potential infrastructures at risk. The recently published Sentinel-1/2 derived Arctic Coastal Human Impact dataset (SACHI) provided a first satellite-based record of Arctic settlements and infrastructure for permafrost-affected coasts (100 km buffer). This dataset combined the results of two different classification approaches in order to handle the specific type of features and data scarcity: a pixel-based classification using Gradient Boosting Machines (GBM) and a windowed semantic segmentation approach (U-Net convolutional neural network architecture) using the deep learning framework Keras with the TensorFlow backend. Three classes were included: linear transport infrastructure (roads and railways), buildings, and other impacted areas. In this work we present an updated classification of the DL component in order to better characterize settlements affected by coastal erosion in permafrost regions. This needs to be considered for the further evolution of the Arctic Coastal Dynamics database, as addressed within the ESA Polar Science Cluster project EO4PAC (Earth Observation for Permafrost Affected Arctic Coasts). For this purpose, the reference data has been revised based on high resolution Google Hybrid Maps. All reference granules are located in sporadic to continuous permafrost and cover areas of Alaska, Canada, Greenland, Svalbard and Siberia. The classification is based on 10 atmospherically corrected Sentinel-2 bands, as well as NDBI, NDVI and NDWI, all processed to 10 m spatial resolution based on DSen2 (super-resolution). For each granule three acquisitions were used in order to account for possibly undetected clouds. The training granules were split into tiles of 512 x 512 pixels. Two updated classification versions were produced: one based on the same 459 training tiles as the SACHI product, with the same number of classes; the second version includes an additional class of airstrips and is based on 169 tiles. For each version the dataset was further split into training (80%) and validation (20%) sets used during the training process. A suitable ratio of tiles with and without infrastructure reference, and with airstrips respectively, proved to be crucial to achieve the best results. Independent validation is eventually performed using maps based on drone observations and high resolution satellite data across the Arctic.
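As a rough sketch of the windowed semantic segmentation set-up described above (512 x 512 tiles, 80%/20% split, Keras with the TensorFlow backend), the code below builds a deliberately tiny U-Net-style network; the file names, layer sizes and class count are illustrative assumptions rather than the SACHI configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(n_bands=13, n_classes=4, size=512):
    """Minimal U-Net-style encoder-decoder; sizes are illustrative only."""
    inp = layers.Input((size, size, n_bands))
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c2), c1])
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(n_classes, 1, activation="softmax")(c3)
    return Model(inp, out)

# Hypothetical tile stack (tiles, 512, 512, bands) and integer class masks.
x = np.load("tiles.npy")
y = np.load("masks.npy")
idx = np.random.permutation(len(x))
split = int(0.8 * len(x))                      # 80 % training, 20 % validation
train, val = idx[:split], idx[split:]

model = tiny_unet(n_bands=x.shape[-1])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x[train], y[train], validation_data=(x[val], y[val]), epochs=10)
```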
Signs of advancing climate change are noticeable all over the world. In Germany in particular, increasing extreme weather events, such as heavy rain and extreme heat, are putting a strain on infrastructures, necessitating regular monitoring of their deformation to prevent irreparable damage.
Based on this, dams of reservoirs are monitored regularly. While large gravity dams are equipped with plumb line measurements, most of them are observed by trigonometric measurement only once or twice a year. The use of satellite-based motion information can help to extend the intervals between traditional terrestrial measurements and supplement them with up-to-date monitoring information. Current satellite data from the Copernicus Sentinel-1 satellite, for example, provide a measurement every 6 days for Germany. Technical advances in differential Synthetic Aperture Radar interferometry, in particular the Persistent Scatterer Interferometry (PSI) technique, allow the identification and characterization of millimetre-scale deformation of infrastructures (Ferretti et al. 2001), and thus the regular and precise monitoring of dams.
Of fundamental importance for most PSI studies are the European C-band satellites, a series of satellites that have been operating since the early 1990s and provide ideal conditions for the application of these techniques. This allows retrospective analysis of phenomena that have been active for some time. The technique was developed with ERS-1/2 data and has since been further developed using ERS, Envisat, RADARSAT, and data at other frequencies such as TerraSAR-X. The launch of Sentinel-1 in 2014 resulted in a continuation of regular C-band satellite imagery for use in PSI techniques. For infrastructure monitoring purposes, X-band data are equally preferred (Adam et al. 2013). They usually allow the detection of more point scatterers due to their higher resolution, which is suitable for monitoring small-scale infrastructures. However, the free and open availability of C-band data from the Copernicus programme at high repetition cycles makes these data a real game-changer for infrastructure monitoring.
Persistent scatterer interferometry is already being used in a variety of applications, not least for the production of nationwide deformation maps, which facilitates access to such data. For Germany, the Federal Institute for Geosciences and Natural Resources (Bundesanstalt für Geowissenschaften und Rohstoffe - BGR) developed the German Ground Motion Service (BodenBewegungsDienst Deutschland - BBD) to provide the results of a nationwide PSI analysis in a web-based geoinformation system (BGR 2019). However, in the case of the BBD, the deformation data is updated only once a year, which does not allow continuous and up-to-date monitoring.
In this study, we process Sentinel-1 data acquired from 2015 to 2020 in ascending direction for the monitoring of the Möhne gravity dam in North Rhine-Westphalia, Germany. In particular, we investigate the influence of the acquisition cycle on the observed deformation. For this purpose, we considered the full, half, and a quarter of the temporal data availability for data processing, to find a trade-off between the number of detected PSs available for deformation estimation and the monitoring accuracy. Additionally, we transformed the observed Line-of-Sight deformation for each PS into the radial deformation direction of the dam, as this is the direction most strained by changing water levels and temperatures in the reservoir. For this purpose, we used a very accurate CAD model of the dam.
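A minimal sketch of the line-of-sight to radial projection is shown below, assuming that all motion occurs along the radial direction taken from the CAD model; the unit vectors and numbers are illustrative assumptions, not the Möhne geometry.

```python
import numpy as np

def los_to_radial(d_los, los_unit, radial_unit):
    """Project a LOS displacement onto the dam's radial direction, assuming
    the motion occurs along that direction only.

    d_los       : LOS displacement (mm), positive towards the satellite
    los_unit    : unit vector from ground to satellite (East, North, Up)
    radial_unit : unit vector of the local radial dam direction (East, North, Up)
    """
    sensitivity = np.dot(los_unit, radial_unit)
    if abs(sensitivity) < 0.1:
        raise ValueError("LOS nearly insensitive to the radial direction")
    return d_los / sensitivity

los = np.array([-0.62, -0.11, 0.78])   # illustrative ascending geometry
radial = np.array([0.0, 1.0, 0.0])     # illustrative radial direction (due north)
print(los_to_radial(d_los=3.0, los_unit=los, radial_unit=radial))
```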
We show a very high concordance between the derived PSI radial deformation estimates and terrestrial observations of dam deformation (RMSE < 2 mm for most detected PS points) and a high correlation between dam deformation and water level in the reservoir. We also found that keeping the full temporal resolution of satellite acquisitions reduces the number of detected PSs by a factor of 2.5, while the deformation accuracy remained high, with an RMSE of at most 2 mm between in-situ plumb measurements and PSI measurements for all acquisition cycle scenarios, showing that a reduced observation cycle could be preferred in cases where the density of detected PSs on the infrastructure is important.
The outcomes of this study are relevant for further automation of dam infrastructure monitoring, as they both confirm the high accuracy of deformation estimates on dam infrastructure with the PSI technique and lay the path for further optimization of the acquisition cycle, depending on the monitoring goal. For example, the high temporal resolution would be preferred for monitoring the strain of sudden extreme weather events on a different part of the infrastructure, and a less dense observation cycle could be favored for monitoring the general seasonal deformation of the dams.
References
Adam, N., Gonzalez, F.R., Parizzi, A. & R. Brcic (2013): Wide area Persistent Scatterer Interferometry: Current developments, algorithms and examples. 2013 IEEE International Geoscience and Remote Sensing Symposium – IGARSS, 1857-1860.
BGR (2019): BodenBewegungsdienst Deutschland – BBD, https://www.bgr.bund.de/DE/Themen/GG_Fernerkundung/BodenBewegungsdienst_Deutschland/bodenbewegungsdienst_deutschland_node.thml. (Last access: 11/2021).
Ferretti, A., Prati, C. & Rocca, F. (2001): Permanent Scatterers in SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing 39, 1, 8-20.
In 2018 ESA published a tender for developing EO services to support the resilience and sustainability of critical infrastructures and to make such infrastructures “greener”.
Planetek Italia, as prime contractor, together with ENEA and INGV as sub-contractors, selected “integrated water cycle management” as the object of their feasibility study. The project, named SCIRES - Supporting Critical Infrastructures Resilience from Space, was selected by ESA and subsequently passed the feasibility study with success: the project started the demonstration phase in 2021.
The integrated water cycle management relies on different infrastructures: water basins and impoundments, transport and distribution pipelines, wastewater collection, and, as interconnected infrastructures, hydroelectric plants and power distribution networks on which the infrastructures rely for energy supply. Three innovative EO services, which will eventually be integrated into Planetek’s Rheticus platform, are the object of the study: (1) the Ground Stability service, (2) the pipeline monitoring service, and (3) the assessment of the risks related to the interconnected Critical Infrastructures:
1. The Ground Stability service
The service is focused on monitoring by means of mean ground velocities and deformation time series (Persistent Scatterers) computed by Rheticus® services, integrated with in situ geological and geomorphological data. The objective of the service is to provide a comprehensive analysis of horizontal and vertical surface and near sub-surface deformations and dislocations that affect hydraulic works, such as water impoundments and distribution networks, that could be damaged by hazardous events. In particular, landslides (LS), deep-seated gravitational slope deformation (DGSD), and differential ground subsidence (GS) phenomena are taken into account. LS, DGSD and GS can all damage or threaten the functionality of the infrastructure itself (e.g. excessive pipeline deformations, differential subsidence close to or below hydraulic works), especially if triggered by heavy rains or earthquakes.
2. Pipeline monitoring service using EO data, IoT (in-situ sensors) and AI techniques
The service aims at identifying precursor phenomena related to pipeline breaks, which are responsible for water losses and subsequent water infiltration into the ground, potentially causing damage to the surrounding infrastructures. The goal is to prioritise the maintenance of the pipelines, basing the priority on the computed vulnerability. Damage to pipelines may be caused by several factors, coming from the environment (weather, subsidence causing stress on the ground…) or from the working conditions of the network (pressure and velocity of the fluids and the relevant gradients…).
3. Assessment of the risks related to the interconnected Critical Infrastructures
The assessment of the risks related to the interconnected CI is based on the CIPCast Platform (an EISAC.it tool), which has been conceived to support risk analysis and emergency management in the case of natural hazardous events.
In recent years Planetek developed the pipeline monitoring service, called Rheticus® Network Alert. It is a turnkey vertical web service for the continuous monitoring of instability phenomena affecting pipeline networks (water and sewage) in urban areas, caused by ground displacement. With the SCIRES-DP project, the aim is to improve the performance of the existing service, developing a risk model that allows highlighting the pipelines that need to be inspected with different levels of priority. The current Rheticus® Network Alert indicates locations of concern and lets operators act upon the information, simplifying maintenance activities and prioritizing inspections. Thus, the service allows an “a priori” approach, helping to highlight problems before they become critical. As a result, operators better manage their financial resources and reduce service disruptions and/or threats to people. The information is updated and delivered to utility companies with extremely intuitive Business Intelligence tools to add dynamic analysis and new features to their planning, management and maintenance activities. The clients use the service to select all the pipelines that need to be inspected and to produce a report to support the inspections, containing detailed information on the segments of the network to be inspected.
The customers of Rheticus® Network Alert are the water and sewer maintenance divisions, and in the project two of the most important Italian operators are involved.
As said, Rheticus® Network Alert provides updated levels of concern on each segment of an underground pipeline network, based on measurements of displacement of the network itself as well as of the nearby areas. The aim of this project is to further improve the performance of the pipeline classification algorithm implemented in the current version of the service. We propose both an improvement in the identification of interesting Persistent Scatterers (PS), characterized by particular displacement trends over time, and the interpretation of ground displacement phenomena through the definition of different clusters of PS. These goals will be accomplished through AI methods. During the development of the project, some of the most important Artificial Intelligence (AI) methods to study surface deformation using Persistent Scatterer Interferometry (PSI) will be implemented.
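One commonly used way to group PS by their kinematic behaviour is to cluster the normalised displacement time series, for instance with k-means as sketched below; this is only an illustrative example of such an AI method, not necessarily the algorithm implemented in SCIRES, and the file name and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical PS displacement time series: shape (n_points, n_epochs), in mm.
ts = np.load("ps_timeseries.npy")

# Normalise each series and group PS with similar displacement behaviour.
X = StandardScaler().fit_transform(ts)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# 'labels' can then be mapped back to the pipeline segments each PS belongs to,
# flagging clusters with anomalous trends for inspection.
```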
SCIRES is a demonstration project, whose objective is the validation of the SCIRES services in an operational environment. At the end of the project demonstration, we expect to have operational services in place, delivering the innovative maps/information to the engaged users as well as to new customers. For the pipeline monitoring service, the goal of the pilot, which is the object of the demonstration, is to test the implemented AI algorithms in a real scenario defined by the utilities participating in the project.
The use of digital twins in rail assets and systems management could make the railways more reliable, competitive and efficient. They can enable the railways to deliver a high-quality service that meets the demand of users. The first step toward the creation of a railway digital twin is to unify the georeferenced representation of the railway network, integrating the appropriate information in order to have an accurate technical description of the infrastructure.
The construction of new lines, infrastructure management, and maintenance planning on existing routes require a comprehensive knowledge of railway assets. These days it is quite common to see each local railway department being responsible for managing its own asset knowledge base, but asset information is often approximate or even missing. To make the scenario even more complex, the use of paper support in this sector is still quite common, representing an obstacle to more efficient data sharing and increasing the risk of data duplication across departments.
Digitalization of the railway infrastructure represents the solution to these problems, allowing an unambiguous, informed, centralized asset management capable of taking into account the needs of all the operational departments.
Creating a digital georeferenced inventory of a whole railway network, however, is a very demanding process, usually performed using a combination of aerial surveys and advanced on-ground field measurements taken from diagnostic vehicles, which can take a long time to complete. The output is an extremely detailed and high-precision cartography, but the drawback is that this product may be difficult to update, given the time required for its generation and the need for coordination with operational train circulation to perform the onsite survey.
Nowadays, the availability of medium (e.g. Sentinel-2) to very high resolution (e.g. Pleiades Neo, WorldView) satellite optical images represents an attractive alternative to conventional railway asset mapping based on in situ measurements, giving the opportunity to perform cost effective periodic updates of the assets inventory for the whole network.
In this paper, we will present an approach based on the use of Convolutional Neural Network (CNN) for the automatic extraction and classification of the georeferenced geometry of railway assets from high resolution multispectral images. In particular, based on the railway operators' requirements, the following assets have been identified as the main targets:
- railroad tracks,
- switches,
- level crossings,
- buildings adjacent to the railway lines
The study takes advantage of the availability of a set of very high-resolution aerial images that have been manually analyzed to extract the railway assets of interest. These data have been used as the training set for the CNN.
The original very high-resolution aerial images were artificially under-sampled at different resolutions to simulate images taken from a satellite. The most suitable neural network architecture was investigated and trained to provide a raster segmentation output, and proper geometry reconstruction algorithms were developed in order to obtain a vector segmentation of the input image and a vector description of the railroad assets.
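A minimal sketch of the artificial under-sampling step is given below, using plain block averaging to degrade an aerial orthophoto to a coarser ground sampling distance; sensor effects such as the MTF and noise are ignored, and the resolutions used are illustrative assumptions.

```python
import numpy as np

def downsample(image, native_gsd, target_gsd):
    """Block-average a high-resolution image (H, W, bands) to a coarser ground
    sampling distance as a simple proxy for a satellite acquisition."""
    factor = int(round(target_gsd / native_gsd))
    h = (image.shape[0] // factor) * factor
    w = (image.shape[1] // factor) * factor
    img = image[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

# e.g. a 0.1 m aerial orthophoto resampled to a hypothetical 0.5 m satellite GSD:
# coarse = downsample(aerial, native_gsd=0.1, target_gsd=0.5)
```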
One of the main limitations on the use of satellite images to map railway assets is represented by their resolution: even if in recent years we have seen a dramatic increase in the number of very-very-high-resolution (VVHR) missions, they still cannot compete with the resolution of images taken from a drone. The trained model performance has been analysed as a function of the resolution of the input image used, in order to assess the potential and limitations of the developed approach in different scenarios, and to derive the best configuration for its operational use to frequently update the railway asset database.
Nearly three billion litres of drinking water are lost through leaking pipes across the UK every day. Leaking pipes do not only waste valuable water but can also cause related problems such as loss of supply, flooding and discolouration. UK water companies are regulated by Ofwat and must meet targets for reducing leakage or face financial penalties. Similar targets exist for sewage pipe failures, where there is additional concern about the environmental effects of leaks.
One potential cause of leakage is ground movement. Water pipes can be stressed by movement of the ground due to subsidence, heave or even landslides and may eventually burst. If a pipe can be repaired, strengthened or modified prior to breaking, this can save financial and reputational damage. However, water companies have a limited budget for preventative maintenance: Ofwat guidelines suggest 1% of the network should be replaced each year. Water companies therefore need to target their maintenance efforts and replacement programs carefully.
Satellite InSAR allows for remote monitoring of ground movements from space. SatSense routinely process all Sentinel-1 data covering the UK, providing an up-to-date, UK-wide ground movement product. The DRIPIN project is a feasibility study funded through the ESA Business Applications programme, and awarded to SatSense, designed to investigate whether satellite InSAR could provide useful information to water companies about ground movement risks to their pipe networks.
SatSense worked with two UK water companies to understand their risk modelling strategies and investigate how InSAR could be usefully included within them. SatSense developed a series of risk metrics, which highlight different risks based on the history of ground movement at any given pipe. These metrics were applied to pipes which are known to have burst and those which have not experienced any bursts. SatSense used receiver operating characteristic (ROC) curves to investigate how effective each metric is at predicting whether a pipe will burst or not. SatSense found that some of these risk metrics were able to distinguish between pipes which burst and those which did not, indicating that InSAR is able to contribute to pipe failure risk modelling. Furthermore, combining a number of these risk metrics and using machine learning techniques resulted in even more effective predictions.
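A minimal sketch of this kind of evaluation is given below, using scikit-learn to score a single risk metric and a simple classifier that combines several metrics; the file names, metric columns and classifier choice are illustrative assumptions, not the SatSense models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical per-pipe table: rows = pipes, columns = InSAR-derived risk
# metrics; y = 1 if the pipe is known to have burst, 0 otherwise.
X = np.load("risk_metrics.npy")
y = np.load("burst_labels.npy")

# Skill of a single risk metric used directly as a score.
auc_single = roc_auc_score(y, X[:, 0])

# Skill of several metrics combined by a simple machine-learning model.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc_combined = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(auc_single, auc_combined)
```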
SatSense provided one of the water companies with risk metric data for every pipe in a test area so that the SatSense data could be included in their current risk models. The inclusion of the SatSense data improved the performance of their risk models by 23 – 46%, depending on the number of pipes used in the evaluation. More accurate burst prediction enables water companies to repair at-risk pipes before they fail and save both financial cost and reputational damage.
We as humanity are facing more and more challenges on our planet:
Environmental pollution, regional overpopulation, food production, and changing and increasingly challenging climate conditions leading to an increased risk of natural disasters such as floods, earthquakes, subsidence, tornadoes, etc.
All of the aforementioned aspects have a direct influence on various infrastructure elements which are fundamental to our societal living conditions.
What is the future of downstream services, especially for infrastructure monitoring? How can satellite EO information help to deliver a significant contribution? How can AI contribute to this challenge?
There are more and more satellite constellations (public as well as commercial), delivering an increasing amount of data with higher and higher resolution with increasingly faster revisit times.
Especially resolution seems to play an important role when it comes to infrastructure such as railway tracks, roads, bridges, pipelines or buildings.
Along with the data at hand, AI/processing technologies are another key driver as they are capable of transforming large quantities of pixel data into meaningful (and value-adding) products.
Data and AI/processing technologies will allow increasingly accurate and fine-grained analyses and estimates, which will lead to a turning point where more possibilities and cheaper data products emerge.
Still, from a business perspective, the clear path of monetization from supply to demand in order to generate continuous revenues is yet to be established in many business areas.
However, this seems more a question of “when” rather than “if”.
As satellite EO data become increasingly cheaper, new space-based solutions for many different applications become more and more profitable.
This is especially true in areas where continuous monitoring plays an important role – the satellites are flying anyways.
In consequence, existing approaches which are based on airborne/drone-based solutions will then be replaced by space-borne solutions.
In that regard we are close to a turning point.
However, although many use cases may be addressed with readily available EO data, many others still remain to be tackled with tailored solutions.
Exactly for those cases, interdisciplinary know-how needs to be brought together to create optimal end-to-end solutions: from the optical instrument in space, through the satellite platform and ground segment, to the AI-empowered data centre with its web-based UI access and standards-conformant services.
As one of the three leading European aerospace companies, OHB is able to deliver this kind of end-to-end solution.
For years and decades, OHB has been working on Earth Observation solutions which include satellites, airborne platforms, on-board processing, data links, data processing and AI methods: in short, everything downstream services are made of.
OHB is strongly involved in institutional programs (e.g., for ESA and the German government).
Furthermore, OHB is also engaged in commercial infrastructure monitoring projects, for instance for the oil and gas industry.
In this market, one OHB success story is Lynx, a commercial space-based monitoring system for methane and oil pipelines.
Lynx finds itself in the starting blocks ready to fill current technology gaps: continuous and complete monitoring of gas and oil infrastructures of a whole continent by employing advanced multi-spectral instruments.
In addition to the oil and gas industry, OHB sees municipalities and their needs as an important commercial market for infrastructure monitoring applications.
Here, a large spectrum of applications lies in the focus of the customers, including topics such as urban heat islands, urban vegetation, forest and flood monitoring, as well as disaster management in general.
CAMEO (Corridor and Asset Monitoring using Earth Observation) aims to boost the understanding and integration of satellite Earth Observation (EO) services by companies and agencies managing pipeline and energy transmission corridors, including underground electricity cables.
This is achieved by demonstrating the benefits of the EO based services in collaboration with asset managers and in-sector providers that do not traditionally use EO services. CAMEO is executing demonstration pilots where EO data is combined with traditional on-ground data and cutting-edge data processing and analytics techniques enabling improved monitoring insights.
The objectives of CAMEO are twofold. The first is to show the added value of EO data to stakeholders in the corridor and asset monitoring domain. This was addressed by first gaining a deep understanding of the information needed by the end users and their working processes, and subsequently showcasing information services to them. The demonstration services of CAMEO cover a diversity of environments in which the stakeholders operate, with three broad categories of services:
• Threat assessment, e.g. third-party interference, encroachment
• Structural integrity, e.g. surface deformation, leak occurrence
• Environmental and geo-hazards, e.g. flooding, wildfire, landslides, vegetation change
The second objective of CAMEO is to implement the services using a “virtual platform” concept, where distributed sources of EO and non-EO data are integrated regardless of where geospatial data is hosted. EO service providers implement services in scalable cloud computing environments with information products combined with other data sources to deliver information to users. In-sector providers or end-users may process the information provided using their own algorithms thus turning the data into information with operational value. The In-sector providers play a crucial role in the solution as they can translate the end-user priorities and requirements and utilize the EO-based services.
In this presentation, we will present the CAMEO project in general, the EO services that are offered, as well as the Virtual Platform. In addition we will showcase the results of some of the demonstration use cases to illustrate the added value of the CAMEO platform.
The CAMEO project is led by Science & Technology Corporation (Norway and The Netherlands) with partners Orbital Eye (NL), Hatfield (Canada), BGC Engineering (Canada), and Sensar (NL). The two-year project is part of ESA’s "Expand Demand" initiatives with a focus on the Security sector.
Traffic infrastructure plays a vital role in today's society. For example, the freight transport and logistics sector has a turnover of more than 180 billion euros a year. It is estimated that almost 10 billion euros per year are required to maintain traffic infrastructure in the 15 years to 2030. For security and economic reasons, it is essential to monitor traffic infrastructure, including roads, railways, and bridges, thereby ensuring the traffic flow. Assessing the health of traffic infrastructure with conventional geodetic approaches like levelling or GNSS observations is time-consuming, costly, and limited to sparse locations. In contrast, Earth observation data from satellite missions provide unique opportunities for spatially wide-scale and temporally high-resolution monitoring. For example, the German Federal Institute for Geosciences and Natural Resources provides a nation-wide ground displacement map derived from Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) for the time span from 2014 to 2019. This map allows a unique insight into the deformation at a large scale. However, it is limited to a given time period and is not optimized to provide the highest possible point density at traffic infrastructure. To help decision-makers quickly respond to suddenly occurring local deformation at traffic infrastructure, adapted InSAR time series methods are required, which provide spatially dense and constantly updated displacement information.
In the scope of our project SAR4Infra, joint forces from Leibniz University Hannover, the German Research Centre for Geosciences (GFZ) Potsdam and authorities from Schleswig-Holstein develop an automatic InSAR processing chain to generate a risk displacement map for traffic infrastructure in Schleswig-Holstein, northern Germany, to be regularly used by local authorities in the future. This setting defines the following boundary constraints for our development: the usage of cost-free, easy-to-access data, as well as a software development embedded in a continuous integration system that runs independently of any commercial software and regularly produces easy-to-access displacement maps. Based on these constraints, the outcome of SAR4Infra will rely on the freely available SAR data from the Sentinel-1 mission of ESA's Copernicus programme. Finally, the processing chain will be implemented in the cloud computing environment CODE-DE to ensure accessibility by the authorities.
A common limitation to InSAR time series analysis is the availability and exploitation of all reliable pixels in the SAR images. Therefore, we investigate the combination of different time series techniques to obtain a dense distribution of pixels on the traffic infrastructure and the full deformation signal. We classify the pixels based on their backscattering behaviour over time into reliable pixels and noise. The reliable pixels consist of persistent and distributed scatterers, which are differentiated by the analysis of statistically homogeneous pixels. We improve the signal-to-noise ratio of distributed scatterers by phase-linking and subsequently, process both persistent and distributed scatterers jointly in a persistent scatterers time series approach.
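One widely used criterion for identifying statistically homogeneous pixels is a two-sample test on the amplitude time series, as sketched below with a Kolmogorov-Smirnov test; this is an illustrative example only, and the window size and significance level are assumptions rather than the SAR4Infra settings.

```python
import numpy as np
from scipy.stats import ks_2samp

def shp_mask(amp_stack, row, col, window=7, alpha=0.05):
    """Flag statistically homogeneous pixels (SHPs) around a candidate pixel by
    testing whether their amplitude time series share the same distribution.

    amp_stack : calibrated amplitude stack of shape (epochs, rows, cols)
    """
    half = window // 2
    ref = amp_stack[:, row, col]
    mask = np.zeros((window, window), dtype=bool)
    for i in range(-half, half + 1):
        for j in range(-half, half + 1):
            cand = amp_stack[:, row + i, col + j]
            _, p = ks_2samp(ref, cand)
            mask[i + half, j + half] = p > alpha   # same distribution -> SHP
    return mask
```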
The short repetition time of six days from Sentinel-1 has the advantage that the deformation signal is sampled densely in time. However, the increasing amount of Sentinel-1 images also increases the computational burden of conventional InSAR time series methods. By relying on the cloud computing environment CODE-DE, easy access to the whole archive of Sentinel-1 data over Germany is accomplished, and the computational power required for high-performance processing is ensured. Thereby, neither download nor processing on local user machines is necessary, which is usually a bottleneck in InSAR processing. To decrease the computational burden, one of the tasks of our project is estimating the deformation parameters sequentially once a new SAR image is acquired, without re-estimating the whole time series. We will present the current status and prospects of the SAR4Infra project and highlight the challenges and opportunities of Sentinel-1 InSAR for traffic infrastructure monitoring.
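A schematic example of such a sequential update is a recursive least-squares estimator that refines an offset and a linear velocity each time a new acquisition arrives, without touching the older epochs; the parameterisation and noise values are illustrative assumptions, not the SAR4Infra estimator.

```python
import numpy as np

class SequentialLSQ:
    """Recursive least-squares update of a displacement model (offset and
    linear velocity) with every new SAR acquisition; illustrative only."""

    def __init__(self, n_params=2, sigma0=1e3):
        self.x = np.zeros(n_params)          # parameter estimates
        self.P = np.eye(n_params) * sigma0   # parameter covariance

    def update(self, t_years, d_mm, sigma_mm=2.0):
        a = np.array([1.0, t_years])                       # design row
        k = self.P @ a / (a @ self.P @ a + sigma_mm**2)    # gain vector
        self.x = self.x + k * (d_mm - a @ self.x)
        self.P = self.P - np.outer(k, a) @ self.P
        return self.x

est = SequentialLSQ()
for t, d in [(0.0, 0.1), (6 / 365.25, -0.4), (12 / 365.25, -0.9)]:
    offset, velocity = est.update(t, d)   # refreshed after each 6-day epoch
```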
The continuously increasing availability of EO data and derived products, in particular those provided by the Copernicus Sentinel missions, and the increasing needs of the industry and public sector create the background for a complete exploitation of EO services, with the consequent opportunity to contribute to the reduction of the existing gaps in the infrastructure management sector.
The Copernicus Land Monitoring Service (CLMS) already makes available a set of EO derived products, useful for the monitoring of the European infrastructures, including the European Ground Motion Service EGMS, started in 2021. The combined use of these assets with high frequency commercial products can greatly improve the quantity and quality of information available to the European infrastructure monitoring systems.
The goal of the project presented in this paper is to demonstrate that, on top of the baseline services provided by Copernicus, other valuable commercial EO-derived products with higher update frequencies and specific in-situ data can provide a next level of information, improving the benefits for the End Users.
Although focussed on the railway infrastructure, the approach followed is also valid for any other linear infrastructure, such as transportation networks, electric power lines, pipelines, etc. Managing all these infrastructures implies facing similar problems, even if under different regulations. The proposed solutions, in terms of product portfolio and monitoring system platform, are quite general and flexible, easy to integrate and to extend with new products, in order to provide infrastructure managers with valid monitoring instruments.
The project team is composed of EO service providers and End Users. The areas of investigation cover three European countries, France, Italy and Germany, involving SNCF (owner and main manager of the French national rail network) and RFI (public Italian railway infrastructure manager) as project partners and DB-Netze (public German railway company) as project collaborator. The EO service provider partners are TRE Altamira (for France), e-GEOS (for Italy) and GAF (for Germany).
The strong engagement of the End Users was essential during the requirements collection. During this activity, a set of user needs was identified, such as bridge monitoring, building monitoring in reserved areas, earthworks monitoring and others. The EO service providers linked each of these needs to one or more value-added EO products, such as medium- and high-resolution ground motion maps, a building monitoring product and others.
The Demonstration phase aims to move the implemented products into operational use with a strong collaboration between end users and EO product providers. To this end, once the value-added EO products have been generated over the areas of investigation, they will be put at the disposal of the end users through the AWARE (Agile Monitoring of Assets and Resources) platform, designed to provide a wide variety of users with access to the Ground Motion products, as well as to other EO and non-EO products.
The outcomes of the Italian demonstration carried out by RFI will also be presented by showing how specific EO derived products can effectively support the end user for the monitoring of areas adjacent to the railway tracks.
The concerned products will be:
i) Building and Vegetation Encroachment Monitoring;
ii) Hydrogeological Instability Monitoring with Medium and HR Ground Motion Maps;
iii) Flooding risk mitigation with Advanced Flooding Models and Map.
Sibling based time-series InSAR methods are successful at extracting deformation signals from long time-series, but challenges arise when the scattering properties of these siblings change in relation to each other, and the pixel of interest. RapidSAR (Spaans & Hooper 2016) uses amplitude statistics to determine sets of Statistical Homogeneous Pixels (SHPs), or siblings, for each pixel. These siblings are used to estimate coherence with higher resolution than standard boxcar approaches, where smearing of features can cause coherent, isolated scatterers to be ignored. Adaptive multilooking techniques also make use of sibling-based time series InSAR algorithms to reduce noise and to preserve features. The set of siblings remains the same until the user decides to re-select them, which, for real-time monitoring applications, is not ideal, as they may change over time. Here, we aim to improve sibling-based time series methods to ensure valid siblings throughout the time series by detecting when the siblings are no longer valid.
Two main scenarios exist for when siblings may become invalid: (1) the scattering characteristics of some of the siblings of a pixel change (for example, they lose coherence), (2) the scattering characteristics of the pixel of interest itself change. The first scenario might cause an apparent decrease in coherence even though the central pixel’s coherence is unchanged, leading to the exclusion of the pixel for part of the time series. The second scenario might mean that the coherence estimate of that pixel remains essentially the same, even though the pixel’s coherence could have decreased significantly, to the point that its phase should no longer be interpreted. To ensure the coherence estimation is accurate, we must be certain that each set of siblings is valid for each interferogram.
Cases where the coherence might decrease because of seasonal variation or farming practices may recover, allowing the siblings to still be valid later in the time series, but for areas undergoing anthropogenic changes such as construction, the siblings are unlikely to recover. Our work focuses on methods to determine reliably when siblings become invalid and need re-estimating by taking advantage of the evolution of phase, amplitude, and coherence behaviours through time.
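A deliberately simplified, amplitude-only sketch of such a validity check is given below: the recent log-amplitude difference between a pixel and its siblings is compared against its own history, and a large standardised deviation suggests the sibling set should be re-estimated. The window length and threshold are illustrative assumptions, and the actual method also exploits the evolution of phase and coherence.

```python
import numpy as np

def siblings_still_valid(amp_pixel, amp_siblings, window=12, z_max=3.0):
    """Toy change detector for a sibling set.

    amp_pixel    : amplitude time series of the pixel of interest (epochs,)
    amp_siblings : amplitude time series of its siblings (epochs, n_siblings)
    """
    diff = np.log(amp_pixel) - np.log(amp_siblings).mean(axis=1)
    hist, recent = diff[:-window], diff[-window:]
    z = (recent.mean() - hist.mean()) / (hist.std(ddof=1) / np.sqrt(window))
    return abs(z) <= z_max   # False -> re-select the siblings from this epoch on
```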
We validate our method using a case study of urban and rural areas in the UK containing both natural and anthropogenic processes that can make time series InSAR challenging.
During the last decade, Time-Series Interferometric Synthetic Aperture Radar (TInSAR) has emerged as a useful and powerful technique for deformation monitoring in urban areas and infrastructures. TInSAR datasets include millions of points (reflections or persistent scatterers, called PS) in urban areas, and for each PS the deformation time series is available. For hazard assessment and for the identification of peculiar deformation patterns, the points need to be classified based on their kinematic behaviour over time. Analysis of such a large volume of data is sometimes cumbersome and requires automatic and smart methods. Recently, different methodologies have been developed in the InSAR literature for such classification, based on statistical testing methods or, more recently, on smart algorithms such as deep learning. Regarding the statistical methodologies, there are two challenges or unanswered questions:
1) Testing strategy: the statistical classification methodologies are based on the concept of statistical hypothesis testing. Most of the introduced methods exploit the standard testing strategies that are routinely used in geodetic studies (for example the B-method of testing, which is popular in geodetic network design and analysis). However, the optimality of such methods for InSAR time series classification has not been proven yet. The totally different TInSAR data structure with respect to geodetic data may require an alternative testing strategy. In this study we address different testing strategies and different ways of formulating the null and alternative hypotheses in the statistical methods, as well as the selection of testing parameters (e.g., false alarm rate and testing power). The effects of these parameters are analysed and demonstrated on both synthetic and real datasets, resulting in an improved strategy for PS classification and identification of deformation patterns.
2) Effect of noise structure: the role of spatially or temporally correlated noise in the TInSAR data on the efficiency of classification methods is not clear. Most of the methods are based on the simplified assumption of an uncorrelated noise structure in the time domain. This assumption can result in misclassification or misidentification of deformation patterns. For example, the smooth (or temporally correlated) behaviour of deformation time series induced by spatio-temporal filtering during the TInSAR processing may be mistakenly interpreted as smooth deformation behaviour, resulting in a misclassification. In this study, we show and quantify the effect of wrong assumptions about the noise model on the performance of the statistical classification methods. We also address how to define a proper noise model in order to achieve an optimal classification.
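As a schematic illustration of how the noise covariance enters such a test, the sketch below compares a linear (steady-state) displacement model against a quadratic alternative using a likelihood-ratio-type statistic evaluated under a full temporal covariance matrix Q; the model pair, significance level and chi-square reference distribution are illustrative assumptions, not the testing strategy proposed in the study.

```python
import numpy as np
from scipy.stats import chi2

def gls_ssr(t, d, A, Q):
    """Generalised least-squares fit and weighted sum of squared residuals,
    using the full temporal noise covariance Q of the time series."""
    W = np.linalg.inv(Q)
    x = np.linalg.solve(A.T @ W @ A, A.T @ W @ d)
    r = d - A @ x
    return r @ W @ r

def classify_kinematics(t, d, Q, alpha=0.01):
    """Accept the linear model unless the quadratic alternative fits
    significantly better; a misspecified Q (e.g. ignoring temporal
    correlation) biases the statistic and can cause misclassification."""
    A0 = np.column_stack([np.ones_like(t), t])          # null: linear motion
    A1 = np.column_stack([np.ones_like(t), t, t ** 2])  # alternative model
    T = gls_ssr(t, d, A0, Q) - gls_ssr(t, d, A1, Q)
    return "non-linear" if T > chi2.ppf(1 - alpha, df=1) else "linear"
```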
By addressing the above two questions, we propose an optimal hypothesis testing strategy for PS kinematic behaviour classification in urban areas. The performance of the method is demonstrated on the case study of the megacity of Tehran, and the deformation patterns of some identified hazardous points (e.g., sinkhole-prone areas) are analysed and discussed.
Information on ground motion is important for construction, coastal protection and management of underground infrastructure such as water and wastewater pipelines. Until recently, ground motion surveying was a time-consuming process involving acquisition of manual measurements over several years. With the availability of free Sentinel-1 radar imagery since 2014, this information may be derived from satellite data at a reasonable cost.
The combination of satellite derived ground deformation and geotechnical information has been explored by a Danish consortium made of Geopartner Inspections, GEO and DTU Space. The project aim is to develop specialized tools for presentation and integration of this information dedicated to end-users in the climate adaptation, utility, and construction sectors.
A new 3D voxel-based model has been developed focusing on subsurface geotechnical and geological data, modelling methods and model calculations. A web-based solution has been produced for visualisation of satellite-based ground deformation information and data are distributed as web services. New visualization tools have been developed in the GeoAtlas Live platform with the spatial implementation of advanced geotechnical parameters.
Prototype risk maps considering geological data and PSInSAR displacements have been developed, analysing the agreement between the expected deformation from the geological model and the observed deformation from the PSInSAR results.
InSARinSub exhibits results and models as web-services or through an API-based data environment. This allows end-users to integrate the products with other data from other sources such as charts of subsurface infrastructure like water and wastewater pipes, climate change scenarios (e.g. sea level rise, groundwater head, precipitation and runoff), and projections of vertical positions of surface and subsurface assets.
The consortium is now investigating further the link between PSInSAR deformation and geotechnical data, by developing:
- a method for the mathematical modelling of the time series,
- tools to investigate the link between PSInSAR deformation and various datasets related to geology, hydrology, land use, climate etc.,
- an advanced model of the expected ground subsidence given a certain geotechnical configuration.
The results and developments from this ongoing project will be presented.
Ground movement of railway embankments is often monitored using total measurement stations in the UK. These require personnel to visit the site for each measurement, which is time consuming, often unsafe due to the proximity to the track, and can cause disruptions to train travel. Using satellite technology to monitor these assets would produce many improvements: the spatial extent would cover the entire rail network; the temporal period of measurement would extend back to the launch of the satellite; there would be less disruption to the network; safety would be improved; and difficult to access areas could also be monitored.
Satellite InSAR allows for remote monitoring of ground movements from space. SatSense routinely process all Sentinel-1 data covering the UK, providing an up-to-date, UK-wide ground movement product. InSAR detects relative movement along the radar line of sight. This is however only possible for areas where reflective properties do not tend to change between acquisitions (e.g. buildings, roads and rock). InSAR has already been used to monitor ground movement across the UK for a variety of applications and now SatSense are working with the main rail operator in the UK, Network Rail, to evaluate InSAR data on earthwork assets (typically embankments along the railway). Railway lines pass through areas with different levels of urbanisation, vegetation coverage, and different types of bedrock and soil. This study investigates the use of InSAR data for detecting and monitoring geotechnical failures and to understand the conditions at which InSAR is most and least effective in monitoring railway earthworks.
Network Rail provided SatSense with ground measurement data for 11 locations in South East England. One of these is a large-scale area, around 3 km wide, with measurements taken at 175 locations within the area. The earliest measurements started in 2008. The other 10 sites are smaller in scale, a few hundred metres wide, with between 10 and 20 locations at which measurements were taken. The 10 smaller sites have temporal extents between 6 months and 4 years. The ground data was measured using total measurement stations, which record the angle and distance from reference stations to a target.
We found there to be excellent agreement between the satellite and in situ data at the large-scale site. The difference in displacement between the two data sources for each timestep was less than 10 mm for the majority of cases. The correlation is weaker for the smaller-scale sites, which is thought to be caused by the ground monitoring points being very close together, so that one InSAR resolution cell (representing ~4 x ~20 m) covers multiple ground monitoring points, making it impossible to distinguish relative movement between the ground monitoring points using InSAR. There also tends to be more vegetation in close proximity to the measurement locations at these sites.
In a parallel study, where InSAR data at 76 sections of railway was investigated for its usability, it was observed that, unsurprisingly, the spatial coverage and noise of the InSAR data suffered in areas of dense vegetation and rural settings. This is expected as there are fewer constant radar reflectors in rural and highly vegetated areas.
If InSAR data can be used to monitor the railway embankments, it could provide network-wide standardised data which would help to identify the areas most in need of maintenance. This would provide a more reliable and safer network for passengers. It is predicted that there will be increased traffic and load on the UK’s rail network in the future, as well as more extreme weather events. An improved monitoring system could alleviate some of the pressures caused by these shifts.
With the advances in SAR interferometric techniques, it is now possible to monitor ground deformations precisely. Recent studies have shown that Persistent Scatterer Interferometry (PSI) is a promising technique for detecting ground motion caused by natural and man-made phenomena. In this study, we have used a cloud-based processing platform known as the Geohazards Thematic Exploitation Platform (GEP) to analyse ground motions using the PSI technique through a hosted service called SNAPPING (Surface motioN mAPPING), which is designed for Multi-Temporal DInSAR processing based on the integrated SNAP and StaMPS (Stanford Method for Persistent Scatterers InSAR package) techniques. The capability of GEP to access and process large quantities of Sentinel-1 data in a short time enables us to monitor ground deformation regularly.
The monitoring of the stability of buildings in the Palu region in Indonesia was carried out after the massive earthquake and tsunami in 2018, using Sentinel-1 data acquired between 2018 and 2021 in both ascending and descending passes. The line-of-sight velocity values from the SNAPPING processing service were used to obtain the ground motions for each building and to analyse their stability. Building stability was categorized into Low, Medium, and High motion classes by considering the distribution of the motion values.
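A minimal sketch of such a per-building classification is shown below; the velocity thresholds and example values are illustrative assumptions, whereas the study derived its classes from the observed distribution of motion values.

```python
def stability_class(v_mm_yr, low=2.0, high=5.0):
    """Toy Low/Medium/High motion label from the mean LOS velocity magnitude
    of a building (thresholds in mm/yr are illustrative)."""
    v = abs(v_mm_yr)
    if v < low:
        return "Low"
    return "Medium" if v < high else "High"

# e.g. mean LOS velocity per building footprint, averaged over its PS points
velocities = {"building_17": -1.2, "building_42": -6.8}
classes = {b: stability_class(v) for b, v in velocities.items()}
```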
We also compared the building stability results with those obtained from the Rheticus platform for the same area under identical conditions. The Rheticus building stability products are based on persistent and distributed scatterers derived using the SPINUA (Stable points INterferometry even over Un-urbanised Areas) algorithm. Results from the GEP and Rheticus platforms showed a good correlation.
Globally, wildfires are a major natural hazard that disrupts natural ecosystem services, causing loss of lives, property and infrastructure [1]. Climate change is playing an increasing role in determining wildfire regimes, with future climate variability expected to enhance the risk and severity of wildfires in many biomes, including Southern Europe, according to the 2019 IPCC Report [2]. Scenarios for global warming greater than 1.5°C could lead to a 40% increase in Mediterranean burned area [3], turning the likelihood prediction of large fire events into a necessity for better fire management. Hence, developing fire danger forecasting systems which are linked with the operational authorities (Civil Protection, Fire Brigade/Service etc.) would increase their preparedness and enhance their emergency response capacity.
In Europe, fire likelihood estimation is traditionally produced by the meteorologically derived Fire Weather Index (FWI), disseminated daily by the European Forest Fire Information System (EFFIS) [4]. Our work [5] takes a step forward, exploiting historical Earth Observation (EO) data, Land data, and human related data, in addition to meteorological data, to predict next day’s fire danger with Deep Learning (DL) at national scale.
Technically, i) the complex interactions between fire drivers, ii) the stochasticity of wildfire occurrence as well as iii) the scarcity of fire ignitions make fire danger estimation a challenging problem to model with Machine Learning (ML). This, along with the lack of publicly available datasets, is why only a few works have been published that treat fire danger forecasting as a DL problem [6, 7]. Huot et al. [7] view the problem as a segmentation task. However, their binary predictions (fire, no fire) ignore the stochasticity of fire occurrence. They also do not identify the fire ignitions associated with the fire mask, which can lead to data leakage. Zhang et al. [6] train a Convolutional Neural Network (CNN) to model forest fire susceptibility, exploiting only the spatial context of the phenomenon. Other works that use ML to address the problem [8] use shallow models that cannot handle the spatio-temporal context.
Our work makes the following contributions:
- We create, harmonize and publish a large datacube [9] containing fire drivers and burned areas for years 2009-2020 in Greece.
- We rigorously formulate fire danger forecasting as an ML problem. To this end, we identify the exact date of fire ignitions, which allows us to model the joint probability of a fire occurring and becoming large. Moreover, we do a time split to create training, validation and test sets and we do careful negative sampling to account for the dataset imbalance (a minimal sketch of this step is given after this list).
- We train three different DL models that are able to capture spatial, temporal or spatio-temporal context, and compare them against a Random Forest (RF). This leads to the development of an operational prototype in partnership with Hellenic Fire Service, that is used to forecast the next day's fire danger for Greece.
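The following sketch illustrates the time split and negative sampling mentioned above, assuming the samples have been flattened into a table with a date column and a binary large-fire target; the file name, split dates and sampling ratio are illustrative assumptions.

```python
import pandas as pd

# Hypothetical samples table: one row per pixel and day, with feature columns,
# a datetime 'date' column and a binary 'large_fire' target.
df = pd.read_parquet("samples.parquet")

# Time split: train on earlier years, validate and test on later ones, so the
# model is always evaluated on fire seasons it has never seen.
train = df[df.date < "2018-01-01"]
val = df[(df.date >= "2018-01-01") & (df.date < "2019-01-01")]
test = df[df.date >= "2019-01-01"]

# Negative sampling: ignitions are rare, so keep all positives and only a fixed
# multiple of randomly drawn negatives (the 10:1 ratio is illustrative).
pos = train[train.large_fire == 1]
neg = train[train.large_fire == 0].sample(n=10 * len(pos), random_state=0)
train_balanced = pd.concat([pos, neg]).sample(frac=1.0, random_state=0)
```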
We use as input variables daily weather data (temperature, wind, precipitation) from ERA-5 Land, satellite variables (Leaf Area Index and Fraction of Photosynthetically Active Radiation; Normalized Difference Vegetation Index and Enhanced Vegetation Index; Day/Night Land Surface Temperature) from MODIS, Roads Density from OpenStreetMap, Population Density from worldpop.org, Land Cover from Copernicus Corine Land Cover, and Topography variables (elevation, aspect, slope) from Copernicus EU-DEM. We intersect historical burned areas from EFFIS [4] with MODIS active fire product to extract the target values. The data are processed and harmonized in a 1 km x 1 km x 1-day resolution datacube, a multi-variate data set of Earth System variables on a common grid, which covers Greece for the years 2009 – 2020.
Having the analysis-ready datacube at hand, we conduct analytics on top of it to understand our datasets, identify correlations and gain insights into how the different covariates interact with each other and how their different values lead, or not, to the ignition of a large fire. Useful patterns are discovered in these binary interactions, suggesting that the use of ML algorithms can lead to the discovery of more complex interrelations between the covariates.
Hence, we model the joint probability that a fire ignites and becomes large (>30 hectares) using ML/DL. We extract from the datacube four different datasets to be used for training different ML/DL models: a pixel dataset used for training a Random Forest, a temporal dataset used for training a Long Short-Term Memory (LSTM) neural network, a spatial dataset used for training a Convolutional Neural Network (CNN), and a spatio-temporal dataset used for training a Convolutional Long Short-Term Memory (ConvLSTM) [10] neural network.
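A compact sketch of the spatio-temporal variant is given below, using Keras' ConvLSTM2D on patches centred on each pixel; the patch size, number of time steps and layer widths are illustrative assumptions and not the architecture evaluated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_convlstm(timesteps=10, patch=25, n_features=20):
    """Minimal ConvLSTM classifier that outputs the probability of a fire
    igniting and growing beyond 30 ha on the next day."""
    model = models.Sequential([
        layers.Input(shape=(timesteps, patch, patch, n_features)),
        layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                          return_sequences=False),
        layers.BatchNormalization(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auroc")])
    return model
```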
Since we are interested in estimating fire likelihood rather than in a hard binary prediction, we consider the Area Under the Receiver Operating Characteristic curve (AUROC) to be the most important metric for the evaluation of the different models. Our results show that, in terms of AUROC, ConvLSTM achieves the best performance with a score of 0.926, suggesting that both spatial and temporal context are important for modeling the fire danger forecasting problem.
The proof of concept has been pre-operationally demonstrated through the production of a daily fire danger map of Greece that was sent to the Hellenic Fire Service during the summer of 2021. That summer was devastating for the country in terms of extreme fire events, with more than 110,000 hectares burned, so the analysis of the daily maps, together with the interpretation of the models' predictions with explainable AI, has provided us with useful insights that we want to share with the community.
In conclusion, in this work we formulated daily fire danger forecasting as a machine learning problem and published a harmonized country-wide datacube. We implemented some simple, yet effective DL models on top of this datacube, demonstrating that DL can be used for wildfire forecasting in an operational context.
References
[1] M. Lucrecia Pettinari and Emilio Chuvieco. Fire Danger Observed from Space. Surveys in Geophysics, 41(6):1437-1459, November 2020. ISSN 1573-0956. doi: 10.1007/s10712-020-09610-8. URL: https://doi.org/10.1007/s10712-020-09610-8.
[2] P.R. Shukla, J. Skea, R. Slade, et al. (eds.) Technical Summary (2019) In: Climate Change and Land: an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems.
[3] Marco Turco, Juan José Rosa-Cánovas, Joaquín Bedia, Sonia Jerez, Juan Pedro Montávez, Maria Carmen Llasat, and Antonello Provenzale. Exacerbated fires in Mediterranean Europe due to anthropogenic warming projected with non-stationary climate-fire models. Nature communications, 9(1):1–9, 2018. Publisher: Nature Publishing Group.
[4] Jesús San-Miguel-Ayanz, Ernst Schulte, Guido Schmuck, and Andrea Camia. The European Forest Fire Information System in the context of environmental policies of the European Union. Forest Policy and Economics, 29:19-25, 2013. ISSN 1389-9341. doi: 10.1016/j.forpol.2011.08.012.
[5] Prapas, I., Kondylatos, S., Papoutsis, I., Camps-Valls, G., Ronco, M., Fernández-Torres, M.Á., Guillem, M.P. and Carvalhais, N., 2021. Deep Learning Methods for Daily Wildfire Danger Forecasting. arXiv preprint arXiv:2111.02736.
[6] Guoli Zhang, Ming Wang, and Kai Liu. Forest fire susceptibility modeling using a convolutional neural network for Yunnan Province of China. International Journal of Disaster Risk Science, 10(3):386-403, 2019. Publisher: Springer.
[7] Fantine Huot, R. Lily Hu, Matthias Ihme, Qing Wang, John Burge, Tianjian Lu, Jason Hickey, Yi-Fan Chen, and John Anderson. Deep Learning Models for Predicting Wildfires from Historical Remote-Sensing Data. arXiv:2010.07445 [cs], October 2020. http://arxiv.org/abs/2010.07445.
[8] Piyush Jain, Sean C.P. Coogan, Sriram Ganapathi Subramanian, Mark Crowley, Steve Taylor, and Mike D. Flannigan. A review of machine learning applications in wildfire science and management. Environmental Reviews, 28(4):478–505, 2020. doi: 10.1139/er-2020-0019. https://doi.org/10.1139/er-2020-0019.
[9] Ioannis Prapas, Spyros Kondylatos, and Ioannis Papoutsis. A datacube for the analysis of wildfires in Greece, June 2021. https://doi.org/10.5281/zenodo.4943354.
[10] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems, 28:802–810, 2015.
Wildfires are a natural component of the Earth system, important for nutrient release and vegetation growth. However, it is clear that climate change is contributing to more frequent, more destructive and less predictable wildfires worldwide. Australia, in particular, experiences bushfires almost every summer, but the devastating bushfire season of 2019-2020, known as the Black Summer, was unprecedented in its severity and scale, killing dozens of people and destroying thousands of homes. Closer to home, a number of European countries suffer from extended summer drought conditions that make forests and woodland extremely prone to fire.
Quantifying and monitoring fires is fundamental to mitigating their negative impact on the environment and society, but also to the ongoing study of climate, as wildfires significantly influence global atmospheric emissions and climate change. The increasing availability of Earth Observation (EO) data, the exceptional processing power of cloud computing and the advanced analytics provided by Artificial Intelligence (AI) and Machine Learning (ML) provide an opportunity to develop new services delivering quick access to burned area mapping information.
The development of such a service has been the focus of the ESA-sponsored Artificial Intelligence for Earth Observation (AI4EO) Wildfires project, whose aim was to create a service that can automatically map burned areas at an unprecedented level of detail while also providing the fast, reliable and accessible service required by the wildfire fighting community. This level of detail was enabled by the use of Sentinel-2 10 m resolution data, which images the Earth's surface nominally every 5 days.
This service was developed by adapting, for Sentinel-2 data, a methodology used to create a database of historic global wildfires for the ESA Climate Change Initiative (CCI) Fire project. The methodology was modified to provide an extensive training set covering a wide variety of environments, fire regimes and wildfire events. This training set was used to develop ML classification solutions (random forest and support vector machines) for numerous historic wildfire events. These solutions were incorporated into a service portfolio in the demonstration service, which automatically selects the most appropriate ML solution for the input data based on user-defined parameters such as location and the time between pre- and post-fire images. The resulting classification product maps burned areas, alongside other products estimating burn severity. Accuracy indicators from validation sites showed that commission errors were regularly below 9% and omission errors were in most cases less than 20%.
The resulting demonstration service has been deployed on the ESA-sponsored Earth Observation for Sustainable Development Laboratory (EO4SD Lab), an open online portal that provides access to EO data, tools, services and analysis capability to support the development and use of EO-derived information products. Key users from Australia and Europe have used the deployed service on the EO4SD Lab to evaluate it and compare it with existing approaches, with positive results. The service remains available and open for users to explore.
Since the beginning of land cultivation, locust outbreaks and plagues have been a danger to the human population worldwide and have often brought devastation, hunger and death (Zhang et al., 2019). All continents except Antarctica have been infested by different locust species, which are capable of affecting the livelihood of ca. 10% of the global population (Latchininsky and Sivanpillai, 2010). Recently, swarms of desert locusts (Schistocerca gregaria) have endangered food security across East Africa, the Arabian peninsula, India and Pakistan (Meynard et al., 2020).
Major goals of locust monitoring include assessing the geographic extent of possible breeding areas and infestation, highlighting gregarization hot spots, evaluating population parameters and, accordingly, initiating control activities. Despite the danger of gregarious locusts for humanity, the ability to predict and manage locust outbreaks is still insufficient (Latchininsky, 2013). Detailed spatial knowledge about locust habitats and suitable breeding areas is of major importance for regional and national plant protection and locust monitoring organizations, because ground-based monitoring demands considerable financial means, manpower and time. In this context, remote sensing data and applications have shown great potential as an additional source of information, because they are efficient, more economical, require less manpower and are independent of national borders (Kambulin, 2018).
In this presentation, we show a holistic habitat suitability index (HSI) approach that takes advantage of different environmental variables within ecological niche modelling (ENM), in combination with Sentinel-2 based time-series analyses and further species-specific knowledge, to better discriminate areas providing optimal breeding conditions. We apply the approach to three different locust species to demonstrate its advantages and challenges and, in this way, contribute to further development in this field. The presented results focus on the Italian locust in northern Kazakhstan, the Moroccan locust in southern Kazakhstan and the desert locust in the Awash river basin in East Africa.
With the application of ENM as part of the HSI, the information value of the climatic and soil preference components defining each locust species' ecological niche is maintained. In addition, up-to-date land surface parameters, vegetation development and other species-relevant environmental parameters were incorporated in the HSI model. Moreover, human activity and actual land surface dynamics play a crucial role in locust outbreaks and help define suitable breeding areas. The results are validated with ground truth data collected by local organizations and show spatial and temporal improvements. They therefore show high potential to enable better prioritization and a sharper spatial focus for field monitoring, improving the planning and control of outbreaks without a significant loss in accuracy and with a gain in spatial detail.
Traditional applications of Interferometric Synthetic Aperture Radar (InSAR) data involved inverting an interferogram stack to determine the average displacement velocity. While this approach has useful applications in continuously deforming regions, much information is lost by simply fitting a line through the time series. Thanks to regular acquisitions across most of the world by the ESA Sentinel-1 satellite constellation, we are now in a position to explore opportunities for near-real-time deformation monitoring. In this paper we present a statistical approach for detecting offsets and gradient changes in InSAR time series. Our key assumption is that 5 years of Sentinel-1 data are sufficient to calculate the population standard deviation of the detection variables. Our offset detector identifies statistically significant peaks in the first, second and third difference series. The gradient change detector identifies statistically significant movements in the second derivative series. We exploit the high spatial resolution of Sentinel-1 data and the spatial continuity of geophysical deformation signals to filter out false positive detections that arise due to signal noise. When combined with near-real-time processing of InSAR data, these detectors, particularly the gradient change detector, could be used to detect incipient ground deformation associated with geophysical phenomena, for example from landslides or volcanic eruptions.
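The detection logic can be sketched as follows (a simplified illustration of the approach described above; the significance threshold k and the use of the sample standard deviation as a stand-in for the population value are assumptions):

import numpy as np

def detect_offsets(displacement, k=3.0):
    # Flag epochs where the first, second or third difference of the
    # displacement time series is a statistically significant peak.
    flags = np.zeros(len(displacement), dtype=bool)
    for order in (1, 2, 3):
        diff = np.diff(displacement, n=order)
        sigma = np.std(diff)  # assumed estimate of the population standard deviation
        peaks = np.where(np.abs(diff) > k * sigma)[0]
        flags[peaks + order] = True  # map detections back to original epochs
    return flags

def detect_gradient_changes(displacement, k=3.0):
    # Flag statistically significant movements in the second-derivative series.
    accel = np.diff(displacement, n=2)
    return np.abs(accel) > k * np.std(accel)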
Improving landslide mapping by multi-temporal SAR data analysis and Deep Learning
Wandi Wang1,2, Mahdi Motagh1,2, and Magdalena S. Vassileva1,2
1 GFZ German Research Center for Geosciences, Section of Remote Sensing and Geoinformatics, 14473 Potsdam, Germany
2 Institute of Photogrammetry and GeoInformation, Leibniz University Hannover, 30167 Hannover, Germany
Landslides are a major type of natural hazard that cause serious economic losses, casualties, and damage to buildings and critical infrastructure in mountainous regions around the world. The advent of satellite remote sensing brought about a revolution in the field of landslide investigation. Both optical and Synthetic Aperture Radar (SAR) satellite data are increasingly being used for detecting, monitoring, and assessing landslide hazards. Optical images capture rich spectral information and the geometric shape of ground objects, which can be used for surface change detection in landslide areas. However, methods using optical imagery cannot reliably support near real-time landslide change detection, because cloud-free images covering landslide areas may not be readily available before and during a given landslide event. Active measurements from SAR systems offer new opportunities to support systematic mapping and monitoring of landslides over extensive regions, independent of weather and sunlight conditions. Variations in SAR amplitude and coherence allow the detection of structural and surface changes related to landslides. Moreover, interferometric measurements using the InSAR technique can be used to detect pre-failure and post-failure motions, which are key for hazard and risk assessment in landslide-prone regions. The availability of free and open data from Earth observation programmes such as Copernicus has further improved our capability to use satellite observations for multi-scale analysis of landslide occurrence and evolution. In this study, we propose a methodology for identifying landslides that occur in vegetated regions using dual-pol Sentinel-1 SAR data and machine learning. Both amplitude and phase information from Sentinel-1 SAR data are used as input parameters for deep learning models to detect landslide changes and segment them automatically. Several examples of large landslides in China, Kyrgyzstan, and Iran are presented, for which high-quality external datasets and our own field observations allow the detailed ground truthing needed for validation of the results.
As climate change increases the vulnerability of countries to flood events, analysing publicly available Earth observation data with sophisticated algorithms has the potential to improve global risk management. However, moving from small-scale to country-scale algorithm deployment, in order to have a real impact, requires significant technical adjustments.
In order to have a near-real-time flood monitoring system and risk assessment for large geospatial areas, we have built a data pipeline that leverages a variety of cloud technologies and open geospatial standards. We have been using Google Cloud's Dataflow service, built on Apache Beam, to define and run jobs that batch-download and preprocess a diverse array of public geospatial datasets, and then run convolutional machine learning inference to generate model-versioned flood maps. Dataflow is a scalable solution that parallelises the workloads under the hood. It offers auto-scaling of machine resources with GPU hardware acceleration. At the same time, it is also cost-effective, as the user only pays for the workloads that are queued, avoiding idle machine expenses. These pipelines can be queued manually, or kicked off automatically in real time as data sources update with new payloads. In our data preprocessing steps, we convert our spatial datasets into Cloud Optimized GeoTIFFs (COG), which allow for flexible range querying. In order to keep track of spatial datasets, we use an indexing server that implements the SpatioTemporal Asset Catalog (STAC) open standard. This indexing server allows us to clearly organise and query our preprocessed satellite imagery and versioned flood maps, while still storing the raw data in Google Cloud Storage. As a final step, we used GitHub Actions to add Continuous Integration (CI) for these pipelines, running them with Beam's DirectRunner, in order to have robust testing and infrastructure.
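A minimal Apache Beam sketch of such a pipeline is given below (the step names and processing functions are placeholders, not our production code); switching the runner option from DirectRunner to DataflowRunner moves the same pipeline onto Google Cloud Dataflow.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def download_scene(scene_id):
    # Placeholder: fetch the raw scene for this identifier.
    return scene_id

def convert_to_cog(scene):
    # Placeholder: retile and write a Cloud Optimized GeoTIFF.
    return scene

def run_flood_model(cog):
    # Placeholder: run convolutional inference and emit a versioned flood map.
    return cog

def run(scene_ids, runner="DirectRunner"):
    options = PipelineOptions(runner=runner)
    with beam.Pipeline(options=options) as p:
        (p
         | "CreateSceneIds" >> beam.Create(scene_ids)
         | "Download" >> beam.Map(download_scene)
         | "ToCOG" >> beam.Map(convert_to_cog)
         | "Inference" >> beam.Map(run_flood_model))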
Abstract:
In this paper, we present the United Nations Satellite Centre (UNOSAT) FloodAI solution: an end-to-end pipeline in which Copernicus Sentinel-1 Synthetic Aperture Radar (SAR) imagery of flood-prone areas is automatically downloaded and then processed by a deep learning model based on a Fully Convolutional Neural Network. The model returns flood vector data used to update live flood maps for emergency response, supporting the humanitarian community during flood events with as much information as possible on the evolution of the water extents. FloodAI was deployed on GPUs at the European Organization for Nuclear Research (CERN), connected to a 64 TB data storage server and a high-speed CERN internet connection. The entire infrastructure is currently being transferred to a centralized cloud service at CERN built on Kubeflow, a machine learning platform on Kubernetes. We therefore relied heavily on a combination of software, hardware, a high-speed internet connection, and a data storage server to turn the proof-of-concept deep learning model into a near-real-time monitoring service. To reach these results, we performed qualitative and quantitative analyses based on real flood cases to test the machine learning model's performance. We then built a series of case studies in Mozambique, Bangladesh, Myanmar, and Nepal to identify where the results were successful and can be used in an operational rapid mapping setting to respond to a disaster, prior to the implementation of a human-in-the-loop protocol to review the results from the pipeline. We have also assessed the different ways in which the platform is used by analysts and decision-makers, to identify positive examples of success as well as places in which the platform did not perform as intended. In general, prioritizing the quality of the dataset following a data-centric approach turned out to be the most effective way to improve model performance. We also find that access to dynamic flood updates, together with impact analyses such as statistics on the exposed and affected population and inaccessible roads, has the potential to support the United Nations country teams, relief agencies, civil protection, government agencies, national disaster management institutes, and policy and decision-makers during the disaster response. However, finding ways to leverage the time gained from the automation enabled by machine learning algorithms, and to transfer it to the emergency response indicators, remains the challenge in positively impacting the exposed populations and providing assistance to emergency responders, particularly those in the field. Collaborating with end-users in the field as well as experts in other domains is key to integrating AI-based tools into the existing emergency response protocols, providing access to the requested information in the most efficient format at the right time. UNOSAT continues to work on the development of deep learning methods and aims to implement additional services in the near future, such as a multi-sensor flood detection solution.
Authors: Nemni, E.; Belabbes, S.; Maskfiq, K.; Tawala, J.; Hunger, T.; Bunnasarn, W.; Sateiandee, S.; Dell'Oro, L.; Bjorgo, E.
The Carpathian Basin is subject to a wide range of weather conditions throughout the year. On the one hand, the rainy periods at the end of winter cause shallow inundations on the flat areas, while on the other hand, long dry periods during spring and summer can cause drought in the same regions. On the flat areas with small infiltration capacity, inland excess water (IEW, sometimes also described as ponding water or waterlogging) can be present for periods ranging from one or two days to several weeks or months. In contrast to fluvial and coastal flooding, inland excess water remains on the surface due to limited runoff, infiltration and evaporation, or in places where groundwater flowing towards lower areas seeps through porous soils to the surface. This phenomenon is also known in other low-lying countries (the Netherlands, Poland, Germany), although IEW has a particularly long research tradition in Hungary. Climate change models predict increased precipitation intensity in the Carpathian Basin, which may increase the risk of IEW in the future. To effectively address these floods and take measures to prevent them or mitigate their damage, it is important to understand where and why they occur. The inundations can be very dynamic in nature. Depending on meteorological conditions (temperature, precipitation, wind speed) before, during and after their development, they can appear quickly, but can also disappear fast. Therefore, it is important to use as many satellite images as possible for their monitoring, including ones with atmospheric disturbances (clouds). In this study, we present methodologies based on different indices and classification methods, as well as more advanced deep learning methods, to extract inland excess water using Sentinel-2 satellite imagery. Although the size of individual IEW patches varies considerably, ranging from tens to thousands of square meters, earlier research has shown that Sentinel-2 has sufficiently high spatial resolution to detect them outside of urban areas. In the study area in Hungary, Sentinel-2 has an average temporal resolution of three days, but due to the nature of the studied phenomenon, the images usually contain many clouds. To be able to monitor the development of IEW, we use every available pixel in the images, even when a large part of the image is cloudy.
In our research, we use threshold-based segmentation of the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI) and the Modified Normalized Difference Water Index (MNDWI) to create binary (water - no water) maps. We also apply supervised classification using more traditional machine learning models (Maximum Likelihood (ML), Random Forest (RF) and Support Vector Machine (SVM)) to create thematic maps with 9 classes. The classes include the desired water classes and other land cover classes. Some classes have a light and a dark version for areas covered by cloud shadows; this "dark classes" approach improves the classification results significantly. The thematic maps are reclassified to binary water maps to make them comparable with the index-based results. The third group of algorithms that we apply to extract the shallow water bodies is based on artificial neural networks. Two approaches are evaluated. The first is a densely connected deep neural network (DNN) that is fed with the same training set as used for ML, RF and SVM. The DNN produces a thematic map with 9 classes that is reclassified to a binary water map. The second approach is a convolutional neural network that is trained on known permanent water bodies. The result is a binary water map. All binary output maps are compared to an independent, manually derived validation set. The aim of the research is to evaluate which methodology is optimal for monitoring the position and extent of IEW inundations over large areas, with sufficiently high spatial and temporal resolution. The continuous monitoring can be used to understand the development of IEW, to mitigate the risk it poses to infrastructure and agriculture, and to reuse the water in periods of drought.
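As a minimal illustration of the index-based branch described above (band choices follow the standard index definitions; the thresholds are placeholders rather than the values tuned in this study):

import numpy as np

def index_water_mask(green, nir, swir, ndwi_t=0.0, mndwi_t=0.0):
    # green, nir, swir: Sentinel-2 reflectance arrays (e.g. B3, B8, B11).
    ndwi = (green - nir) / (green + nir + 1e-9)     # McFeeters NDWI
    mndwi = (green - swir) / (green + swir + 1e-9)  # modified NDWI (Xu)
    # Binary water / no-water map.
    return (ndwi > ndwi_t) | (mndwi > mndwi_t)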
Due to the increasing frequency of extreme events, satellite-based flood detection to support rapid emergency response and decision making is rapidly gaining interest. Current approaches for flood extent mapping rely on thresholding features and indices from optical and Synthetic Aperture Radar (SAR) satellite images. However, this process is not automatic and requires the involvement of remote sensing experts with extensive domain knowledge to perform and validate the analysis. Deep learning models represent a promising alternative to these methods, as they can learn from the available database of satellite images and annotated flood events to automatically produce faster and more accurate flood maps. However, training deep learning models requires high-quality datasets with accurate labels, balanced classes, and enough training samples representative of worldwide flood events. Producing these datasets is complex, time consuming, and requires significant remote sensing expertise. In this context, we present ml4floods, an open-source package with an end-to-end pipeline for flood extent mapping that includes data acquisition, preprocessing, model training and model deployment. The package implements training and testing of deep learning models for Sentinel-2 flood extent segmentation based on WorldFloods. It additionally contains routines for downloading flood data stored in emergency response databases, co-located Sentinel-2 images, and other related products.
Using this framework, we extended the WorldFloods dataset with 26 additional flood maps for test and validation, which are geographically diverse and contain high-quality labels. This allows flood segmentation performance to be validated reliably worldwide. We also extended the training dataset with new flood events and fixed major errors in the labels. Moreover, we propose several deep learning models and validate them against our diverse test dataset. In particular, we propose a novel model that performs multi-output binary classification instead of multi-class classification. This model predicts the clear/cloudy classes in one output channel and land/water in another, whereas the former models predict 3 mutually exclusive classes (land/water/cloud) in a single output channel. With this new model we are able to correctly classify land/water in partially cloud-covered areas with thin and semi-transparent clouds. We provide several performance metrics and compare the models trained with different channel configurations. For validation purposes, we also compare the results against a thresholding method based on the Modified Normalized Difference Water Index (MNDWI), with favorable results for our models. To the best of our knowledge, this is the first work that provides a globally diverse dataset to benchmark flood detection algorithms on medium-resolution multispectral satellites such as Sentinel-2.
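The difference between the two output formulations can be sketched as follows (a simplified head, not the exact ml4floods architecture): instead of a single three-class softmax over land/water/cloud, two independent binary outputs are predicted, so a pixel can still be labelled land or water under a thin cloud.

import torch.nn as nn

class MultiOutputHead(nn.Module):
    def __init__(self, in_channels=64):
        super().__init__()
        self.cloud_head = nn.Conv2d(in_channels, 1, kernel_size=1)  # clear vs cloudy
        self.water_head = nn.Conv2d(in_channels, 1, kernel_size=1)  # land vs water

    def forward(self, features):
        return self.cloud_head(features), self.water_head(features)

def multi_output_loss(cloud_logits, water_logits, cloud_gt, water_gt):
    # Two independent binary cross-entropy terms instead of one 3-class loss.
    bce = nn.BCEWithLogitsLoss()
    return bce(cloud_logits, cloud_gt) + bce(water_logits, water_gt)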
Acknowledgements: This work was supported by the Spanish Ministry of Science and Innovation under the project PID2019-109026RB-I00.
Large-scale marine oil spills are one of the major threats to ocean and coastal ecosystems. Among the main causes are human errors on oil-drilling rigs, pipelines in poor condition and illegal oil discharges into the sea. In 2010, one of the largest oil spills on the planet, known as Deepwater Horizon (DWH), took place in the Gulf of Mexico, from April 2010 to January 2011, during which 594,000 tons of oil were discharged into the ocean, causing serious environmental damage. Several other accidents have also occurred in the Gulf of Mexico: the Ixtoc well spilled approximately 530,300 tons of oil from June 1979 to March 1980, and more recently (October 2019) the Cayo Arcas maritime terminal discharged approximately 90 tons of oil, affecting an area of 4,484 ha. Given the severe damage these disasters can cause, it is imperative to detect the location and extent of oil spills in a prompt and reliable manner in order to take action against their harmful effects. Remote sensing techniques using Synthetic Aperture Radar (SAR) have been widely used for detecting oil spills and monitoring their evolution. They provide the capability to monitor the sea surface during day, night or under adverse weather conditions, by detecting changes in the wave properties due to the presence of oily fluids. An oil layer inhibits the formation of short gravity-capillary waves, resulting in darker spots in the SAR images. In the absence of oil and the presence of sea surface winds, the backscattering of microwaves is accentuated, producing brighter regions in the images, usually related to oil-free sea surface areas.
There are some inherent problems in the detection of oil spills using SAR sensors. The oil layer on the sea surface is modified by the influence of ocean surface dynamics and dissipated into the environment by multiple physico-chemical processes that modify its composition, such as evaporation, dispersion, emulsification, dissolution, oxidation, sedimentation, and biodegradation. Oil spills are therefore inherently dynamic, which makes their temporal observation difficult. Fortunately, satellite SAR sensors offer great coverage of the ocean surface, so that in early stages the spill can be detected accurately. There are several methods for oil spill detection that use image processing techniques, such as adaptive thresholds, hand-crafted features based on single radiometric and polarimetric image values, and, very recently, Deep Learning (DL) methods. The popularity of DL methods is related to several factors, among the most important being the increase in computing performance (cloud computing, high-end CPUs and GPUs) and the availability of larger datasets (in particular of marine oil spills), which allow robust training of complex models (such as the interaction between oil spills and ocean dynamics), leading to high classification accuracy.
In this work, we propose a SAR image classification method (at pixel level) based on Deep Neural Networks (DNN). The method's main steps are: 1) database creation and labeling; 2) image filtering; 3) DNN training; and 4) binary pixel-level classification and post-processing (correction) using sea-surface wind field estimation. Due to the lack of public oil spill databases fitting our research needs, we created our own database, consisting of a large image dataset that guarantees a correct learning process for the DNN, which in turn reduces classification overfitting. Our database consists of 32 full-resolution Envisat images (8000x8000 pixels) of the DWH oil spill, corresponding to 28 Wide Swath Medium resolution (WSM, 150 m spatial resolution) and 4 Image Mode Precision (IMP, 25 m spatial resolution) acquisitions with vertical-vertical (VV) co-polarization. The 32 full-resolution images were split into 64,961 sub-images of 224x224 pixels and pre-processed to create a multiple-channel input IM = {IO, IG, IV} to the DNN. The first channel, IO, contains the radiometric values of the original SAR images; the second and third channels are the outputs of Gaussian and variance filters applied to IO, called IG and IV respectively. IG is a low-pass-filtered image that highlights the homogeneous dark areas of oil spills, while IV incorporates texture information that captures oil spill transition regions. 80% of the filtered images were labeled as oil and non-oil (sea-surface) regions for the DNN training process; the remaining 20% were used for the classification process. Our DNN is based on U-Net, a pre-trained encoder-decoder model developed for pixel-level classification (semantic segmentation) of biomedical images. The U-Net decoder block was re-trained with our three SAR input channels IM in order to learn and solve our oil pixel classification problem. Our experimental results show a high pixel-level oil classification accuracy of 97.84% for both image resolutions (WSM and IMP), which proves the effectiveness of our proposed methodology. These results give us the confidence to proceed with our plan to build an automatic system to detect oil spills in the Gulf of Mexico and to systematically monitor their evolution.
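The construction of the multi-channel input IM = {IO, IG, IV} can be sketched as follows (filter sizes are illustrative assumptions, not the values used in our processing chain):

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def build_input_channels(sar, sigma=2.0, win=7):
    # IO: the original SAR radiometric values.
    i_o = sar
    # IG: Gaussian low-pass filtered image emphasising homogeneous dark (oil) areas.
    i_g = gaussian_filter(sar, sigma=sigma)
    # IV: local variance (texture) image capturing oil/sea transition regions.
    local_mean = uniform_filter(sar, size=win)
    i_v = uniform_filter(sar ** 2, size=win) - local_mean ** 2
    # Stack into the three-channel input fed to the U-Net as 224x224 patches.
    return np.stack([i_o, i_g, i_v], axis=-1)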
Lava flows are the primary hazard associated with effusive volcanic activity. Monitoring and mapping the extent and advance of a lava flow can help reduce damage to people and infrastructure. Space agencies nowadays release a great amount of raw data and ready-to-use products useful for volcano monitoring. Advances in satellite remote sensing offer unparalleled opportunities for detecting eruptive activity, thanks to the massive amount of data, with varying temporal and spatial resolution, provided by a number of sensors operating in the visible, infrared, thermal, and microwave regions of the electromagnetic spectrum. A human analysis of the data available for monitoring could be time consuming and inappropriate for real-time applications, due to the great quantity of satellite data, from different sensors and with different temporal, spatial and spectral resolutions, and to the limited time available for the analysis. However, current developments in cloud computing and artificial intelligence algorithms have made the monitoring of volcanic hazards from space more feasible for volcano observatories. Here, we present a robust approach to detect and map lava flows that exploits the potential of machine learning techniques to automatically analyze the large quantity of available satellite data. In particular, we build a model capable of detecting and mapping a lava flow, identifying the eruptive activity, following and mapping the lava flow emplacement, and quantifying the areal coverage and lava volume (Figure 1).
First, we obtain and pre-process different satellite data (such as ESA Copernicus Sentinel-1 and Sentinel-2 data, NASA & USGS Landsat 8 data, NASA Terra data and ESA/EUMETSAT MSG data) via the cloud platform Google Earth Engine. The satellite images are analyzed using a model based on machine learning and deep learning approaches, developed in a cloud-based platform such as Google Earth Engine or Google Colaboratory. We train the model on the large amount of satellite data available from different volcanoes and make it ready for new eruptions, in order to have a near real-time analysis. In detail, we developed an alert system that recognizes a new eruption and automatically signals the beginning of the activity. Subsequently, newly available satellite images are fed to the deep learning algorithm, which recognizes and segments the components of the images linked to the volcanic activity. In this step, different bands are used. We distinguish between lava flows and background regions through their different spectral responses in the bands from the visible to the shortwave infrared. We built a robust machine learning model taking as input a vector of features based on the specific response of lava flows in a portion of the spectrum. Lava flows with different chemical and physical compositions reflect in different ways; our machine learning model exploits this spectral information to classify the images, using different bands. Finally, if a lava flow is present, the model maps its emplacement and calculates its areal extent and volume. We will describe and demonstrate the operation of this approach by investigating the recent lava flow-forming eruptions that occurred between 2020 and 2021 at the Geldingadalir (Iceland), Cumbre Vieja (Spain) and Etna (Italy) volcanoes.
Satellite-based Earth Observation (EO) is a key technology for applications like emergency management, civilian security, environment and hazard monitoring. Consequently, demands on amount, type and quality of remote-sensing satellite data and efficient methods for data analysis have increased sharply in recent years. However, the use of satellite-based image products for scenarios which require very low latencies, especially rapid meteorological and civil security applications like disaster management, is still limited by the bottleneck created by the classical EO data chain, which involves the acquisition, compression, and storage of sensor data on-board the satellite, and its transfer to ground for further processing. On-board processing offers a promising solution to reduce the latencies between data acquisition and product delivery to the end user. The H2020 EU project EO-ALERT (http://eo-alert-h2020.eu) implements this approach through the development of a next-generation EO data processing chain that moves key elements from the ground segment to on-board the satellite. By optimising the classical EO processing chain in a number of critical aspects, including high-speed avionics, Flight Segment/Ground Segment communications, on-board compression and data handling, and on-board image generation and processing, the system enables the delivery of the EO product directly to the end user with very low latency. In EO-ALERT, the capabilities of the approach are tested for multiple reference user scenarios: Autonomous ship detection for maritime surveillance from both SAR and optical imagery, providing a service similar to the EMSA Vessel Detection
Service (VDS); meteorological nowcasting for early warnings of convective storms from optical/multispectral imagery, similar to the EUMETSAT NWCSAF’s Rapidly Developing Thunderstorms - Convection Warning (RDT-CW) product; and ocean wind speed and wave height detection from SAR imagery.
This article presents an overview of the EO-ALERT architecture in terms of hardware and performance, and demonstrates its capabilities and applicability to near-real-time natural hazard monitoring applications by focussing on EO-ALERT's extreme weather product for meteorological nowcasting of convective storms. It describes the development approach of the machine-learning-based on-board processing solution for the early warning system, which is trained and tested on a specifically created dataset of multispectral MSG-SEVIRI images and ground truth data from the OPERA weather-radar network. By applying artificial-intelligence-based image processing, storms can be detected before they are seen in radar data on the ground. Alerts for detected convective cells are then sent to ground on the fly, before the actual raw data is transmitted. The implementation of this solution in software and hardware and its test on an avionics test bench with real EO data are discussed in order to demonstrate that the performance and latency requirements for severe weather detection can be met. Inspired by and designed to complement the NWCSAF RDT-CW product, the system is able to send the processed information and EO product to ground within 5 minutes of the observation. Such capabilities could be employed in future satellites, either single-satellite systems (e.g., GEO satellites) or constellations of LEO (polar) satellites, in order to greatly increase the responsiveness to extreme weather events and other types of natural hazards. This innovative severe weather detection and warning service can help protect property and, more importantly, save lives.
Climate change is increasing the frequency and severity of floods, affecting both developing and developed countries across the globe. In 2020, the Centre for Research on the Epidemiology of Disasters (CRED, UCLouvain) [1] reported 23% more floods than the annual average of 163 events, and 18% more flood deaths than the annual average of 5,233 deaths. A crucial task during flood emergencies is to rapidly map the affected areas and use the information to support disaster response and relief efforts.
Earth Observation is playing a major role in emergency mapping. In particular, Synthetic Aperture Radar (SAR) imagery is routinely used to determine flood extent and further derived products. Compared to optical data, SAR satellites have the advantage of all-weather and day/night image acquisition capability. Deep Learning (DL) algorithms have advanced computer vision tasks such as classification and segmentation. Thanks to increased computational power and, above all, the introduction of Convolutional Neural Networks (CNN), DL algorithms are being explored in a vast number of application areas. For flood mapping and monitoring tasks, CNN [2], Deep CNN [3], and U-Net [4, 5, 6] models have been proposed in previous studies.
We investigated novel DL methods for flood detection as a segmentation task using uni-temporal Sentinel-1 data from the Sen1Flood11 [7] dataset. In particular, we proposed two different model architectures. The first is a variation of the U-Net architecture and the second is a fusion network over three modalities. The proposed methods have been tested by combining SAR data with available low-resolution elevation and permanent water maps to assess the possible improvement from using ancillary data. The experiments highlight how the convolutional network, architectural modifications, and different learning techniques can learn the relationship between multiple modalities and help achieve state-of-the-art results.
Single SAR images are not sufficient to distinguish between permanent and transient water. However, this can be resolved if we treat flood detection as a change detection task on multi-temporal images of the flooded regions. Hence, we extended our study to multi-temporal Sentinel-1 SAR data. In this part of the study, we propose an unsupervised DL method to detect floods as a change detection task, owing to the lack of ground truth. To achieve better generalization, the experiments are being conducted on data from 13 different sites, in line with the Sen1Flood11 dataset. The final results will be reported at the Living Planet Symposium.
Satellite EO in support for Anticipatory Action: novel approaches to forecast drought conditions
Droughts are a major threat globally, as they can cause substantial damage to society, especially in regions that depend on rain-fed agriculture. Acting early, based on alerts provided by early warning systems (EWS), can provide substantial mitigation, reducing the financial and human cost. Most EWS monitor current key biophysical and socio-economic factors to assess the possible exposure of vulnerable people to specific hazards, and increasingly include expert knowledge and qualitative assessments of seasonal climate forecasts to assess the future development of food security and to define actions to mitigate possible losses. There is growing interest in including forecasts of the impacts of these hazards. Additionally, it is unclear whether the satellite-based indicators and their associated thresholds used in these EWS clearly relate to the conditions of interest on the ground.
Droughts are complex climatological hazards that impact society in numerous ways. With no consensual definition, they are often defined by how they are perceived, leading to several types of drought. Insufficient precipitation is described as meteorological drought, which, if it persists, can cause a decline in surface and subsurface water resources, leading to hydrological drought, and eventually soil moisture decline and crop failure that cause agricultural drought. If these events adversely impact society, then we deal with socio-economic droughts. Recently, additional definitions have been suggested that focus on ecological and flash droughts. In our work, we particularly focus on developing novel monitoring and forecasting tools in relation to agricultural drought.
Drought-related food insecurity is particularly devastating as it not only leads to food and water shortages but also perpetuates poverty and under-development. There is therefore a growing interest in moving toward a proactive humanitarian approach to such disasters by developing anticipatory actions based on forecasts. Additionally, being better prepared before a drought hits significantly reduces the costs and losses from these disasters. This talk will mainly focus on the work we have done in Kenya through various projects in collaboration with the National Drought Management Authority (NDMA).
Focusing mainly on pastoral areas in Kenya, we first assessed whether several commonly used satellite-based drought indicators could effectively monitor conditions on the ground throughout the 23 counties overseen by the NDMA. The NDMA relies on the Standardized Precipitation Index (SPI) and the 3-month Vegetation Condition Index (VCI) to empirically evaluate the biophysical situation, with set thresholds indicating drought intensity. We also evaluated whether setting county-specific thresholds is adequate for a better classification of drought conditions with these indicators. We then developed a suite of machine-learning techniques to forecast the 3-month VCI up to 12 weeks ahead at county and sub-county level. First, we focused on Gaussian Process modelling, which uses historical observations of the indicator to provide weekly forecasts. This approach demonstrated high forecasting skill up to six weeks lead time. Next, we included the interactions between lagged information from the indicators and variables like precipitation, soil moisture, and vegetation condition in an auto-regressive distributed lag model. This approach allowed us to accurately forecast VCI at longer lead times (up to eight weeks). Finally, to improve the accuracy and precision of agricultural drought forecasts in spatially diverse regions, we developed a Hierarchical Bayesian Model that better captures the variability of the indicator across different agro-ecological zones and vegetation land covers. In most cases, this approach improved the lead times by another week. Providing highly skilful forecasts of vegetation condition will allow disaster risk managers to act early to support vulnerable communities and limit the impact of a drought hazard.
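As an illustration of the first (Gaussian Process) forecasting step, the following sketch fits a GP to a weekly 3-month VCI series and extrapolates it a few weeks ahead; the kernel choice and lead time are assumptions, not the published configuration (see Barrett et al., 2020 for the actual models).

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def forecast_vci(vci_weekly, weeks_ahead=6):
    # Time index as the only input feature; vci_weekly is a 1-D weekly series.
    t = np.arange(len(vci_weekly)).reshape(-1, 1)
    kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(t, vci_weekly)
    t_future = np.arange(len(vci_weekly), len(vci_weekly) + weeks_ahead).reshape(-1, 1)
    mean, std = gp.predict(t_future, return_std=True)
    return mean, std  # forecast and its uncertainty for each week ahead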
For more information, see:
Barrett, A.B., Duivenvoorden, S., Salakpi, E.E., Muthoka, J.M., Mwangi, J., Oliver, S. and Rowhani, P., 2020. Forecasting vegetation condition for drought early warning systems in pastoral communities in Kenya. Remote Sensing of Environment, 248, p.111886.
Bowell, A., Salakpi, E.E., Guigma, K., Muthoka, J.M., Mwangi, J. and Rowhani, P., 2021. Validating commonly used drought indicators in Kenya. Environmental Research Letters, 16(8), p.084066.
Salakpi, E.E., Hurley, P.D., Muthoka, J.M., Barrett, A.B., Bowell, A., Oliver, S. and Rowhani, P., 2021. Forecasting Vegetation Condition with a Bayesian Auto-regressive Distributed Lags (BARDL) Model. Natural Hazards and Earth System Sciences Discussions, pp.1-31.
Salakpi, E.E., Hurley, P.D., Muthoka, J.M., Bowell, A., Oliver, S. and Rowhani, P., 2021. A Dynamic Hierarchical Bayesian approach for forecasting Vegetation Condition. Natural Hazards and Earth System Sciences Discussions.
The ESA VISTA (Volcanic monItoring using SenTinel sensors by an integrated Approach) EOEP-5 project was aimed at developing a novel ensemble of algorithms to monitor volcanic eruptions by exploiting Copernicus Sentinel-3 SLSTR data. Nowadays, the increasing availability of satellite data from the operational Sentinel missions offers an innovative perspective for the monitoring of volcanic emissions.
However, the possibilities offered by the Copernicus Sentinel missions have so far been only partially exploited to provide new, consistent and statistically reliable information about volcanic cloud detection and the quantification of ash parameters. Such information is crucial for aviation safety and civil protection purposes, and therefore new tools to exploit satellite observations are required. The VISTA project has developed specific methodologies integrating inverse modelling techniques (based on radiative transfer models) with machine learning procedures to formulate a set of novel integrated methods.
In particular, a novel Neural Network (NN) algorithm has been developed during the VISTA project for the inversion of SLSTR radiances to retrieve three main ash parameters: the effective radius (Re), the Aerosol Optical Depth (AOD) and the Mass (M). It is well recognized that the definition of an inversion model based on a Neural Network architecture requires careful optimization of the network architecture and a dataset that sufficiently represents the statistics of the parameters to be retrieved. In the VISTA project, a procedure to train NNs over four latitudinal belts, ensuring global coverage of the retrieval models, has been developed and tested. In particular, Radiative Transfer Model (RTM) simulations have been used to produce large synthetic training, validation and test datasets for the NN training. The advantages of the proposed approach are manifold. On one hand, producing synthetic data through RTM simulations overcomes the limitation of building training sets from real Sentinel-3 data, which are in general insufficient or not available over all volcano locations around the world.
On the other hand, the NN model performs a non-linear interpolation of the discrete set of parameters simulated by the RTM, allowing the solution to perform efficiently also on subsets of the input space not completely covered by the training set statistics. Finally, once the NN model is optimised and trained, it can be run very fast and within fully automated processing pipelines. For the optimization of the NN architecture for each considered latitudinal belt, a simulated annealing approach is applied, which exploits different information criteria linked to the architecture complexity.
The proposed approach has been tested by producing specific RTM simulations over the various latitudinal belts considered, and by comparing the results of the trained NN with those obtained by applying the state-of-the-art Look-Up Table (LUT) procedure to the same RTM-simulated data. Both the LUT and the NN solutions have also been applied to several recent volcanic eruptions (e.g. Raikoke 2019, Etna 2018, Ulawun 2019) observed by Sentinel-3 SLSTR. The results demonstrate the feasibility of the proposed approach, with correlation coefficients between the NN and LUT results ranging from 65% to 94% for the different ash parameters over the various volcanic events considered.
SAR-based flood mapping in urban areas is still a challenging task due to the complexity of the backscattering mechanisms. It has been proven that the synergistic use of multi-temporal SAR intensity and InSAR coherence can compensate for their respective inherent limitations and improve the accuracy of urban flood mapping. In recent years, deep neural networks (DNNs) have been explored for urban flood mapping with satellite data and have shown promising results thanks to their powerful representation learning capacity. However, the scarcity of labeled training data can prevent the generalization of a DNN trained only on a few historical flood events that occurred in specific areas. In addition, unbalanced datasets make it even harder for a DNN to learn robust representations of the different classes, which is usually the case for urban flood detection, where the samples of the 'flooded urban area (FU)' class are much scarcer than those of the 'flooded open area (FO)' and 'non-flooded area (NF)' classes. To address this problem, we introduce a plug-and-play module that integrates prior knowledge of urban geometry into the DNN to form an urban-informed DNN. The plug-and-play module consists of a channel-wise attention submodule and an urban conditional normalization submodule, which calibrate the learned features, thus yielding more robust representations and achieving better generalization to new flood events in different geographical locations. The urban geometry information is derived from a series of SAR intensity and InSAR coherence images, which can be prepared beforehand in the near real-time mapping scenario.
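For illustration, the channel-wise attention submodule can be sketched as a squeeze-and-excitation style recalibration (a simplified stand-in, not the exact module used here; the urban conditional normalization submodule is not shown):

import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        # x: feature maps of shape (batch, channels, height, width).
        b, c, _, _ = x.shape
        weights = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # channel-wise recalibrated features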
The effectiveness of the proposed approach was tested within the U-Net image segmentation framework on four different urban flood events that occurred in the USA, Japan, Mozambique, and Somalia. The bi-temporal (i.e., pre-event and post-event) Sentinel-1 VV and VH dual-polarization intensity and interferometric coherence were used as the input of the U-Net, and three classes were considered as the output: FU, FO, and NF. Our results indicate that, compared to the original U-Net, the urban-informed U-Net using the proposed plug-and-play module achieved a noticeable improvement for FU detection without a significant increase in computational complexity: the F1 score was improved by 0.3 on average. It is worth mentioning that the high generalization and automation capability achieved makes this approach promising for disaster emergency management, where timeliness is a critical factor and a robust trained DNN able to eliminate human intervention is a valuable advantage.
In Europe, extreme weather events are becoming more frequent and more severe. This is due to global climate change, urbanization, population growth, and environmental degradation. Higher temperatures, heat waves and drought also increase the likelihood of forest fires. Climate experts believe that this year's devastating global fire season is already a result of global climate change, providing a glimpse of what to expect in the future. In 2020, an area of approximately 10,000 km2 was burned in the EU, other European countries, the Middle East and North Africa, according to the recent EFFIS report on forest fires in 2020. This corresponds to an area larger than the Republic of Cyprus. This underlines the need for reliable forest fire early warning and monitoring systems in Europe, supporting firefighters, civil protection and citizens to cope with the risk and impact of wildfires. Besides casualties, large economic losses are the consequence, and the increased frequency of wildfires also increases the emissions of greenhouse gases, which in turn contribute to global warming.
The Horizon 2020-funded project SAFERS - ‘Structured Approaches for Forest Fire Emergencies in Resilient Societies’ (https://safers-project.eu/) is developing an open and integrated platform featuring smart services that aim to increase society’s resilience against forest fires. The platform will use information from Copernicus and GEOSS, in-situ sensors, weather forecasts and crowdsourced data that can be used by citizens and first responders to provide situational in-field information.
Specifically, novel Machine Learning (ML) and Artificial Intelligence (AI) approaches are being developed as part of the SAFERS smart services. These focus on improving the accuracy and speed of mapping burned areas and their severity, building on the Copernicus Sentinel remote sensing offer. In addition, the EO data, together with multiple other data sources, are used to improve the understanding of risk in the wildfire context and to integrate this information into the decision support system. The presentation will also showcase the training datasets and methodologies used to develop these novel EO-based AI solutions.
Wildfire hazard and risk early warnings are an essential component of hazard management. SAFERS aims to augment operational wildfire hazard and risk mapping to improve the dissemination, uptake, and updating of early warnings. This includes the integration of the Copernicus European Forest Fire Information System (EFFIS), lightning forecasts for potential ignition of wildfires, and EO-derived data about fire fuel and historical wildfire events. These data are combined with dynamic wildfire risk models that can also integrate local data. Wildfire hazard and risk early warnings will provide the tools necessary to help make the Wildland Urban Interface (WUI) more resilient, safe and sustainable in the wildfire hazard context. By thinking Europe-wide but acting locally, SAFERS will maximise the impact of the operational early warnings.
Wildfires constitute one of the most widespread environmental hazards and are regarded as one of the dominant sources of disturbance in natural ecosystems, especially in regions such as the Mediterranean. The high frequency of fires is a natural and recurrent element in the Mediterranean region and has been closely associated with the climatic conditions that dominate in these areas. Over the past few decades, wildfire research has been receiving increasing attention in several regions of the world, including Mediterranean regions, because of the wide range of ecological, economic, social, and political impacts. Accurate knowledge of the geographical and temporal distribution of the fires is also vital in modelling the atmospheric and climatic impacts of biomass burning.
Earth observation (EO) is increasingly being used as a practical solution for the rapid and cost-effective evaluation of impacts from wildfires. The general circumstances that make this technology attractive for this purpose include its ability to provide inexpensive and repetitive synoptic views of large areas, at a spatial resolution suitable for regional and global fire analysis studies, even over inaccessible locations. Operational products and related services providing burnt area maps at multiple resolutions and spatial scales have also been developed by international space agencies, evidencing the high level of maturity of this technology in this regard. Yet, the added value of sophisticated algorithms, such as those based on machine learning (ML), implemented with EO data from the most recently launched satellites, in mapping burnt areas remains a topic of ongoing investigation.
The present study evaluates the use of EO imagery from Sentinel-2 and Machine Learning (ML) in burnt area delineation. A further objective was to assess the added value of cloud platforms such as Google Earth Engine (GEE) in automating and operationally implementing burnt area cartography with the investigated algorithms. As a case study we used one of the most devastating wildfire events of the summer of 2021, in the North Evoia region of Central Greece. This event was the largest fire in Greece over the past decade, burning more than 470 km2 of agroforestry area, corresponding to more than a third of the total devastated area in Greece. Copernicus Sentinel-2 imagery obtained immediately after the fire outbreak (image acquisition date: 18/08/2021) was combined with pixel-based and object-based (i.e. GEOBIA) ML classifiers to map the burnt area extent in the studied area. In this context, the Rapid Mapping (RM) component of the Copernicus Emergency Management Service, which provides geospatial information immediately after a disaster, was used to validate the ML-derived results in a GIS environment.
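As an illustration of the workflow, a minimal Google Earth Engine (Python API) sketch of the pixel-based branch is given below; the area of interest, dates, cloud threshold, training-point asset and random forest settings are illustrative assumptions, not the configuration actually used in the study.

```python
# Minimal GEE (Python API) sketch of pixel-based burnt-area classification with
# Sentinel-2; asset IDs, dates and the training table are placeholders.
import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([23.2, 38.7, 23.6, 39.1])  # rough North Evoia box (assumption)

def s2_composite(start, end):
    """Median Sentinel-2 L2A composite with basic cloud filtering."""
    return (ee.ImageCollection('COPERNICUS/S2_SR')
            .filterBounds(aoi)
            .filterDate(start, end)
            .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
            .median())

post = s2_composite('2021-08-18', '2021-08-25')
bands = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12']

# Hypothetical FeatureCollection with a 'burnt' property (0 = unburnt, 1 = burnt)
training_points = ee.FeatureCollection('users/example/north_evoia_training')
samples = post.select(bands).sampleRegions(collection=training_points,
                                           properties=['burnt'], scale=20)

classifier = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty='burnt', inputProperties=bands)

burnt_map = post.select(bands).classify(classifier)
```

A GEOBIA variant would first segment the composite into objects and classify object-level statistics instead of individual pixels.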
Our results evidenced the value of the synergistic use of Sentinel-2 imagery with the investigated ML techniques, particularly GEOBIA, in burnt area mapping. In addition, the study demonstrated the advantages of cloud platforms such as GEE for operationalising the investigated approaches and mapping burnt areas over large scales in a rapid and cost-effective manner. The combined use of ML with EO imagery from satellites such as Sentinel-2, together with GEE, can assist in better understanding the impacts of wildfires at the regional scale, and this study provides a methodological framework in this direction. Such information is also of crucial importance to policy makers and local authorities, who can use it to prioritise the rehabilitation of fire-affected areas and related fire-management activities. This is especially important given the ecological, environmental, social and cultural/historical significance of fire events in Mediterranean regions such as Greece. All in all, our study opens new perspectives in burnt area mapping, as to our knowledge few studies published so far compare pixel-based and GEOBIA-based ML algorithms in the context of burned area mapping.
Keywords: Machine Learning, Sentinel-2, Copernicus, Google Earth Engine, burned area
Deep learning for burn severity assessment with Sentinel-2 MSI data
Xikun Hu, Puzhao Zhang, Yifang Ban
Anthropogenic climate warming is leading to increasing forest fires. Multispectral optical data acquired by Sentinel-2 and Landsat have been widely used in burned area and burn severity mapping [1,2,3]. Present burn severity products (e.g., MTBS) are largely subjective and dependent on analyst interpretation: thresholds on the dNBR and/or RdNBR data range are used to discriminate between burn severity classes following empirical or semi-empirical approaches, and the analyst's experience with fire behavior and effects provides the confidence for selecting suitable thresholds in a given ecological setting. Deep learning, on the other hand, is able to extract high-level semantic information for mapping burn severity [1]. Therefore, the objective of this research is to automatically assess burn severity from Sentinel-2 MSI data with advanced deep-learning models trained on an annotated MTBS dataset.
First, we redesign an annotated wildfire burn severity dataset based on Landsat imagery from the open-access MTBS database from 2010 to 2019. Re-labeling and preprocessing the raw MTBS data facilitate the damage level classification and maintain the balance between categories. Some raw MTBS data exhibit poor scene quality due to dense cloud cover, ongoing active fires, terrain shadows and other obscurations, data gaps, old burned areas, and undesirable sun angles; the preprocessing therefore also excludes low-quality scenes that would hamper the training of deep learning models.
Then we explore input channel combinations from the multispectral bands for burn severity mapping and highlight the importance of ancillary damage severity indices (e.g., dNBR) in the estimation of burn severity levels. The pre-fire false-color composite (SWIR2, NIR, SWIR1 in RGB), the post-fire false-color composite (SWIR2, NIR, SWIR1 in RGB) and the dNBR composite (dNBR, dNBR2, RdNBR) perform best as the 9 input channels. This input-band comparison provides a comprehensive overview of deep learning in fire-related studies with medium-resolution remote sensing imagery, which distinguishes it from the traditional computer vision setting based mainly on very-high-resolution images.
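For reference, a minimal sketch of the index computations behind the dNBR composite is given below, using the standard NBR, NBR2, dNBR and RdNBR definitions; the band arrays and the small numerical safeguards are illustrative assumptions.

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    return (nir - swir2) / (nir + swir2 + 1e-6)

def nbr2(swir1, swir2):
    """NBR2 uses the two SWIR bands instead of NIR and SWIR2."""
    return (swir1 - swir2) / (swir1 + swir2 + 1e-6)

def severity_composite(pre, post):
    """Stack dNBR, dNBR2 and RdNBR as a 3-channel severity input.

    `pre` and `post` are dicts of reflectance arrays keyed by band name
    ('nir', 'swir1', 'swir2'); the stacking order mirrors the
    (dNBR, dNBR2, RdNBR) composite described above.
    """
    pre_nbr = nbr(pre['nir'], pre['swir2'])
    dnbr = pre_nbr - nbr(post['nir'], post['swir2'])
    dnbr2 = nbr2(pre['swir1'], pre['swir2']) - nbr2(post['swir1'], post['swir2'])
    rdnbr = dnbr / np.sqrt(np.maximum(np.abs(pre_nbr), 1e-3))  # relativized dNBR
    return np.stack([dnbr, dnbr2, rdnbr], axis=0)
```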
In this context, we also compare the performance of U-Net family deep learning models on pixel-wise burn severity classification, i.e. multi-class semantic segmentation. In particular, a revised lightweight U2Net model performs well among these architectures when trained with a hybrid loss function (cross-entropy loss plus Lovász softmax loss). On the test set it reaches an mIoU above 0.76, an FWIoU above 0.91 and a Kappa above 0.88.
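For clarity, the reported metrics can be derived from a per-class confusion matrix as in the following sketch, which uses the standard mIoU, frequency-weighted IoU and Cohen's kappa definitions (illustrative code, not the evaluation script used in this work).

```python
import numpy as np

def segmentation_metrics(conf):
    """mIoU, FWIoU and Cohen's kappa from a KxK confusion matrix.

    conf[i, j] counts pixels of true class i predicted as class j.
    """
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp              # predicted-but-wrong per class
    fn = conf.sum(axis=1) - tp              # missed per class
    iou = tp / np.maximum(tp + fp + fn, 1)

    freq = conf.sum(axis=1) / conf.sum()    # true class frequencies
    miou = iou.mean()
    fwiou = (freq * iou).sum()

    po = tp.sum() / conf.sum()              # observed agreement
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / conf.sum() ** 2
    kappa = (po - pe) / (1 - pe)
    return miou, fwiou, kappa
```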
Furthermore, we investigate the predictive performance on input images of arbitrary size, taken from representative scenes in several U.S. states, using the revised small U2Net architecture. Different environmental factors related to burn severity are analyzed based on a quantitative measure of performance.
More experiments are being conducted to assess the impact of the input training patch size (e.g., 256x256, 512x512) and of different loss functions on model training and validation. The final results will be analyzed based on the feature maps extracted by the deep learning models and presented at the Living Planet Symposium.
References
[1] Farasin, A., Colomba, L. and Garza, P., 2020. Double-Step U-net: A deep learning-based approach for the estimation of wildfire damage severity through sentinel-2 satellite data. Applied Sciences, 10(12), p.4332.
[2] Hu, X., Ban, Y. and Nascetti, A., 2021. Uni-temporal multispectral imagery for burned area mapping with deep learning. Remote Sensing, 13(8), p.1509.
[3] Knopp, L., Wieland, M., Rättich, M. and Martinis, S., 2020. A deep learning approach for burned area segmentation with Sentinel-2 data. Remote Sensing, 12(15), p.2422.
Over the last decades, a large number of geophysical and atmospheric variables have been reported to display abnormal variations before major seismic events, mostly on the basis of satellite remote sensing. In this context, Land Surface Temperature (LST) has been one of the most commonly investigated variables. LST can be plainly described as the ground temperature, which could be locally increased if strong tectonic stress led to heat dissipation and surface warming. Many studies have reported LST anomalies within a month before strong earthquakes at various locations. Similar research has also been carried out on other variables, most notably Brightness Temperature (BT) and Aerosol Optical Depth (AOD).
However, all these variables are affected by many factors, the most important ones being of a meteorological nature, which makes them very challenging to analyse. For this reason, several data processing methods aim at damping meteorological effects so that other phenomena can be studied. One of them is the wavelet transform, which yields the characteristics of the signal in the time-frequency domain and can be used to eliminate low-frequency seasonal components as well as high-frequency noise, leaving only potentially tectonic information.
In this study, we used such an approach, combined with machine learning techniques, to carry out a survey of 18 strong earthquakes worldwide. For each selected earthquake, we studied thirty days of data before the event date. We also used data corresponding to the same thirty days of the year, at the very same location, during a “normal” year (without a strong earthquake in the area). As a result, the final dataset comprises two study cases for each site, one positive and one negative, allowing us to compare both situations. We chose to investigate LST, BT and AOD through daily satellite products from the Land Processes Distributed Active Archive Center (LP DAAC) collection. LP DAAC is a component of NASA’s Earth Observing System Data and Information System (EOSDIS) and computes those products using measurements from the MODIS (Moderate Resolution Imaging Spectroradiometer) instruments aboard the Terra (1999) and Aqua (2002) satellites.
To evaluate the correlation between earthquakes and abnormal patterns in time series, we first had to build the latter. The value for each day was computed as the average of all “good” pixels (i.e. all pixels satisfying quality constraints such as the absence of significant cloud coverage) within a radius of R kilometres around the epicentre location. This operation was repeated for the thirty days before each earthquake in all 36 study cases (18 positive examples and 18 control examples) of the dataset. Then, we used machine learning to classify the time series into two classes: “a strong earthquake happened shortly after the time series” and “no strong earthquake happened after the time series”. This binary classification algorithm was composed of two distinct steps.
First, we used the continuous wavelet transform to compute the scalogram of the time series. A scalogram plots the squared modulus of the continuous wavelet transform as a time-frequency matrix, which makes it a great tool to analyse complex non-periodic signals with patterns occurring at different times and scales. In this regard, if pre-seismic patterns were to occur at a specific variation scale (faster than slow seasonal trends but slower than random day-to-day variations and noise), a continuous wavelet transform centred around this scale range could highlight the occurrence (or absence) of such behaviours. Intuitively, the scales at which climatic variations happen, and that we want to exclude from the wavelet transform, could vary from one climate to another. For example, land surface temperature may fluctuate at different speeds in arid and in tropical areas, making it impossible to find one single best scale range for the wavelet transform in both situations. To address this potential issue, we suggest adding the climate type of each studied site to the algorithm’s inputs; this information can easily be found through the Köppen climate classification. This widely used classification system divides climates into five main groups with a letter coding for each, namely A (tropical), B (dry), C (temperate), D (continental) and E (polar).
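A minimal sketch of this scalogram computation is given below, using PyWavelets with a Morlet wavelet; the scale range shown is only a placeholder, since the actual range is optimised per variable and per climate group.

```python
import numpy as np
import pywt

def scalogram(series, scales=np.arange(2, 16), wavelet='morl'):
    """Squared modulus of the continuous wavelet transform of a daily series.

    `series` is the 30-value time series built from the 'good' pixels around
    the epicentre; the scale range here is a placeholder meant to be
    optimised per variable and per Köppen climate group.
    """
    coeffs, _ = pywt.cwt(series, scales, wavelet)
    return np.abs(coeffs) ** 2   # scales x time matrix
```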
To take the Köppen classification system into account, we divided our dataset into four subsets, each containing all study cases from the same climate type. Only four subsets were needed, rather than five, because no selected earthquake happened in a polar area. Then, an optimisation algorithm was used to find the best scale range for each variable and each climate type. Once the scalograms had been computed, they were classified using a k-NN method that only used the relevant climate subset as its training set. Repeating this process for each variable (LST, BT and AOD) produced a three-dimensional vector filled with the predictions of the nearest-neighbour classifiers; this vector was then classified using supervised learning methods. This method achieved 97% classification accuracy.
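The per-climate nearest-neighbour step could be sketched as follows with scikit-learn, using flattened scalograms as features; the value of k and the flattening choice are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def climate_knn_predict(scalograms, labels, climates,
                        query_scalogram, query_climate, k=3):
    """Classify one scalogram using only training cases from the same Köppen group.

    `scalograms` is a list of (scales x time) arrays, `labels` the 0/1
    earthquake flags and `climates` the Köppen letters; k is an assumption.
    """
    mask = np.array([c == query_climate for c in climates])
    X = np.array([s.ravel() for s, m in zip(scalograms, mask) if m])
    y = np.array(labels)[mask]
    knn = KNeighborsClassifier(n_neighbors=min(k, len(y)))
    knn.fit(X, y)
    return knn.predict(query_scalogram.ravel()[None, :])[0]
```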
In light of those results, it would seem that land surface temperature, brightness temperature and aerosol optical depth present abnormal patterns in a 50 km radius area over the future epicentral location of upcoming strong earthquakes, up to 30 days before the event. Such patterns are generally located in a narrow frequency band, which allows them to be monitored with good accuracy.
VIIRS TIME-SERIES FOR WILDFIRE PROGRESSION MAPPING USING TRANSFORMER NETWORK
Yu Zhao, Yifang Ban, Division of Geoinformatics, KTH Royal Institute of Technology
Introduction
As one of the major natural disasters, wildfires have caused severe economic and environmental impacts. Thanks to the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite, active wildfires can be clearly detected with the mid-infrared band. Moreover, the frequent revisits of the S-NPP satellite also enable near real-time monitoring of wildfire progression using its near-infrared and short-wave infrared bands. By stacking the active fire detections, a fire progression map can be generated [1][2]. Existing active fire products based on MODIS [3] and VIIRS [4] allow fire progression masks to be generated by stacking the active fire pixels during the fire. However, this method is limited because active fire detection is discontinuous: between two image acquisitions, active fires may be missed, resulting in inaccurate detection of the fire progression. In this study, we aim to improve wildfire progression mapping by using the near-infrared and shortwave infrared bands to refine the burned area missed by the mid-infrared band.
To detect wildfire progression from VIIRS time series, a deep learning model based on the transformer network is proposed. The transformer is a state-of-the-art model in natural language processing [5]. It enables long-range attention, which makes the transformer encoder powerful for processing sequence data. With the adoption of the vision transformer [6], the transformer encoder further extends to image classification and image segmentation tasks. Compared to convolutional neural network (CNN) based methods, the advantage of the transformer is that the representation generated by the transformer encoder is contributed by every pixel in the image; this is called long-distance self-attention. In contrast, the representation generated by a CNN is limited to its receptive field, which is only a subset of the image. Owing to the long-range self-attention mechanism of the transformer model, a stronger representation can be generated, thus enabling finer segmentation results as the output.
VIIRS Data and Study areas
Compared to mid-resolution satellites like Sentinel-2 and Landsat-8, S-NPP offers a much more frequent revisit (twice daily). Moreover, compared to other weather satellite sensors such as MODIS onboard Terra/Aqua, VIIRS has higher spatial resolution in its thermal bands. Thus, VIIRS time series are considered well suited to near real-time active fire detection and wildfire progression monitoring. In this study, VIIRS images of 20 wildfires from the 2020 fire season in North America are collected to generate the training dataset. Two study areas, the Lytton Creek fire in British Columbia, Canada and the Dixie fire in California, United States, are used for evaluation. The Dixie fire, which burned 389,837 hectares, was the largest wildfire of 2021. The Lytton Creek fire, one of the most devastating fires in British Columbia, burned 83,671 hectares including the town of Lytton, resulting in significant economic impact. All the fire images are manually labeled for validation.
Methodology
The methodology overview is shown in Fig 1. Similar to the vision transformer, the proposed method divides the VIIRS images into image patches before feeding them into the transformer. Since the transformer does not record positional information, position embeddings are added to each projected image patch. Through the transformer encoder, the self-attention mechanism allows the embedding of each image patch to attend to all the other image patches. Finally, a decoder consisting of several CNN upsampling layers reconstructs the embeddings into the segmentation map of wildfire progression.
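A minimal PyTorch sketch of the patch projection and position embedding step is given below; patch size, channel count and embedding width are illustrative values rather than the configuration used in this study.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split a VIIRS tile into patches, project them and add position embeddings.

    Patch size, channel count and embedding width are illustrative values,
    not the configuration used in the study.
    """
    def __init__(self, in_channels=3, patch_size=16, embed_dim=256, img_size=256):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.proj(x)                       # (B, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)       # (B, N, D) patch tokens
        return x + self.pos_embed              # add learned position embeddings

# The tokens can then be passed to a standard encoder, e.g.
# nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=256, nhead=8,
#                                                  batch_first=True), num_layers=6)
```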
Expected Results
The proposed transformer network is expected to outperform the CNN baseline in evaluation metrics such as the F1-score and IoU score. The results should show the advantages of using the self-attention mechanism in image segmentation tasks on satellite images. Moreover, a fire progression product can be further generated with the same methodology by inferring on new wildfires, enabling swift monitoring of wildfire progression.
References
[1] Crowley, M.A. et al, “Mapping and comparing wildfire progressions using freely available, multi-source satellite data,” 2020.
[2] Crowley, M.A. et al, “Multi-sensor, multi-scale, Bayesian data synthesis for mapping within-year wildfire progression,” Remote Sensing Letters, vol. 10, pp. 302–311, 2018.
[3] Giglio, L., Schroeder, W. and Justice, C., “The Collection 6 MODIS active fire detection algorithm and fire products,” Remote Sensing of Environment, 2016.
[4] Schroeder, W. et al, “The new VIIRS 375 m active fire detection data product: Algorithm description and initial assessment,” Remote Sensing of Environment, 2014.
[5]Vaswani, A. et al, “Attention is all you need,” ArXiv, vol. abs/1706.03762, 2017.
[6]Dosovitskiy, A. et al, “An image is worth 16x16 words: Transformers for image recognition at scale,” ArXiv, vol. abs/2010.11929, 2021.
Floods are the largest natural hazard in terms of loss of life and economic damage, regardless of their cause. In the United States alone, floods cause billions of dollars in property damage, with estimates exceeding $78 billion due to fluvial floods in any given year. Effective and immediate disaster response management can reduce the impact of floods, but it requires near real-time information on flood occurrence. To best allocate limited resources and prioritize response actions during hazardous floods, emergency responders need near real-time information on the flood-water extent, typically derived from Earth Observation (EO) data. Satellite remote sensing offers the only means of monitoring and quantifying flooding extent dynamics. The availability of public-domain, systematically acquired satellite data archives, together with improvements in algorithms and available computing power, has led to huge leaps in recent years in mapping surface water dynamics and flooding. A large proportion of prior work has relied on optical data, including MODIS and Landsat, thus trading high temporal resolution with daily maps in the case of MODIS against higher spatial but coarser temporal resolution in the case of Landsat. However, the recent availability of NASA’s Harmonized Landsat/Sentinel-2 (HLS, https://hls.gsfc.nasa.gov/) Surface Reflectance Product, a seamless dataset combining Landsat 8 and Sentinel-2 observations, is promising for detecting floods at Landsat resolution and a 3-day interval. New work in a dryland basin that experiences ephemeral floods showed that large short-lived flooding events were detected only by HLS (the combined dataset) and were entirely missed by Landsat 8 alone. Here we selected major flood events globally, labeled them with collocated HLS data, and applied machine learning models for flood detection. The most important features for flood detection included the SWIR bands, the automated water extraction indices, and vegetation indices. Future work will integrate Sentinel-1 radar data collocated with the HLS data for improved detection of floods during cloudy conditions. This work also highlights the importance of existing harmonized data products such as HLS.
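For illustration, the water-index features mentioned above could be computed from the HLS surface reflectance bands as in the following sketch; the AWEI coefficients follow the commonly used shadow and non-shadow formulations and should be read as assumptions rather than the exact feature set of this work.

```python
import numpy as np

def water_index_features(blue, green, nir, swir1, swir2):
    """Stack common water-index features from HLS surface reflectance bands.

    The AWEI coefficients follow the widely used non-shadow / shadow
    formulations; treat them as an assumption rather than the exact
    features used in this work.
    """
    ndwi = (green - nir) / (green + nir + 1e-6)
    mndwi = (green - swir1) / (green + swir1 + 1e-6)
    awei_nsh = 4 * (green - swir1) - (0.25 * nir + 2.75 * swir2)
    awei_sh = blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2
    return np.stack([ndwi, mndwi, awei_nsh, awei_sh], axis=0)
```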
1 Introduction
Climate change has increased the vulnerability of forest ecosystems, as it is a major contributor to the rise of forest fires and to tree species’ inability to adapt to the intensity and frequency of summer droughts [1]. Greece faced an unprecedented situation during the summer of 2021, when forest fires caused irreparable environmental and economic losses in many areas. Fire risk assessments are necessary to reduce the impact of natural disasters and support decision making (protective measures, mitigation actions, emergency evacuation procedures, etc.). A plethora of fire risk assessment methodologies can be found in the scientific literature. The Analytic Hierarchy Process (AHP) stands out among them and is usually combined with Geographic Information Systems (GIS) [2, 3]. Several studies have also used satellite imagery and GIS [3], fuzzy approaches [4], artificial neural networks [5] and LIDAR data [6]. The methodology presented in this work is an integrated approach for fire risk assessment and management planning in peri-urban areas prone to forest fires, based on machine learning techniques, geoinformatics and field observations. It was developed under the umbrella of the national research project “Seismic, Fire and Flood Risk Assessment in Attica Region, Greece”, led and coordinated by the National Observatory of Athens (NOA) and specifically the Centre of EO Research and Satellite Remote Sensing – BEYOND of the Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS). The approach is a fusion of the research team’s expert knowledge (EMSN041, EMSN059, FireHub, [7, 8]) and of an extensive literature review [9, 2]. The technical analysis and processing steps of the methodological framework are presented in the following sections.
2 Suburban fire events in Greece
Peri-urban zones, mainly due to uncontrolled urban sprawl and lack of proper planning, are more vulnerable to wildfires, leaving people’s lives and properties, as well as the surrounding natural environment and ecosystem, exposed to increased disaster risk. The Attica region of Greece, where the country’s capital is located together with almost half of the country’s population and critical infrastructure, is characterized by a plethora of peri-urban settlements surrounded by intense morphological relief with steep slopes, with pine forests as the dominant land cover type. Moreover, the region’s rich natural landscape results in the intrusion of dense forest patches inside a significant number of settlements. The most severe peri-urban fire in Attica, and in Greece in general, was recorded on the 23rd of July 2018 at Mati, where 103 people lost their lives, 164 were injured and thousands of properties and trees were destroyed and burned.
3 Methodology
The methodology showcases an integrated approach (Figure 1) for fire risk assessment and management planning in peri-urban areas and is applied to three coastal settlements in the Markopoulo Municipality of the Attica Region. In particular, the fire risk assessment combines i) fire hazard scenarios, ii) vulnerability and iii) exposure assessment. The fire hazard scenarios refer to spatiotemporal simulations of fire spread, generated by applying the FlamMap model. The most probable ignition points for possible fire outbreaks were derived from BEYOND’s daily fire risk forecasting machine learning model, modified for seasonal forecasts [7, 8]. The vulnerability layer is produced by coupling population (density and age) and building characteristics based on 2011 census data (provided by the Hellenic Statistical Authority). More precisely, the census data are used to produce the population density and age layers, two factors that are widely used as attributes of social vulnerability [10], as indicators of fire occurrence, as elements of fire evacuation simulations and so on. In addition, the building characteristics are of vital importance in identifying areas vulnerable to fire, since building materials (e.g., wood) strongly affect the flammability of buildings and thus the fire spread. In this research, the exposure layer refers to the land value layer (€/m2) as an indicator of the possible economic effects in the area in case of a fire event. Finally, the risk assessment is based on the combination of all the aforementioned layers (vulnerability, exposure, hazard). Thereafter, the high-risk areas highlighted in the fire risk map are visited in situ, and through this process the maps are validated and/or updated. At the same time, throughout the field campaign, important areas and critical points (high-risk buildings, traffic congestion areas, population concentration areas, etc.) are recorded and included in the mitigation suggestions and management planning. This complete process (from the office to the field and back) forms the basis for the synthesis and recommendation of effective and operational mitigation actions, protective measures and coping/management strategies.
4 Results
Several fire risk maps were produced for the studied area by integrating the vulnerability, exposure and hazard layers, as described in the previous section. The following map, Figure 2, depicts the assessed fire risk at building block level, produced by the integration of the socioeconomic characteristics of the area, the buildings’ fire resistance and the spatiotemporal fire spread under extreme weather scenarios. This map also highlights important points of interest (schools, hotels, etc.), in terms of their location and the corresponding risk class. Furthermore, another map was produced to depict the risk to critical infrastructure and services in the area (road network, fire brigade, etc.). Similar maps were generated for all the possible ignition areas.
Building upon the aforementioned risk maps, field research work was conducted in order to further examine the critical areas, validate the risk maps and develop useful guidelines for fire risk mitigation measures and emergency response plans before, during and after a fire outbreak. The most common and significant issues identified in the area were the poor quality of the road network along with numerous dead-ends and steep street slopes, the absence of fire-safe zones around constructions, the use of flammable materials in buildings (wood-frame construction) and the absence of installed fire suppression mechanisms (i.e., fire alarms, heat detectors, fire hydrants) in the area. Based on all the above, as well as the rest of the findings recorded in the field, specific protective measures and mitigation actions were recommended, as well as management plans concerning evacuation, refuge areas, etc. (Figure 3). The study conducted in the area, the produced maps, the proposed management guidelines and the developed geoinformation system are important operational tools to facilitate the management process and decision making. The analysis was organized at building block level, which enhances the planning of the required interventions in the urban environment and the preparation of crisis management strategies.
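As a simple illustration of the layer combination described above, the following sketch derives an ordinal risk class from co-registered hazard, vulnerability and exposure rasters; the equal weighting and the five-class binning are assumptions, not the project’s actual scheme.

```python
import numpy as np

def fire_risk(hazard, vulnerability, exposure, weights=(1/3, 1/3, 1/3), n_classes=5):
    """Combine normalised hazard, vulnerability and exposure layers into risk classes.

    Inputs are co-registered 2-D arrays; equal weights and five risk classes
    are assumptions for illustration only.
    """
    def norm(a):
        return (a - np.nanmin(a)) / (np.nanmax(a) - np.nanmin(a) + 1e-9)

    score = (weights[0] * norm(hazard)
             + weights[1] * norm(vulnerability)
             + weights[2] * norm(exposure))
    # Bin the continuous score into ordinal risk classes (1 = low, n_classes = high)
    return np.digitize(score, np.linspace(0, 1, n_classes + 1)[1:-1]) + 1
```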
References
[1] Maria Prodromou, Anastasia Yfantidou, Christos Theocharidis, Milto Miltiadou, and Chris Danezis. Analysis of radar and thermal satellite data time-series for understanding the long-term impact of land surface temperature changes on forests. March 2020.
[2] Hassan Abedi Gheshlaghi, Bakhtiar Feizizadeh, and Thomas Blaschke. GIS-based forest fire risk mapping using the analytical network process and fuzzy logic. Journal of Environmental Planning and Management, 63(3):481–499, 2019.
[3] Shruti Kanga, Laxmi Sharma, Prem Pandey, and Mahendra Nathawat. GIS modelling approach for forest fire risk assessment and management. International Journal of Advancement in Remote Sensing, GIS and Geography, 2:30–44, 01 2014.
[4] Ceren Erdin and Mehmet Çağlar. Rural fire risk assessment in GIS environment using fuzzy logic and the AHP approaches. Polish Journal of Environmental Studies, 30(6):4971–4984, 2021.
[5] Y. Jafari Goldarag, Ali Mohammadzadeh, and A. S. Ardakani. Fire risk assessment using neural network and logistic regression. Journal of the Indian Society of Remote Sensing, 44(6):885–894, February 2016.
[6] José-Ramón González-Olabarria, Francisco Rodríguez, Alfredo Fernández-Landa, and Blas Mola-Yudego. Mapping fire risk in the Model Forest of Urbión (Spain) based on airborne LiDAR measurements. Forest Ecology and Management, 282:149–156, 2012.
[7] A. Apostolakis, S. Girtsou, C. Kontoes, I. Papoutsis, and M. Tsoutsos. Implementation of a Random Forest Classifier to Examine Wildfire Predictive Modelling in Greece Using Diachronically Collected Fire Occurrence and Fire Mapping Data. Lecture Notes in Computer Science, 12573, 2021.
[8] Stella Girtsou, Alexis Apostolakis, Giorgos Giannopoulos, and Charalampos Kontoes. A machine learning methodology for next day wildfire prediction. In IGARSS, 2021.
[9] Leyla Darvishi, Mehrdad Ghodskhah Daryaei, and Abouzar Heidari Safari Kouchi. Comparison of statistical modeling and ahp methods in fire risk assessment in oak forests of iran. Forest research, 9:1–7, 2020.
[10] Palaiologos Palaiologou, Alan A. Ager, Max Nielsen-Pincus, Cody R. Evers, and Michelle A. Day. Social vulnerability to large wildfires in the western USA. Landscape and Urban Planning, 189:99–116, 2019.
The Coordination Group for Meteorological Satellites (CGMS) was formalised on 19 September 1972 by Japan, the USA, Europe and the World Meteorological Organization (WMO), in order to seek common ground on geostationary meteorological satellite programmes. Since then, the CGMS has come a long way and now also covers operational and R&D satellite programmes in low-Earth orbit, which might in future be complemented by satellites in highly elliptical orbits. This coordination ensures that users can easily receive, retrieve and use the data and products for improved forecasting and other applications.
Objective
The CGMS goal is to globally coordinate operational and R&D satellite systems for meteorology, oceanography, climate and space weather. This includes the protection of in-orbit assets, contingency planning, improvement of data quality, support to users, facilitation of shared data access, and development of the use of satellite products in key application areas.
A bit of narrative
Since the 1950s, a multitude of satellites has been launched to collect observations of the Earth system. Today, the ability to predict high-impact weather events or to reconstruct the past weather and climate of the Earth system depends critically on these satellite-based observations. While satellites have revolutionized how we view, understand and predict planet Earth, and have enabled a flourishing downstream market of applications, new scientific, technological and market breakthroughs will further transform the Earth observation landscape.
In recent decades, international coordination across operational meteorological agencies has guaranteed the operational exchange and exploitation of a large amount of satellite data. The Coordination Group for Meteorological Satellites facilitates the flow of operational weather and climate data from geostationary and polar-orbiting meteorological satellite systems. The changing technological landscape, with the miniaturization of sensors together with advances in small satellites, makes the space observing capacity much more flexible and open to market solutions, and offers new dimensions to global observing needs.
The CGMS coordinates from an end-to-end perspective, through development of multilateral cooperation across all weather satellite operators and with the user community, in particular the WMO and IOC-UNESCO, together with other entities. Additionally, the CGMS has endorsed the space-based observing system component of the Vision for WIGOS (WMO Integrated Global Observing System) in 2040.
In 2010, CGMS and the Committee on Earth Observation Satellites (CEOS), together with the WMO, established the Joint CEOS/CGMS Working Group on Climate. This working group coordinates activities between the world’s major space agencies in the area of climate monitoring, with the overarching goal of improving the systematic availability of Climate Data Records through the coordinated implementation and further development of a global architecture for climate change monitoring from space.
The CGMS is also behind a number of other initiatives, including the Global Space-based Inter-Calibration System (GSICS), the Sustained, Coordinated Processing of Environmental Satellite Data for Climate Monitoring (SCOPE-CM), and the Virtual Laboratory (VLab) related to training on the use of satellite data.
EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites) joined the CGMS in 1987 and has since been its permanent Secretariat. The CGMS Secretariat represents CGMS members in international bodies such as the WMO Congress and Executive Council, the Group on Earth Observation (GEO), and the Space Frequency Coordination Group (SFCG).
A fast changing landscape
There is no doubt that we live in an era of data and of a seamless service approach, where the transition from one service to the next involves zero or minimal change for the users. Basically, you would like to have a seamless information stream in your mobile app ranging from today’s to next month’s forecast, from your street to your next flight route, and from wind or temperature to air-quality information in your garden. Such a seamless user experience requires much larger volumes of satellite data, with high velocity and variety, and much stronger integration between different sources of observations and modeling tools.
Today, evolution in data assimilation techniques enables the exploitation and integration of much larger volumes of satellite data. All-sky radiance assimilation, coupled data assimilation and the sensitivity to the assimilation window are, among others, good examples of better integration of satellite data into Earth prediction systems. On shorter time scales, nowcasting applications are developed to obtain the best possible forecasts for the coming minutes up to the next few hours. These are based on spatially and temporally highly resolved satellite and radar observations, which are increasingly combined using machine learning methods or integrated with very high resolution modeling products (e.g., rapid update cycles of high-resolution regional models).
As for its international role, the Coordination Group for Meteorological Satellites fosters the analysis of the status of meteorological satellite systems and their future evolution, as well as applications of satellite data to key sectors (e.g., agrometeorology, oceanography, greenhouse gas monitoring).
CGMS plays an important role in integrating user needs at the international level and in exploring operational solutions in a fast-changing landscape. Key challenges are under discussion in operational satellite agencies (e.g. NOAA, EUMETSAT, Asian agencies), such as future constellation concepts and the exploitation of the upcoming meteorological satellites (e.g. EUMETSAT’s new geostationary and polar systems, and new GHG satellite missions in Europe, the USA and Asia).
We would like to provide a few insights into key guiding questions for future meteorological satellites and applications:
o Where are the gaps in terms of new observational needs?
o Where are the synergies? Which new products, not considered so far, can be derived from existing or planned missions?
o Where are the breakthroughs? Which scientific/technological breakthroughs will significantly improve the way we measure Earth system variables and enhance our capacity to derive new products? Which scientific/technological breakthroughs will significantly change the way we develop products and disseminate them?