The national lockdowns during the COVID-19 pandemic gave the scientific community the opportunity to evaluate the site-specific role of anthropogenic emissions in air quality, after the restrictions drastically reduced the atmospheric pollution burden. Studies were carried out around the world, and a general improvement in air quality was shown, especially in polluted areas such as urban sites. The datasets were generally composed of surface concentrations from in-situ monitoring stations and of columnar densities from satellite and ground-based remote sensing sensors.
In the present work, atmospheric NO2 observations during the Italian lockdown of March-April 2020 are presented and discussed, with the same period of 2019 used as a robust baseline for the analysis of air quality changes (Bassani et al., 2021). In order to cover various atmospheric pollution scenarios, the study area covered the city of Rome, a representative urban site, and its surroundings at the northern and north-eastern edge of the Lazio region, so as to include rural sites characterized by lower NO2 concentrations.
The spatio-temporal variation of NO2 was analysed using the tropospheric NO2 vertical column density (VCD) provided by the TROPOMI (TROPOspheric Monitoring Instrument) sensor on board the Sentinel-5 Precursor (S5P) satellite (Veefkind et al., 2012). In the framework of the Copernicus programme, S5P was launched in October 2017 by the European Space Agency (ESA) to monitor the density of several compounds (e.g., NO2, CO, O3, CH4, CH2O) with unprecedented spatial resolution (ground pixel at nadir 5.5 × 3.5 km2). In particular, ESA reported a reduction of the TROPOMI VCD in the major European cities with respect to the monthly average of March 2019.
In this study, the evaluation of air quality from surface data is completed by considering not only NO2 but also NO, O3, and CO at all types of monitoring stations in Rome and the surrounding areas, in order to compare diurnal, monthly, and yearly changes and their effects on air pollution at surface level.
The NO2 variation was evaluated for the pixels containing one or more air quality monitoring stations, to explore the capability of monitoring from space in relation to surface measurements. The TROPOMI VCD showed an NO2 reduction larger in urban (−43%) than in rural sites (−17%), consistent with the concurrent surface measurements obtained by averaging all traffic and urban background stations (−44%) and all rural background stations (−20%). The TROPOMI VCD agreed well with the surface concentration in Rome (R=0.64 in 2019, R=0.77 in 2020) and in rural sites in 2019 (R=0.71), whereas the weak correlation in rural sites in 2020 (R=0.20) was probably due to the very low NO2 levels, which fell further during the lockdown.
Finally, the satellite NO2 VCD was compared to the concurrent tropospheric column density (VCD_surf) obtained by applying the empirical model of Dieudonné et al. (2013) to the surface concentrations, taking into account the vertical NO2 gradient in the atmospheric boundary layer (ABL).
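To illustrate the surface-to-column conversion, the following minimal sketch integrates an assumed exponential NO2 profile over the ABL. The scale height, unit conversions and function names are illustrative assumptions, not the empirical coefficients actually fitted by Dieudonné et al. (2013).

```python
import numpy as np

N_A = 6.022e23    # Avogadro's number [molec/mol]
M_NO2 = 46.0055   # molar mass of NO2 [g/mol]

def vcd_surf(c_no2_ugm3, h_abl_m, h_scale_m=500.0):
    """Convert a surface NO2 concentration [ug/m3] to a tropospheric
    column [molec/cm2], assuming the NO2 number density decays
    exponentially with height inside the ABL (illustrative profile;
    Dieudonne et al. (2013) fit an empirical slope instead).
    """
    # surface number density [molec/m3]
    n_surf = c_no2_ugm3 * 1e-6 / M_NO2 * N_A
    # integral of n_surf * exp(-z / H) from 0 to h_abl [molec/m2]
    col_m2 = n_surf * h_scale_m * (1.0 - np.exp(-h_abl_m / h_scale_m))
    return col_m2 * 1e-4  # [molec/cm2]

# example: 30 ug/m3 under a 1 km deep boundary layer
print(f"{vcd_surf(30.0, 1000.0):.2e} molec/cm2")  # ~1.7e16, a typical urban value
```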
Generally, VCD and VCD_surf showed the same temporal behaviour before and during the lockdown, with the expected variability of the satellite NO2 due to the atmospheric inhomogeneity of the pixels considered. It should be underlined that at high concentration levels TROPOMI seemed to underestimate the NO2 data, while an overestimation was exhibited at very low concentration levels, as in the rural sites during the lockdown, where the surface concentrations approached the instrumental detection limit.
The results are part of the ongoing effort to define a site-specific air pollution monitoring system composed of satellite atmospheric products and surface measurements, suitable for urban areas as well as the surrounding rural environments where the local contribution to pollution is negligible.
Bibliography
- Veefkind JP, Aben EAA, McMullan K, Förster H, de Vries J, Otter G, Claas J, Eskes HJ, de Haan JF, Kleipool Q, van Weele M, Hasekamp O, Hoogeveen R, Landgraf J, Snel R, Tol PJJ, Ingmann P, Voors R, Kruizinga B, Vink R, Visser H, Levelt PF (2012) TROPOMI on the ESA Sentinel-5 Precursor: a GMES mission for global observations of the atmospheric composition for climate, air quality and ozone layer applications. Remote Sens Environ 120:70–83. https://doi.org/10.1016/j.rse.2011.09.027
- Dieudonné E, Ravetta F, Pelon J, Goutail F, Pommereau JP (2013) Linking NO2 surface concentration and integrated content in the urban developed atmospheric boundary layer. Geophys Res Lett 40:1247–1251. https://doi.org/10.1002/grl.50242
- Bassani C, Vichi F, Esposito G, Montagnoli M, Giusto M, Ianniello A (2021) Nitrogen dioxide reductions from satellite and surface observations during COVID-19 mitigation in Rome (Italy). Environ Sci Pollut Res 28:22981–23004. https://doi.org/10.1007/s11356-020-12141-9
Nowadays, dozens of low-cost air quality sensors are commercially available, with prices ranging from a few to several hundred euros. At the same time, recent scientific studies have provided detailed, independent evaluations of the performance of such sensors, showing very promising results [1].
In the framework of the BAQUNIN project (Boundary-layer Air Quality-analysis Using Network of Instruments) [2], funded by ESA, a set of low-cost off-the-shelf sensors has been selected, tested and integrated to build a dedicated air quality sensor unit. The main goal of this exercise was to create a portable and cost-effective station able to qualitatively measure the principal pollutants in urban environments, with the additional possibility of performing mobile measurements.
With this in mind, the measurements collected during one year of use were compared with high-accuracy in situ instrumentation (e.g., Pandora, sun photometer) in order to perform an accurate and rigorous inter-comparison. In detail, the following air components and weather parameters have been monitored and cross-compared: NO2, CO, CO2, PM2.5, PM10, O3, air temperature, air humidity, and atmospheric pressure. In addition, given the very limited portion of air volume measurable with standard in-situ sensors, the integration of a GPS receiver provides accurate information on the geographic position of each measurement during mobile campaigns.
Furthermore, the affordable cost of these sensors makes it possible to deploy several units and assess air pollution at a finer spatial resolution than would be possible with traditional monitoring systems. This solution is particularly interesting for remote areas or developing countries that lack air quality monitoring networks and the budget needed to acquire conventional instruments.
In this work we present the preliminary results of one year of data collected by our low-cost air quality station installed in the centre of Rome, showing how the information can be exploited after a proper calibration process.
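As a minimal sketch of what such a calibration process might look like, the snippet below fits a linear gain/offset against a co-located reference analyser. The file and column names are hypothetical; real low-cost sensor calibrations often add temperature and humidity terms, which are omitted here for brevity.

```python
import pandas as pd
import numpy as np

# hypothetical file with time-matched hourly means of the low-cost
# sensor raw output and the reference analyser (column names assumed)
df = pd.read_csv("colocation.csv", parse_dates=["time"]).dropna()

# ordinary least-squares line: reference = gain * raw + offset
gain, offset = np.polyfit(df["raw_no2"], df["ref_no2"], deg=1)
df["no2_cal"] = gain * df["raw_no2"] + offset

# quantify how well the calibrated sensor tracks the reference
rmse = np.sqrt(np.mean((df["no2_cal"] - df["ref_no2"]) ** 2))
r = df["no2_cal"].corr(df["ref_no2"])
print(f"gain={gain:.3f}, offset={offset:.2f}, RMSE={rmse:.2f} ug/m3, R={r:.2f}")
```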
References
[1] Federico Karagulian, Maurizio Barbiere, Alexander Kotsev et al., “Review of the Performance of Low-Cost Sensors for Air Quality Monitoring”, Atmosphere, 2019. DOI: 10.3390/atmos10090506
[2] Anna Maria Iannarelli, Annalisa Di Bernardino, Stefano Casadio, Cristiana Bassani, Marco Cacciani, Monica Campanelli, Giampietro Casasanta, Enrico Cadau, Henri Diémoz, Gabriele Mevi, Anna Maria Siani, Massimo Cardaci, Angelika Dehn, Philippe Goryl, “The Boundary-layer Air Quality-analysis Using Network of INstruments (BAQUNIN) supersite for Atmospheric Research and Satellite Validation over Rome area”, Bulletin of the American Meteorological Society (accepted)
The hydroxyl radical (OH) plays a vital role in air pollution chemistry, determining the lifetime of air pollutants and some greenhouse gases. Because of its short lifetime, it is difficult to measure directly. Therefore, estimates of OH rely strongly on chemistry transport modelling, with uncertainties that may well exceed 50%. In this study, TROPOMI (TROPOspheric Monitoring Instrument) inferred NO2/CO ratios are used in combination with the Weather Research and Forecasting (WRF) model to estimate the OH concentration in urban plumes over Riyadh. Our analysis focuses on summer (June 2018 to October 2018) and winter (November 2018 to March 2019) to estimate seasonal changes in OH.
In this study, WRF is set up using three domains at spatial resolutions of 27 km, 9 km and 3 km. Instead of the chemistry encoded in WRF, a passive tracer transport scheme is used to speed up the model simulation. Our WRF setup uses Copernicus Atmosphere Monitoring Service (CAMS) OH and Emission Database for Global Atmospheric Research (EDGAR v4.3.2) NOx and CO emissions over Riyadh as input data.
Generally, the WRF-simulated XNO2 and XCO plumes are in good agreement with the TROPOMI data. WRF overestimates XNO2 by 25% in summer and by 40% to 50% in winter compared to TROPOMI. WRF XCO enhancements in plumes from Riyadh are higher than TROPOMI by 5% to 10%. The difference between WRF and TROPOMI provides valuable information, allowing us to independently address the uncertainties in CAMS OH and inventory-estimated emissions. An iterative least-squares optimization method is used to optimize WRF against TROPOMI in two ways: (1) an NO2/CO ratio optimization, and (2) a “component-wise optimization” optimizing XNO2 and XCO separately. In summer, the ratio and component-wise XNO2 optimizations increase CAMS OH by 36.4 ± 4.0% and 32.7 ± 3.7%, respectively. The good agreement between the methods (< 10%) confirms the robustness of the approach. In winter, CAMS OH is increased by 47.5 ± 5.3%. We infer a seasonal cycle in OH over Riyadh that is higher in summer and lower in winter. This result is in agreement with the application of the Exponentially Modified Gaussian (EMG) function fit method of Beirle et al. (2011) to TROPOMI NO2. Overall, our results confirm that OH concentrations in urban plumes can be derived reliably using TROPOMI.
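The abstract does not detail the optimization scheme, so the following is only a conceptual sketch of a one-parameter iterative least-squares fit of an OH scaling factor. The exponential plume-decay sensitivity of XNO2 to scaled OH is an assumption made for illustration, not the WRF/TROPOMI machinery actually used.

```python
import numpy as np

def optimize_oh_scale(xno2_obs, xno2_mod, age_h, tau_h=4.0, n_iter=10):
    """Gauss-Newton fit of a single OH scaling factor f, assuming the
    modelled plume XNO2 responds to scaled OH as
        m(f) = xno2_mod * exp(-(f - 1) * age_h / tau_h),
    i.e. an exponential-decay sensitivity (illustrative only)."""
    f = 1.0
    for _ in range(n_iter):
        m = xno2_mod * np.exp(-(f - 1.0) * age_h / tau_h)
        jac = -m * age_h / tau_h           # dm/df
        r = xno2_obs - m                   # residuals
        f += (jac @ r) / (jac @ jac)       # Gauss-Newton update
    return f

# hypothetical plume samples: observed and modelled columns, plume age
obs = np.array([2.1, 1.6, 1.2, 0.9])   # [1e15 molec/cm2]
mod = np.array([2.6, 2.1, 1.7, 1.4])   # model overestimates, as in winter
age = np.array([1.0, 2.0, 3.0, 4.0])   # hours downwind
print(f"OH scaling factor: {optimize_oh_scale(obs, mod, age):.2f}")  # > 1
```

A factor above one corresponds to increasing the prior OH, the direction of the corrections reported above.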
Air quality monitoring is moving towards a combined use of surface measurements and satellite and ground-based remote sensing data, aiming at an improved spatial and temporal resolution of atmospheric nitrogen dioxide (NO2) datasets.
Surface concentrations and ground-measured columnar densities supply a complete site-specific characterization, including the daily pattern of the pollutant. On the other hand, the satellite capability to map the pollutant columnar density over an extensive area allows local emission sources to be identified and their surroundings analysed, even where monitoring stations are lacking.
Among the currently flying Copernicus Sentinel missions, the TROPOspheric Monitoring Instrument (TROPOMI) on board the Sentinel-5 Precursor satellite provides nearly global coverage in one day with the unprecedented (nadir) spatial resolution of 3.5 × 5.5 km2.
In a previous study (Bassani et al., 2021), the tropospheric NO2 vertical column density (VCD_tropo) and the NO2 surface concentration (c_no2) available from the Regional Agency for the Protection of the Environment of the Lazio Region (ARPA) were combined to evaluate the decrease of NO2 during the Italian COVID-19 lockdown at the locations of the monitoring stations. The correlations between surface concentration and tropospheric density were significant at both the urban and rural sites where monitoring stations are located, except for the very low values of c_no2 and VCD_tropo that occurred during the lockdown at the rural sites.
In this contribution, we describe the results obtained through the analysis of VCD_tropo over the entire Lazio region (central Italy), aiming at evaluating the impact of the pollution of Rome, considered the main emission source and heavily affected by anthropogenic pressure, on the surrounding coastal, rural and mountainous areas.
A further improvement of regional air quality monitoring can be obtained by using chemical and physical data in synergy, to evaluate the spatial and temporal variation of the pollutant over a large area. To this aim, an added value of this study is the use of data available from three observational sites belonging to the BAQUNIN (Boundary-layer Air Quality-analysis Using Network of Instruments) super-site, equipped with ground-based remote sensing instruments that validate satellite products and support atmospheric research studies. The three stations are located along the Tiber valley at urban (APL, Rome), semi-rural (ISAC-CNR, Rome) and rural (IIA-CNR, Montelibretti, Rome) sites (Iannarelli et al., 2021).
BAQUNIN is one of the first observatories in the world to operate several passive and active ground-based instruments, installed at multiple measuring locations and managed by different research institutions, in a highly polluted urban environment not far from the Tyrrhenian coast.
The super-site has been promoted by the European Space Agency to establish an experimental research infrastructure for the validation of present and future satellite atmospheric products and the in-depth investigation of the planetary and urban boundary layer. The datasets are available through international networks and directly at the website https://www.baqunin.eu.
In particular, a Pandora spectrometer is installed in each of the three sites operating within the Pandonia Global Network (PGN, https://www.pandonia-global-network.org/), a worldwide ground-based remote sensing network for air pollution monitoring and validation of the satellite trace gases products.
The APL site is the main measuring headquarters and is equipped with ground-based instruments dedicated to monitoring the main trace gases and aerosols along the atmospheric column and at the surface. In addition to Pandora, a Brewer spectrophotometer has been operating at the APL site since 1992. Concerning nitrogen dioxide, the co-located sensors provide an opportunity to compare column amounts from independent instruments, as discussed in Diémoz et al. (2021).
Finally, the combination of ground-based and satellite products with surface measurements can also provide tools for air quality analysis and monitoring, as well as for local environmental policies on air pollution control.
Bibliography
- Bassani et al., 2021, Nitrogen dioxide reductions from satellite and surface observations during COVID-19 mitigation in Rome (Italy), Environmental Science and Pollution Research 28:22981–23004, https://doi.org/10.1007/s11356-020-12141-9
- Iannarelli et al., 2021, “The Boundary-layer Air Quality-analysis Using Network of INstruments (BAQUNIN) supersite for Atmospheric Research and Satellite Validation over Rome area”, Bulletin of the American Meteorological Society, accepted
- Diémoz, H., Siani, A. M., Casadio, S., Iannarelli, A. M., Casale, G. R., Savastiouk, V., Cede, A., Tiefengraber, M., and Müller, M.: Advanced NO2 retrieval technique for the Brewer spectrophotometer applied to the 20-year record in Rome, Italy, Earth Syst. Sci. Data, 13, 4929–4950, https://doi.org/10.5194/essd-13-4929-2021, 2021
A long-term tropospheric ozone time series has been generated for the tropical band (20°S to 20°N) based on the convective cloud differential (CCD) algorithm. Tropical tropospheric ozone columns were retrieved from several European sensors, starting with observations by GOME in 1995 and including data from SCIAMACHY, OMI, GOME-2A and GOME-2B. The series has now been extended by DLR with data from GOME-2C and TROPOMI and encompasses 25 years. The tropospheric ozone retrieval for all data sets is based on the total columns retrieved with the GODFIT algorithm and the associated cloud products.
However, there are some differences between the tropospheric columns from the different sensors which have to be corrected for. For the CCD time series, we used SCIAMACHY data as the reference and fitted an offset and a trend correction to the data of the other sensors. We estimated the trend from the long-term time series. For the tropics, an overall trend of +0.7 DU/decade was found in the data set until 2019, varying locally between −0.5 and 1.8 DU/decade.
The second data record combines total ozone columns from TROPOMI with BASCOE stratospheric ozone profiles. The BASCOE stratospheric ozone data are constrained by assimilated Aura MLS observations and are provided at 3-hour time resolution in near-real time (NRT). We used the BASCOE NRT data set to calculate the stratospheric ozone column for every day from April 2018 to December 2020 and subtracted it from the respective NRT total columns observed by TROPOMI.
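A minimal sketch of this residual step is shown below, assuming both fields are already regridded to a common latitude/longitude grid; the file and variable names are hypothetical placeholders.

```python
import xarray as xr

# hypothetical daily files already on a common lat/lon grid
total = xr.open_dataset("tropomi_total_o3.nc")["o3_total_column"]   # [DU]
strat = xr.open_dataset("bascoe_strat_o3.nc")["o3_strat_column"]    # [DU]

# interpolate the 3-hourly BASCOE columns to the TROPOMI overpass times
strat_at_overpass = strat.interp(time=total["time"])

# residual: tropospheric column = total - stratospheric
trop = (total - strat_at_overpass).rename("o3_trop_column")
trop.to_netcdf("tropomi_bascoe_trop_o3.nc")
```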
The TROPOMI NRT total ozone product was recently updated to include a new surface albedo retrieval algorithm. An internal reanalysis of the NRT data was used to create a consistent tropospheric ozone data set. A comparison to ozone sondes showed good agreement for most parts of the world.
For the GEMS validation, the TROPOMI total ozone NRT algorithm is applied to selected GEMS data. The tropospheric ozone column might also be retrieved with the TROPOMI-BASCOE algorithm described above.
Both the CCD and the TROPOMI-BASCOE tropospheric ozone data will be presented. Furthermore, first results for total and tropospheric ozone columns from GEMS data using the TROPOMI algorithms might be shown.
In this paper, we present part of the results obtained from airborne measurements above Bucharest, Romania. The measurements are performed in the context of Sentinel-5P calibration/validation, and the measurement strategy relies on recurrent flights over the course of one year.
The idea behind this is to observe the variability of the city’s pollution, as the sources change from one season to another. A previous study of one year of TROPOMI observations showed high variability of NO2 values, including between different days of the week.
Measurements started in the summer of 2021 and have so far covered summer and late autumn days. The region of interest is Bucharest, the capital of Romania, and its metropolitan surroundings, a region with a 112 km perimeter and an 850 km2 area. The city is subject to EU infringement proceedings regarding its poor air quality. The main sources of pollution are the city’s power plants and car traffic.
Apart from the airborne measurements, ground-based measurements were performed, both from a fixed location and from a mobile platform (e.g., a car). Fixed ground-based measurements are performed 12 km from the Bucharest city centre at the MARS site (Măgurele centre for Atmosphere and Radiation Studies), which is equipped with lidars (e.g., for aerosol or wind profiling), cloud radars, a microwave radiometer, a radiation station, and different in-situ instruments (for aerosols and gases), and is part of the ACTRIS research infrastructure.
Airborne measurements were performed using a Britten-Norman BN-2 Islander research aircraft, whose modifications were certified by EASA. The modifications consisted of an air inlet mounted on top of the aircraft and a nadir window for the remote sensing instrument. Georeferencing is done using an IMU at a frequency of 1 Hz. The aircraft can now accommodate several in-situ instruments capable of measuring aerosols (e.g., an aerosol particle sizer and a nephelometer) or trace gases (e.g., formaldehyde, methane, carbon monoxide and dioxide, water vapour, and nitrogen dioxide), as well as remote sensing instrumentation (a custom-made DOAS whiskbroom imager for high-resolution mapping of SO2 and NO2 column concentrations).
For remote sensing measurements, the cruising altitude is 3.5 km and the ground speed is around 200 km/h. The flight duration is about 3 hours, with less than 1 hour needed for the aircraft to reach the region of interest. A flight consists of approximately 10 flight legs, and a sounding is performed above MARS for vertical profiling of the atmosphere. Two hours are needed to sample the entire area, centred on the time of the S5P overpass over the region. When TROPOMI has two daytime overpasses over the study area, the one closest to noon is targeted.
We will present trace gas distributions measured by the airborne imaging limb sounder GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere), the airborne demonstrator for the proposed CAIRT (The Changing-Atmosphere Infra-Red Tomography Explorer) Earth Explorer 11 mission. In this study, we focus on biomass burning pollution trace gases in the upper troposphere and lowermost stratosphere (UTLS). The origin of these polluted air masses is estimated with the help of backward trajectories. The vertically resolved GLORIA cross sections are used to evaluate simulation results from the CAMS (Copernicus Atmosphere Monitoring Service) model.
Simultaneous GLORIA observations of peroxyacetyl nitrate (PAN), ethane (C2H6), formic acid (HCOOH), methanol (CH3OH), and ethylene (C2H4) above the South Atlantic in September and October 2019 were made during the SouthTRAC (Transport and Composition in the Southern Hemisphere Upper Troposphere/Lower Stratosphere) campaign with the German High Altitude and Long range research Aircraft (HALO). GLORIA was part of the HALO payload, mounted in the belly pod of the aircraft for transfer flights between Germany and Argentina and for local flights from Rio Grande, Argentina. On 8 September 2019, during a transfer flight, a filamentary structure with maximum volume mixing ratios (VMRs) of 900 pptv for PAN, 1100 pptv for C2H6, 800 pptv for HCOOH, 4000 pptv for CH3OH, and 200 pptv for C2H4 was observed at altitudes between 7 km and 14 km. On 7 October 2019, during another transfer flight, one large plume, besides smaller enhancements, was measured at altitudes between 8 km and 13 km for all discussed species except C2H4. Maximum VMRs of 1000 pptv for PAN, 1400 pptv for C2H6, 500 pptv for HCOOH, and 4500 pptv for CH3OH were observed. With the help of backward trajectories, we show that the measured pollutants likely originate from South America and central Africa. In the same regions, elevated PAN VMRs are visible in the surface layer of the CAMS model during the weeks before both measurements. In comparison to simulation results of the CAMS reanalysis data interpolated onto the GLORIA measurement geolocations, we show that the model is able to reproduce the overall structure of the measured pollution trace gas distributions. For PAN, the absolute VMRs also agree with the GLORIA measurements. However, C2H6 and HCOOH are generally underestimated by the model, while CH3OH and C2H4, the trace gases with the shortest atmospheric lifetimes among the discussed pollutants, are overestimated by CAMS. We conclude that the transport and emission locations in the CAMS model are appropriate, while the emission strengths and atmospheric loss processes of all discussed trace gases except PAN should possibly be improved in the model. Further, our results emphasize the need for global high-resolution observations of these pollutants.
The TROPOspheric Monitoring Instrument (TROPOMI) onboard the Sentinel-5 Precursor (S-5P) satellite, launched in 2017, measures the total column concentration of the trace gas carbon monoxide (CO) as one of its primary mission objectives. Atmospheric CO originates from incomplete combustion during wildfires or industrial activities and has an average lifetime of days to two months. TROPOMI monitors CO daily on a global scale at a high spatial resolution of 7 km × 7 km, improved to 5.5 km × 7 km in August 2019. In this study, we create a database by setting up automated processing of the TROPOMI CO dataset to identify CO pollution events due to biomass burning and estimate the corresponding emissions using the cross-sectional flux method (CFM). The influence of plume height and of the resolution of the velocity fields on the emissions is also investigated. A new plume detection algorithm based on image processing was developed. It uses fire counts from the Visible Infrared Imaging Radiometer Suite (VIIRS) and the TROPOMI CO dataset as inputs. Emissions are estimated for the identified plumes by the CFM using both a plume height of 100 m and a height based on the simulation of tracer particles by a 3D Lagrangian model. The source locations for the Lagrangian simulation are based on VIIRS fire counts and on the injection height from the Global Fire Assimilation System (GFAS). 3D velocities at model level 137 (ERA5 MARS) are used to simulate the tracer particles. We demonstrate the quality and validity of our approach by building a database of biomass burning events and their emissions for Australia (October 2019) and the US (September 2020). The automatic plume detection algorithm detected a total of 129 and 27 plumes in Australia and the US, respectively. These plumes were visually inspected and >75% were found to be suitable for computing emissions. The emission estimates for the different plume heights for Australia were found to have a mean difference of 20% with a 30% standard deviation. Additionally, emissions based on heights from particle simulations for two randomly selected plumes, using low- and high-resolution velocity fields from ERA5 and the Weather Research and Forecasting (WRF) model respectively, were found to be in good agreement. Based on these findings, the emission estimation algorithm will use the CFM with the plume height computed from the Lagrangian simulation. Thus, an algorithm to create a database of CO pollution plumes and emissions has been developed by automatically integrating satellite and meteorological databases. This study will also act as a starting point for continuously processing TROPOMI CO data in pseudo-real-time to build up a database accessible to the scientific community.
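For reference, the cross-sectional flux method for a single downwind transect can be sketched as below: the emission rate is the along-transect sum of column enhancement times the wind component perpendicular to the transect. All input values are hypothetical, and the real processing chain (plume masking, background subtraction, multiple transects) is omitted.

```python
import numpy as np

def cfm_emission(delta_vcd, wind_perp, pixel_width_m, m_co=0.02801):
    """Cross-sectional flux method for one transect downwind of a fire.
    delta_vcd     : CO column enhancement above background [mol/m2]
    wind_perp     : wind component perpendicular to the transect [m/s]
    pixel_width_m : along-transect pixel width [m]
    Returns the emission rate in kg/s (molar mass of CO = 28.01 g/mol).
    """
    flux = np.sum(delta_vcd * wind_perp * pixel_width_m)  # [mol/s]
    return flux * m_co                                    # [kg/s]

# hypothetical transect of five TROPOMI pixels
dv = np.array([0.002, 0.006, 0.009, 0.005, 0.001])  # mol/m2 enhancement
u = np.full(5, 6.0)                                 # m/s at plume height
print(f"{cfm_emission(dv, u, 5500.0):.1f} kg CO/s")
```

The sensitivity of the result to the wind at the assumed plume height is exactly why the study compares the fixed 100 m height against the Lagrangian-derived one.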
Air pollution is a severe threat to public health and has been proven to be the main cause of many fatal diseases. Air quality in Poland is among the worst in Europe. Thus, there is a strong demand for air quality monitoring in Poland, in order to raise public awareness and to develop policies that will mitigate this huge problem. The Chief Inspectorate for Environmental Protection (GIOS) performs in-situ measurements of pollutants in a few selected, densely populated locations in Poland. In smaller cities, there is no information on the level of air pollution. This study contributes to air quality monitoring in Poland by means of satellite data.
The main objective of the research was to verify the potential of the Sentinel-5 Precursor (Sentinel-5P, S-5P) Tropospheric NO2 Column Number Density product (NO2 TVCD), generated by the European Space Agency (ESA), to support air pollution monitoring in Poland. In this respect, the product was compared to in-situ measurements provided by the GIOS. The secondary objective of the project was to establish a relationship between air pollution (based on Sentinel-5P products) and meteorological data provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis. Because of the differences between satellite and ground-based data, vertical profiles of NO2 provided by the Copernicus Atmosphere Monitoring Service (CAMS) were analysed to establish the relationship between the columnar data provided by S-5P and the ground-based concentrations provided by GIOS.
Furthermore, the study determined the limitations of the Sentinel-5P products for monitoring air pollution in Poland in terms of spatial resolution and temporal sampling, the latter of which can be significantly reduced in winter due to ubiquitous cloud cover. Ultimately, the entire analysis was performed in the Google Earth Engine (GEE) computing cloud, a freely accessible solution that will allow anybody to repeat the analyses performed within this project.
A prediction model simulating ground-level NO2 concentration from Sentinel-5P data, meteorological data and vertical profile data was created. The model predicted NO2 concentrations with R2=0.50 and a mean absolute error (MAE) of 3.58 µg/m3 (the mean ground-based NO2 concentration over Poland is about 9 µg/m3), corresponding to a 39% error. The proposed model estimated NO2 concentrations more accurately than CAMS (R2=0.44; MAE=4.8 µg/m3, a 50% error). NO2 concentration was then estimated over Warsaw as a case study of a highly populated area, where the model predicted air pollution with MAE=5.17 µg/m3, a 33% error. The NO2 TVCD derived from Sentinel-5P, boundary layer height, wind speed, surface net solar radiation and the distribution of NO2 within the profile were the most important variables in the model.
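The abstract does not name the regression algorithm, so the following sketch uses a random forest purely as an illustrative stand-in; the file and feature column names are hypothetical, chosen to mirror the predictors listed above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

# hypothetical table: one row per station/day with S-5P and ERA5 matchups
df = pd.read_csv("matchups.csv").dropna()
features = ["no2_tvcd", "blh", "wind_speed", "ssr", "profile_frac_surf"]
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["no2_ground"], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}, "
      f"MAE={mean_absolute_error(y_te, pred):.2f} ug/m3")

# rank the predictors, as done for the variables listed above
print(sorted(zip(model.feature_importances_, features), reverse=True))
```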
To sum up, a model based on satellite, meteorological and reanalysis data was created which predicts ground-level NO2 concentrations in Poland with 61% accuracy, 11% better than CAMS. It creates new opportunities to monitor air pollution in Poland in a spatially continuous way. However, there are limitations due to clouds, so the model cannot be used every day. Further analyses will be performed to verify the potential of the model to monitor NO2 concentrations in other European countries.
By 1st July 2020, there were in excess of 10 million confirmed cases of COVID-19 worldwide. Of these cases, it was reported that the virus had claimed an estimated 511,037 lives. In an effort to halt the spread of the disease, governments across the globe put into place a range of measures based on ‘social distancing’ and ‘self-isolation’, which resulted in many industries suspending operations and most citizens (i.e., non ‘key-workers’) staying in their homes. As such, anthropogenic activity around the globe decreased rapidly, to such an extent that emissions of air pollutants began to decline dramatically, with this period now being referred to as an ‘anthropause’. In the early stages of the pandemic, remote sensing data from satellites indicated that nitrogen dioxide (NO₂) concentrations had fallen by as much as 30% across China and by as much as 50% across areas of central Europe. Early work using in-situ measurements confirmed these findings, with studies from China, Korea, India, the USA and Europe all reporting decreases in ambient NOx concentrations. On 16th March 2020, the UK government advised the general population to avoid ‘non-essential’ travel and social contact. At this point, the total number of confirmed cases in the UK had surpassed 1500. Subsequently, on 23rd March 2020, the government announced a UK-wide partial ‘lockdown’ to contain the spread of the virus. The Health Protection (Coronavirus, Restrictions) (England) Regulations 2020, the statutory instrument to enforce the lockdown, was enacted shortly after.
In this work, we combine findings from the University of Brighton’s Brighton Atmospheric Observatory (BAO) and ESA's Sentinel-5P satellite to investigate changes in tropospheric nitrogen dioxide concentrations in the South East of the UK during the COVID-19 pandemic.
BAO (formerly the JOAQUIN Advanced Air Quality reSearch Laboratory; JAAQS) was established in Brighton in 2015. It comprises a climate-controlled, clean laboratory instrumented with a suite of state-of-the-art analytical instruments for making detailed, real-time measurements of tropospheric composition. It is equipped with long-path Differential Optical Absorption Spectroscopy (DOAS; Opsis AB) for remote sensing of trace gas parameters (path length ~300 m), including NO₂, O₃, SO₂, formaldehyde (HCHO), nitrous acid (HONO) and benzene (C₆H₆; indicative data only); total and size-resolved particle counters (7 ≤ n ≤ 1000 nm; TSI 3031 and TSI 3783); a black carbon monitor (Thermo MAAP 5012); a PM2.5 monitor (Met One ES-642); and a meteorology station (Campbell Scientific; data from 01/01/2019). BAO is situated in a suburban background environment, roughly 5 km from Brighton city centre. Data were recorded at 5-minute averaging intervals and were screened for service periods and anomalies prior to analysis.
Level-2 (L2) TROPOMI NO₂ products were sourced from the Sentinel-5P Pre-Operations Data Hub for dates between 23rd March and 22nd April of both 2019 and 2020. The pixels covering the South East quadrant of the UK were extracted from each dataset and filtered to remove problematic and cloud-influenced observations, i.e., pixels whose values were negative or associated with a quality assurance flag < 0.75. The filtered data were appropriately averaged, and units converted to molec m^-2. The percentage change in tropospheric column NO2 over the region was determined by expressing the concentration difference between 2020 and 2019 as a fraction of the 2019 value. The values of the cells within the filtered daily raster files covering Brighton and Hove were recorded against the appropriate date, so that these values could be plotted alongside the DOAS measurements.
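A minimal sketch of this pixel screening is shown below, using the variable names of the S5P L2 NO2 product; the orbit filename is a placeholder, and the conversion from mol m^-2 to molec m^-2 uses Avogadro's number.

```python
import numpy as np
import netCDF4

N_A = 6.022e23  # Avogadro's number [molec/mol]

with netCDF4.Dataset("s5p_no2_orbit.nc") as ds:  # placeholder filename
    grp = ds["PRODUCT"]
    vcd = grp["nitrogendioxide_tropospheric_column"][0].filled(np.nan)  # [mol/m2]
    qa = grp["qa_value"][0].filled(0.0)

# keep only high-quality, cloud-screened, non-negative pixels
good = (qa >= 0.75) & (vcd > 0)
vcd_molec = np.where(good, vcd * N_A, np.nan)  # [molec/m2]

print(f"regional mean: {np.nanmean(vcd_molec):.2e} molec/m2")
```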
The attached figure shows regional daily average NO₂ concentrations as recorded by TROPOMI over (a) the period 25/03/2019–22/04/2019 (i.e., the pre-pandemic baseline) and (b) 23/03/2020–20/04/2020 (i.e., post-implementation of lockdown restrictions). The percentage change between the two periods is also shown (c), as are the locally integrated values over the city of Brighton and Hove, plotted alongside long-path DOAS measurements made on the ground (over a total path length of 300 m) for the same time period (d). The data shown in Figure 1 confirm findings from the analysis of in-situ monitor observations made by the Sussex-Air Network and the DEFRA Automatic Urban and Rural Network (AURN), extending the reach of the data capture to the entire South East of the UK at a 7 × 7 km resolution scale. In line with the in-situ monitors, TROPOMI measured a decrease in NO2 concentrations across the entire region during the lockdown, with the regional average value falling by 33%, from 4.9 × 10^16 to 3.3 × 10^16 molec m^-2. Figure 1(c) shows that the largest changes in NO2 were observed in the centre of the region, in the areas surrounding London, and at certain coastal locations.
As seen in Figure 1(d), when integrated across the city scale (Brighton and Hove in this instance), TROPOMI is relatively successful in capturing local daily variations when compared to remote sensing conducted on the ground, in this case by long-path DOAS. Here, TROPOMI measured NO2 values across the city during the 2020 lockdown period to be 59% of those measured over roughly the same time period the previous year (with mean values falling from 4.4 × 10^16 to 2.9 × 10^16 molec m^-2), comparing favourably with DOAS, which recorded NO₂ values that were ~64% of those measured during the previous two years over roughly the same time period.
The methodology is also extended to London, Birmingham and Manchester, which are the 1st, 2nd and 6th largest cities within the UK.
Atmospheric reactive nitrogen species such as ammonia (NH3) and nitrogen oxides (NOx) are known hazards to both environmental and human health and can be directly linked to issues such as soil and water acidification, eutrophication, and various respiratory illnesses. While the hazardous effects are well known, the atmospheric budget and fate of both species are still relatively uncertain, especially for ammonia. The uncertainty in the budget follows from the short lifetime of both species and a lack of in-situ measurement networks directly measuring fluxes of the species, although at least in the case of NOx measurements are becoming more common. More recently (in the last decade), satellite observations have also been used to evaluate the atmospheric budgets of both species. Previous studies have shown the value of satellite observations for determining emissions of large point sources, biomass burning, and regional totals using various simple to more complex inversion systems.
In this study, we show spatially and temporally varying ammonia (CrIS) and nitrogen dioxide (TROPOMI) emission estimates derived using an approach based solely on the satellite observations and local meteorology. For both reactive nitrogen species we demonstrate the ability to constrain annual emissions at county to provincial levels by comparing the constrained emissions to regional inventories such as CAMS-GLOB-REG and global inventories such as CAMS-GLOB-ANT. We show that for the well-known intensive agricultural and industrial regions the spatial patterns in the satellite emission estimates are consistent with the emission inventories, while many other source regions either do not match or are missing from the current inventories. Similarly, temporal emission patterns match for some of the more intensive agricultural regions, especially those with a history of in-situ measurements, while showing large differences for other regions. Furthermore, we will show the value of such emission estimates for regional air quality modelling by incorporating the satellite-derived emission fields in the LOTOS-EUROS chemistry transport model and comparing the resulting simulated concentrations with in-situ observations.
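The abstract does not specify the exact scheme; one published approach of this "observations plus meteorology" type is the flux-divergence method of Beirle et al. (2019), sketched below under a steady-state assumption, where emissions equal the divergence of the horizontal column flux plus a chemical-loss sink. The grid spacing, lifetime and synthetic fields are assumptions for illustration.

```python
import numpy as np

def flux_divergence_emissions(vcd, u, v, dx, dy, tau_s):
    """Steady-state emission estimate E = div(VCD * wind) + VCD / tau.
    vcd    : 2-D column field [mol/m2] on a regular grid
    u, v   : 2-D wind components at an effective mixing height [m/s]
    dx, dy : grid spacing [m]; tau_s : effective lifetime [s]
    Returns E in mol m-2 s-1 (negative values indicate sinks).
    """
    fx, fy = vcd * u, vcd * v  # horizontal column fluxes
    div = np.gradient(fx, dx, axis=1) + np.gradient(fy, dy, axis=0)
    return div + vcd / tau_s

# synthetic hotspot advected by a uniform westerly wind
y, x = np.mgrid[0:40, 0:40]
vcd = 1e-4 * np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 40.0)
u = np.full_like(vcd, 5.0)
v = np.zeros_like(vcd)
E = flux_divergence_emissions(vcd, u, v, 5500.0, 3500.0, 4 * 3600.0)
print(f"peak source: {E.max():.2e} mol m-2 s-1")
```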
A widE-Ranging investigation of the first COVID-19 LOCkdown effects on the atmospheric composition in five Italian Urban Sites (called AER–LOCUS study) was carried out in 2020 with the aim of integrating, for the first time in Italy, observations from different platforms. Particle and gas concentrations from in situ sampling, column aerosol and gas properties from photometers and spectrometers belonging to different observation networks, aerosol vertical profiles from ceilometers, as well as TROPOMI NO2 determinations, were analysed at five sites distributed along the whole Italian territory: Aosta, Milan, Bologna, Rome, and Taranto.
The homogeneity and comparability of products across the ground-based platforms are fundamental for performing a study at a large spatial scale. In situ measurements were collected by the Regional Environmental Protection Agencies (ARPA) using standard equipment across the Italian territory. Vertically resolved aerosol data were derived from the Italian network ALICEnet (Automated LIdar-CEilometer network, http://www.alice-net.eu/), whereas ground-based remote sensing of aerosol columnar properties by photometers comes from two different networks: AERONET (https://aeronet.gsfc.nasa.gov/, Holben et al., 1998) and the European SKYNET (ESR, www.euroskyrad.net, Campanelli et al., 2012). Intercomparison studies between products of the two networks are regularly performed through dedicated campaigns. The harmonization and quality control of the datasets from the different networks is also the aim of the joint research project MAPP “Metrology for aerosol optical properties” (funded by the European Metrology Programme for Innovation and Research, EMPIR), assuring that the photometer dataset considered in AER-LOCUS is homogeneous and comparable.
This synergistic network of measurements, together with the examination of differences in the meteorological conditions occurring in 2020, allowed us to identify medium- and long-range transport cases and to isolate the variations of the main atmospheric pollutants due to the lockdown restrictions. This key point represents an additional, novel aspect of this study. In fact, the measured concentration changes are not always due to variations in local emissions, as non-local particles and gases can be carried from distant places, and the atmospheric structure and circulation can contribute to reducing or enhancing pollutant accumulation.
Four different types of medium-to-long-range transport over Italy were identified during the lockdown period, affecting aerosol optical depth (AOD), PM10, PM2.5, black carbon (BC) and NO2 concentrations: fire plumes from Eastern Europe and the Balkan area, desert dust from the Caspian area and from the Sahara, and pollution from the Po Valley. Once these “non-local” events are identified and excluded, the variation of gas and particle concentrations during the investigated period is calculated with respect to the reference period (2015-2019). A general decrease of PM10, PM2.5, BC, NO2, and benzene concentrations (about −50%) is found. A positive variation of PM2.5 is conversely found during March in the southern sites, due to some stagnation events, together with a strong increase of benzene (up to +104%) in the industrial area of Taranto. Ozone is found to increase by an average of about 30% in all urban sites. The removal of the long-range transport contributions affects the variations with respect to the reference period, reducing the concentrations by up to 22% for PM10 and 29% for PM2.5 in the northern sites, and by 18% for PM10 and 16% for PM2.5 in the southern sites. For NO2, the reduction due to the removal is up to 14% in Milan and 6% in the southern sites, while for the AOD it is up to 70% in Aosta and 50% in Rome.
This work has received funding from the European Metrology Programme for Innovation and Research (EMPIR) co-financed by the Participating States and from the European Union’s Horizon 2020 research and innovation programme within the joint research project 19ENV04 MAPP “Metrology for aerosol optical properties”.
Relationships between Covid-19 features and air pollution based on supervised machine learning models
Authors: Lixia Chu 1, Alessandro Crivellari 2, Christoph Lofi 1
Affiliations: 1 Faculty of Electrical Engineering, Mathematics, and Computer Science, Delft University of Technology (TU Delft), the Netherlands
2 Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
Emails: L.Chu-1@tudelft.nl; crivellari@sustech.edu.cn; C.Lofi@tudelft.nl
Long-term exposure to ambient air pollution is one of the main public health concerns worldwide. Exposure to air pollution is highly related to a range of diseases, including respiratory and cardiovascular diseases such as lung cancer, asthma, diabetes, irregular heartbeat, stroke and obesity [1-3]. The outbreak of the pathogenic agent of coronavirus disease 19 (Covid-19) has led to a large number of deaths worldwide, and previous studies have pointed out how long-term exposure to air pollution may have contributed to its high death rate [4]. Moreover, the hospitalization rate and the number of infected people are central indicators for lockdown policy-making, indicating whether the local medical system is able to handle the growing infected population through its available intensive care facilities. In fact, predicting hospitalization is vital for authorities and policymakers. We hereby hypothesize that high air pollutant concentrations lead to a rise in the hospitalization rate during Covid-19 outbreaks. We attempt to predict such hospitalization numbers for past data by means of a task-specific, optimized machine learning model; in future work within an ongoing project, we will also integrate social, economic, cultural, and other environmental features. While such a prediction model cannot directly be used for predicting the future development of the pandemic, analysing it still gives valuable insights into the influence various environmental features had on it in the past.
Air pollution is a mixture of a large number of chemical compounds such as CO2, CO, NOx, SO2, O3, heavy metals, and respirable particulate matter (PM2.5 and PM10); the main sources of these pollutants are vehicle traffic, heating systems, and industrial plants [5]. Previous studies have focused on the relationships between pandemic variables and air pollutant information. Among all the air pollutants, NO2 and respirable particulate matter are the most strongly related to the pandemic variables [6-8]. In our research, we extract air pollutant information (CO, NO2, CH4, SO2) from the Sentinel-5P TROPOMI sensor and integrate it with open-access data on Covid-19 features (mortality, infection rate, intensive care rate, etc.). The air pollutant data are processed from the Sentinel-5P data catalog provided in Google Earth Engine. We therefore aim to ascertain the relationships between hospitalization and air pollutant concentrations in relation to the incidence of Covid-19. In particular, our ultimate research purpose is to develop a machine learning model to uncover the relationships between a mixture of features derived from air pollutants and Covid-19 related information, at the municipality scale in Germany and the Netherlands. These relationships provide important clues for understanding how air pollution may affect the hospitalization rate and other Covid-19 features, through evidence of potentially lower hospitalization or mortality with better air quality. The output will deliver key information regarding public health effects and emission control in Germany and the Netherlands.
Specifically, on the temporal scale, we aggregated daily Covid-19 data and four air pollutant measures into weekly measures. On the spatial scale, the air pollutants were aggregated over each municipality in Germany and the Netherlands to match the Covid-19 features. A selection of machine learning models was trained and evaluated on historical data (March 2020 to October 2021), using features comprising weekly hospitalizations, death rate and infection rate, and tropospheric NO2, CO, SO2 and CH4 concentrations. In addition, a post-processing analysis using machine-learning explainability methodologies was carried out to mine potential relationships between hospitalization attributes and specific air pollution concentration features. By processing municipalities as separate spatial entities, the results are intended to highlight hospitalization disparities and the diversity of pollutant effects among different geographic areas.
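A minimal sketch of this aggregation-and-training pipeline is given below. The file and column names are hypothetical, gradient boosting stands in for the unspecified model family, and permutation importance stands in for the explainability step (the actual study may use other methods, e.g., SHAP).

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# hypothetical daily table: municipality, date, pollutants, Covid counts
daily = pd.read_csv("municipality_daily.csv", parse_dates=["date"])
cols = ["no2", "co", "so2", "ch4", "hospitalizations", "deaths", "infections"]

# temporal aggregation: daily -> weekly means per municipality
weekly = (daily.groupby(["municipality", pd.Grouper(key="date", freq="W")])[cols]
               .mean()
               .dropna()
               .reset_index())

X = weekly[["no2", "co", "so2", "ch4", "deaths", "infections"]]
y = weekly["hospitalizations"]
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# explainability step: which features drive the predictions?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```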
By highlighting the relationships between air pollutant concentrations, the incidence of Covid-19 and the hospitalization rate, and by illustrating the hospitalization disparities among municipalities, our results provide key information for policymaking on urban emission control and public health at the municipality level. By integrating other Covid-related features, our models could support policymakers in effective lockdown decisions and health system management.
Keywords: Air pollutant, Covid-19, supervised machine learning models, Google Earth Engine.
References
1. Bernstein, J.A., et al., Health effects of air pollution. Journal of allergy and clinical immunology, 2004. 114(5): p. 1116-1123.
2. Brunekreef, B. and S.T. Holgate, Air pollution and health. The Lancet, 2002. 360(9341): p. 1233-1242.
3. Strak, M., et al., Long-term exposure to particulate matter, NO2 and the oxidative potential of particulates and diabetes prevalence in a large national health survey. Environment international, 2017. 108: p. 228-236.
4. Ogen, Y., Assessing nitrogen dioxide (NO2) levels as a contributing factor to coronavirus (COVID-19) fatality. Science of The Total Environment, 2020. 726: p. 138605.
5. Vineis, P., et al., Air pollution and risk of lung cancer in a prospective study in Europe. International Journal of Cancer, 2006. 119(1): p. 169-174.
6. Gautam, S., COVID-19: air pollution remains low as people stay at home. Air Quality, Atmosphere & Health, 2020. 13: p. 853-857.
7. Vîrghileanu, M., et al., Nitrogen Dioxide (NO2) Pollution monitoring with Sentinel-5P satellite imagery over Europe during the coronavirus pandemic outbreak. Remote Sensing, 2020. 12(21): p. 3575.
8. Omrani, H., et al., Spatio-temporal data on the air pollutant nitrogen dioxide derived from Sentinel satellite for France. Data in Brief, 2020. 28: p. 105089.
The coronavirus COVID-19 pandemic reached Poland in spring 2020 and has affected many aspects of human well-being, including air quality. The present study aims at quantifying this effect by means of in-situ air quality and aerosol optical depth measurements, as well as novel data sets derived from the TROPOMI sensor mounted onboard the Sentinel-5P satellite. The analyses were performed for both urban and non-built-up areas across the whole of Poland, accompanied by Warsaw (an urban site) and the Strzyzow observatory (a background site) as regional case studies. The main focus of the research was atmospheric concentrations of NO2 and PM2.5 and the aerosol optical depth (AOD) during the imposed governmental restrictions and lockdown in spring 2020 (March until June).
The results obtained revealed that mean PM2.5 concentrations in spring 2020 for urban areas, non-built-up areas and the Warsaw case study were 20%, 23% and 30% lower than the 10-year average, respectively, and 8%, 21% and 5% lower than in 2019. Analogous mean estimates for NO2 concentrations were lower than the 10-year average by 20%, 17% and 30%, and lower than in 2019 by 10%, 9% and 12%. The corresponding estimates derived from the TROPOMI tropospheric NO2 column number density (TVCD) revealed 9%, 4% and 9% reductions in 2020 compared to 2019 for the urban areas, non-built-up areas and Warsaw, respectively. Regarding mean AOD at 550 nm, retrieved from the MERRA-2 reanalysis, it was found that for the whole of Poland during spring 2020 the reduction in AOD compared to the 10-year average was 15%, and 14% relative to 2019. Analogous mean AOD reductions for the Strzyzow AERONET observatory were 33% in comparison to the 10-year average and 39% in comparison to 2019. In addition, decreases in the aerosol scattering coefficient and equivalent black carbon (eBC) were recorded at Strzyzow in 2020.
Because atmospheric pollution concentrations and emission rates (especially from heating systems) are a function of atmospheric parameters, the weather conditions in 2019 and 2020 were verified relative to long-term means. The mean annual temperature anomalies were +2.0°C and +1.8°C for 2019 and 2020, respectively. The winter of 2020 was extremely warm (4.1°C above the long-term average) in contrast to the winter of 2019 (2.0°C above average). Air temperature anomalies during spring were significantly lower (+1.1°C in 2019 and 0.0°C in 2020). Comparison of air temperatures across all spring months indicates higher temperatures in 2020 than in 2019. Only in May were the anomalies negative (−1.1°C in 2019 and −2.3°C in 2020); however, this did not influence emission rates because the heating season in Poland ends in March/April. Advection of air pollution was assessed on the basis of the wind direction in Warsaw at 0.5 km, derived from 96-hour back-trajectories from the HYSPLIT model. Comparison of the data from 2019 and 2020 with long-term means shows some significant anomalies. For example, in April and at the beginning of May 2020, transport from NW and W dominated, and advection from these directions usually brings clean Atlantic air masses to Central Europe. In contrast, during the same period in 2019, more frequent transport from S, SE and E was observed; however, between 11 and 20 April and between 1 and 10 May 2019 the circulation changed to N. The HYSPLIT model revealed a variability of advection typical for Central Europe and significant differences between spring 2019 and 2020.
To conclude, it has to be emphasised that the COVID-19 lockdown improved air quality in Poland, but the magnitude of this effect is hard to dissociate from the unusual weather conditions related to frequent advection of clean, warm Atlantic air masses. This particularly affected aerosol properties such as PM2.5 concentrations and AOD, whereas NO2 concentrations were affected by reduced transport and by lower emissions from heating systems caused by the positive temperature anomalies. Ultimately, this study has demonstrated that the novel data source originating from the Sentinel-5P satellite provides a unique perspective on NO2 concentrations, which corresponds well with in-situ air quality measurements.
The recent World Health Organization global air quality guidelines provide new recommendations to reduce the levels of many air pollutants. For example, nitrogen dioxide (NO2) now has an annual recommended level four times lower than in the previous guideline from 2006. These recommendations apply to ground-level measurements, which are available from air quality station networks. Satellites can complement surface measurements thanks to their global coverage, but their observations are not directly comparable to ground-level concentrations.
In this work we estimate ground-level NO2 concentrations from satellite-based TROPOMI (on-board the Sentinel-5 Precursor satellite) NO2 vertical column observations in Finland using the methods developed by Lamsal et al. (2008) and Cooper et al. (2020). These methods are based on calculating a surface-to-column ratio using the GEOS-Chem chemical transport model. The performance of the methods is evaluated by comparing their estimates to co-located in situ measurements.
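The core of both methods is the surface-to-column scaling of Lamsal et al. (2008), sketched below with illustrative numbers; in practice the model ratio varies in space and time, and the Cooper et al. (2020) variant adds the boundary-layer mixing correction discussed later.

```python
def surface_no2(vcd_sat, surf_model, vcd_model):
    """Estimate ground-level NO2 from a satellite column by scaling
    with the model surface-to-column ratio (Lamsal et al., 2008):
        S_est = Omega_sat * (S_model / Omega_model)
    vcd_sat, vcd_model : tropospheric columns [molec/cm2]
    surf_model         : model ground-level concentration [ug/m3]
    """
    return vcd_sat * surf_model / vcd_model

# hypothetical co-located values for one station
print(f"{surface_no2(3.0e15, 8.0, 2.5e15):.1f} ug/m3")  # -> 9.6 ug/m3
```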
We find that the method of Cooper et al. (2020) performs better overall than the method of Lamsal et al. (2008). When considering 2018–2019 averages for all Finnish air quality stations, the Cooper et al. (2020) method has a correlation of 0.69 and a slope of 0.47 with respect to in situ measurements, compared to 0.73 and 0.12, respectively, for the method of Lamsal et al. (2008). There is no clear dependence of the correlation on the type of air quality station (urban/suburban/rural), but both methods underestimate NO2 concentrations at highly urban stations in Helsinki. This is expected, as the concentration gradients in such narrow urban canyons have horizontal dimensions much smaller than the spatial resolution of TROPOMI.
The Cooper et al. (2020) method includes a linearly scaled correction factor for the vertical mixing of NO2 within the boundary layer. This scaling is applied based on the value of a satellite observation, and its upper and lower bounds are determined with a sensitivity test maximising agreement with in situ measurements. The bounding values we obtained are significantly lower than those Cooper et al. (2020) obtained for the United States, and correspond to the lower NO2 concentrations measured in Finland.
Our results are in good agreement with surface-level in situ measurements, and show that the method by Cooper et al. (2020) is also applicable in Finland, characterised by its high latitudinal location and low average concentrations. This can be of interest to the Finnish Ministry of the Environment, which has already utilised satellite observations by incorporating them in its latest EU-level air quality assessment.
IDEAS-QA4EO DOAS-BO: TOWARDS A NEW FRM4DOAS SITE IN THE PO VALLEY
Paolo Pettinari1,3, Elisa Castelli1, Enzo Papandrea1, Massimo Valeri2, Paolo Cristofanelli1, Maurizio Busetto1, Cosimo Fratticioli1
1 Istituto di Scienze dell’Atmosfera e del Clima (ISAC-CNR), via Piero Gobetti, 101, 40129, Bologna
2 SERCO S.p.A., via Sciadonna 24, 00044, Frascati (Rome)
3 Dipartimento di Fisica e Astronomia, Università di Bologna
E-mail contacts: p.pettinari@isac.cnr.it, e.castelli@isac.cnr.it, e.papandrea@isac.cnr.it, massimo.valeri@serco.com, p.cristofanelli@isac.cnr.it, m.busetto@isac.cnr.it, c.fratticioli@isac.cnr.it
The Po Valley (Italy) is one of the most polluted regions in Europe. High NO2 concentrations are often found there due to industrial and urban activities and to the region's particular geographical position.
Since information on this pollutant can be retrieved from ground-based visible spectra by exploiting the Differential Optical Absorption Spectroscopy (DOAS) technique, Multi-AXis (MAX-)DOAS instruments are deployed in the most polluted areas of Europe.
However, an instrument compliant with the Fiducial Reference Measurements for Ground-Based DOAS (FRM4DOAS) standards is not yet present in the Po Valley.
The purpose of the IDEAS-QA4EO “DOAS-BO: Towards a new FRM4DOAS site in the Po Valley” work packages (WPs) was to fill this gap by exploiting the ground-based instrument TROPOGAS. TROPOGAS is a custom-made spectrometer installed on the roof of the ISAC-CNR institute in Bologna, able to measure the diffuse solar radiation in the UV and visible spectral ranges at different elevation angles.
The objectives were to demonstrate the importance of MAX-DOAS measurements in this polluted region, to reinforce the Italian know-how on the MAX-DOAS technique, and to move towards the provision of standardized data for validation networks.
First of all, since TROPOGAS is a custom-made spectrometer, we assessed its performance with respect to the FRM4DOAS requirements.
Initially, in the framework of the IDEAS-QA4EO WPs, TROPOGAS was supposed to be involved in two measurement campaigns: one at ISAC-CNR in Bologna for evaluating the synergies between DOAS ground-based, satellite and in-situ NO2 data and another one at the BAQUNIN supersite located at “La Sapienza” University in Rome for inter-calibrating TROPOGAS with other ground-based instruments, such as Pandora.
We performed the first measurement campaign at ISAC-CNR in Bologna during May 2021, focusing on the evaluation of the synergies between TROPOGAS, satellite and in-situ NO2 data.
In particular, NO2 vertical column densities (VCDs) retrieved from TROPOGAS zenith-sky measurements were compared with satellite data, while NO2 concentrations in the lower part of the atmosphere, derived from TROPOGAS MAX-DOAS measurements, were compared with in-situ data.
In 2021, in the frame of an Italian nationally funded program, a new ground-based spectrometer (named SKYSPEC-2D), compliant with the FRM4DOAS requirements, was purchased by ISAC-CNR. Owing to its higher performance standards, we decided to exploit it, in addition to TROPOGAS, in the frame of the IDEAS-QA4EO WPs.
An additional campaign was then organized to compare the performances of the two ISAC-CNR MAX-DOAS instruments. For this purpose, SKYSPEC-2D was placed next to TROPOGAS on the ISAC-CNR roof in Bologna, acquiring spectra for the whole of August 2021. The comparison involved the NO2 VCDs retrieved from zenith-sky spectra and the NO2 slant column densities (SCDs) retrieved from SKYSPEC-2D and TROPOGAS MAX-DOAS measurements.
Once we understood the differences between SKYSPEC-2D and TROPOGAS measurements, we used SKYSPEC-2D to perform the measurement campaign at the BAQUNIN supersite from 7 to 21 September 2021. We used these data to investigate the agreement between the NO2 VCDs retrieved from SKYSPEC-2D and Pandora#117 measurements.
The outcome of these campaigns will be presented here.
Today, the two MAX-DOAS instruments are measuring UV and visible diffuse solar spectra in the Po Valley: SKYSPEC-2D at the San Pietro Capofiume measurement site, located in the middle of the Po Valley, and TROPOGAS on the ISAC-CNR roof in Bologna.
The aim of this study is to investigate to what extent spatiotemporal fluctuations in the tropospheric nitrogen dioxide (NO2) column density can map variations in economic output. To do so, we analyzed satellite-based tropospheric NO2 column measurements obtained from the ERS-2, ENVISAT, MetOp-A and MetOp-B satellite missions, covering the extended period from 1996 to 2021, for three areas of interest in Italy, Japan, and the US. A harmonic analysis is first carried out in order to exclude meteorological influences such as the annual and semi-annual cycles. Afterwards, the residuals of the tropospheric NO2 time series are further investigated with a wavelet analysis. The results are spectrograms depicting the NO2 variability for the study areas in the Po Valley in Italy, Tokyo in Japan, and Los Angeles in the US between 1996 and 2021. We further use the gross domestic product (GDP) as a robust indicator of economic performance and thus, to a first approximation, of anthropogenically induced NO2 pollution.
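As an illustration of the first processing step, the sketch below removes the annual and semi-annual cycles from a monthly column time series by a least-squares harmonic fit; the residual series is what would subsequently enter the wavelet analysis. The simple harmonic model is an assumption for illustration, and the actual analysis may include further terms.

```python
import numpy as np

def remove_harmonics(t_years, y):
    """Fit mean + annual + semi-annual harmonics by least squares and
    return the residuals that enter the wavelet analysis."""
    w = 2.0 * np.pi * t_years
    X = np.column_stack([np.ones_like(t_years),
                         np.cos(w), np.sin(w),            # annual cycle
                         np.cos(2 * w), np.sin(2 * w)])   # semi-annual cycle
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

# Monthly NO2 columns over 25 years (synthetic example):
t = np.arange(0.0, 25.0, 1.0 / 12.0)
y = 3.0 + np.cos(2.0 * np.pi * t) + 0.1 * np.random.randn(t.size)
residuals = remove_harmonics(t, y)
```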
In the case of Italy, the temporal development of the GDP growth rate fits strikingly well with a significant reduction in NO2 variability between 2007 and 2014. This period covers the global financial crisis of 2008 as well as a second crisis probably caused by a decrease in foreign investments. Economic growth decreased by 7.91% between 2007 and 2009 and by 5.23% between 2011 and 2014 in comparison to the same quarter of the previous year. We find that during the crises (2007 to mid-2013) the NO2 variability is significantly lower, by about 84.95%.
Another very interesting study area is the metropolitan region of Tokyo, where the wavelet spectrogram shows a massive reduction in NO2 variability starting in 2008. In comparison, the GDP shows an economic decrease in 2008 due to the global financial crisis, but cannot explain the further sharp decline in NO2 variability. There is reason to believe that Tokyo has managed to decouple its total energy consumption from economic growth thanks to strict air quality policies. A similar picture emerges for Los Angeles. These two study areas are still under investigation. Nevertheless, the NO2 variability derived from the wavelet analysis appears to be a sensitive indicator of fluctuations in NO2, and these cases would be great examples to discuss with the community at the ESA Living Planet Symposium 2022.
With a nearly continuous effusive eruption since 1983, the Kilauea volcano (Hawaii, USA) is one of the most active volcanoes in the world. At the beginning of May 2018, a sequence of eruptions on the Lower East Rift Zone (LERZ) caused an enhanced outbreak of volcanic gases and aerosols, releasing them into the troposphere. Since these gases and particles affect climate, environment, traffic, and health on regional to global scales, continuous monitoring of the emission rates is essential.
As satellites provide the opportunity to observe and quantify the emissions remotely from space, their contribution to the monitoring of volcanoes is significant. The TROPOspheric Monitoring Instrument (TROPOMI) onboard the Sentinel-5 Precursor satellite was successfully launched at the end of 2017 and provides measurements with an unprecedented level of detail at a resolution of 3.5 x 7 km2. This also allows for an accurate retrieval of trace gas species such as volcanic SO2.
Here, it will be shown that the location and strength of SO2 emissions from Kilauea can be determined from the divergence of the temporal mean SO2 flux. This approach, which is based on the continuity equation, has been demonstrated to work for NOx emissions of individual power plants (Beirle et al., Sci. Adv., 2019).
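A minimal sketch of the approach is given below: the emission density follows from the horizontal divergence of the time-mean column flux plus a first-order sink term, in the spirit of Beirle et al. (2019); the grid spacing, wind fields and the assumed lifetime are illustrative placeholders.

```python
import numpy as np

def emission_from_divergence(vcd, u, v, dx, dy, tau=4.0 * 3600.0):
    """Estimate the emission density [molec m-2 s-1] from time-mean
    column densities vcd [molec/m2] and winds u, v [m/s] on a regular
    grid with spacing dx, dy [m]: divergence of the flux plus a
    first-order chemical sink with an assumed lifetime tau [s]."""
    fx, fy = u * vcd, v * vcd
    div = np.gradient(fx, dx, axis=1) + np.gradient(fy, dy, axis=0)
    return div + vcd / tau
```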
The present state of our work indicates that emission maps of SO2 can be derived by combining satellite measurements and wind fields at high spatial resolution. As the divergence is highly sensitive to point sources such as the erupting fissures of the 2018 Kilauea eruption, these can be localized very precisely. The obtained emission rates are slightly lower than those reported from ground-based measurements in other studies, such as Kern et al. (Bull. Volcanol., 2020). Suboptimal conditions such as high cloud fractions probably affect the derived emission rates and have to be analyzed further. For this reason, aerosol and cloud information from the VIIRS instrument onboard Suomi NPP and MODIS onboard the Aqua satellite are currently being evaluated.
Tropospheric ozone (O3) is a secondary pollutant and a greenhouse gas with a positive radiative forcing effect, i.e. it increases global warming. High near-surface ozone levels negatively impact plant growth and human health, which has led to numerous studies on O3 impacts, mainly focused on mid- and high-latitude regions. However, high O3 concentrations are also simulated in the tropics, mainly due to high concentrations of the O3 precursors carbon monoxide (CO) and volatile organic compounds (VOCs), associated with nitrogen oxides (NOx). These precursors react under high irradiance, triggering the chemical reactions that form O3. In Africa, the intense biomass burning (natural and anthropogenic fires) during the dry season plays a crucial role in ozone precursor production. However, ozone observational studies in tropical Africa are often limited in time, and it has not been possible to study the intra- and interannual variability of O3. To fill this major research gap, continuous monitoring of near-surface ozone has been carried out since November 2019 in the Congo Basin (at the Yangambi research centre, Democratic Republic of the Congo), in the heart of the second-largest tropical forest. Using the novel dataset provided by this unique observational site in tropical Africa, we assess the ability of current remote sensing products to capture the magnitude and temporal dynamics of in situ O3 values, especially the O3 variation between dry and wet seasons. We compare near-surface atmospheric ozone measurements collected in Yangambi with tropospheric O3 recorded by the Tropospheric Monitoring Instrument (TROPOMI) onboard the Copernicus Sentinel-5 Precursor satellite. In addition, we extend the study to validate the ozone product of two global reanalyses for tropical Africa: ECMWF Reanalysis v5 (ERA5) and the Modern-Era Retrospective Analysis for Research and Applications version 2 (MERRA-2). The results show that both reanalyses (ERA5 and MERRA-2) and TROPOMI overestimate the magnitude of tropospheric O3 across the region. ERA5 is the only product able to capture the observed variation between dry and wet seasons, showing higher O3 concentrations during dry-season months, despite its inability to reproduce the daily cycle of O3. These results clearly demonstrate the need for more continuous and long-term in situ observations in tropical Africa (currently, only one site is recording O3 levels in the Congo Basin) in order to train global remote sensing products, towards a better prediction system and a more accurate understanding of O3 patterns and impacts in tropical Africa.
Halogen radicals can drastically alter atmospheric chemistry. In the polar regions, this is made evident by the ozone depletion in the stratosphere (ozone hole) but also by localized destruction of boundary-layer ozone during polar spring. These recurrent episodes of catalytic ozone depletion are caused by enhanced concentrations of reactive bromine compounds. The proposed mechanism by which these compounds are released into the troposphere is called the bromine explosion: reactive bromine is formed autocatalytically from the condensed phase.
The spatial resolution of S-5P/TROPOMI of up to 3.5 x 5.5 km² allows an improved localization and a more precise characterization of these events compared to previous satellite missions. Together with the better-than-daily coverage over the polar regions, this allows for investigations of the spatiotemporal variability of enhanced BrO levels and their relation to different possible bromine sources and release mechanisms.
We present tropospheric BrO column densities retrieved from TROPOMI measurements using Differential Optical Absorption Spectroscopy (DOAS). The advantage of our retrieval is its independence from any external input data. We utilized a modified k-means clustering and methods from statistical data analysis to separate tropospheric and stratospheric partial columns, thereby relying only on NO2 and O3 columns measured by the same instrument. In a second step, the BrO slant column densities (SCDs) are converted into vertical column densities (VCDs) using an air mass factor (AMF). These AMFs were derived from a look-up table (LUT) generated with the McArtim radiative transfer model. From this LUT, the AMF is calculated for each pixel using measured O4 SCDs and reflectance data. In a final step, satellite pixels are differentiated by their sensitivity to the lower troposphere using the determined AMF. This allows the exclusion of measurements deemed not sensitive to the troposphere and gives high confidence in the remaining retrieved values.
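The conversion step amounts to VCD = SCD / AMF, with the AMF interpolated per pixel from the LUT. The sketch below reproduces this mechanic with a deliberately simplified two-dimensional table and made-up values; the actual McArtim LUT has more dimensions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy 2-D look-up table of air mass factors as a function of the O4 slant
# column (a light-path proxy) and the scene reflectance (values invented):
o4_axis = np.linspace(0.5e43, 2.0e43, 16)        # molec^2 cm^-5
refl_axis = np.linspace(0.02, 0.9, 12)
amf_lut = 1.0 + 2.0 * np.outer(o4_axis / o4_axis.max(), refl_axis)
amf = RegularGridInterpolator((o4_axis, refl_axis), amf_lut)

def scd_to_vcd(scd, o4_scd, reflectance):
    """Convert a BrO slant column density into a vertical column density
    using a per-pixel air mass factor interpolated from the LUT."""
    return scd / amf((o4_scd, reflectance))

print(scd_to_vcd(8.0e13, 1.2e43, 0.3))
```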
Tropospheric BrO enhancements are examined through case studies, with particular emphasis on their relation to meteorological parameters and possible release mechanisms. In addition, the spatiotemporal extent of the events is studied and compared to simulations from a WRF-Chem model.
Glyoxal (CHOCHO) in the atmosphere originates from (1) the oxidation of natural and anthropogenic non-methane volatile organic compounds (NMVOCs) and (2) direct emissions during fossil fuel and biofuel combustion as well as biomass burning. Because of its short lifetime, glyoxal tropospheric column measurements from space provide useful information on NMVOC emissions at the global scale.
The BIRA-IASB scientific glyoxal algorithm was recently updated and applied to TROPOMI as well as to the heritage instruments OMI and GOME-2A/B with a high level of consistency. The Geostationary Environment Monitoring Spectrometer (GEMS), launched on board the GEO-KOMPSAT-2B satellite in February 2020, covers the South-East Asian region and is the first operating instrument of a constellation of geostationary instruments dedicated to air quality. These instruments will offer an unprecedented hourly revisit time in their respective spatial domains. In this work, we present the first glyoxal tropospheric columns retrieved from GEMS measurements with the BIRA-IASB algorithm. These columns are evaluated by means of comparisons with the GEMS operational glyoxal product and with the TROPOMI columns. In addition, comparisons with ground-based MAX-DOAS data are presented, with a specific focus on the diurnal variation.
Emissions of trace gases and aerosol from vegetation fires affect atmospheric composition and hence air quality. Modeling the dynamical evolution of smoke plumes requires an accurate description of the emission strength and plume injection altitude, while during the chemical evolution a multitude of trace gases play a role, resulting for instance in ozone and secondary organic aerosol formation. Within smoke plumes, the chemical regime is modified compared to the background: enhanced NOx and aerosol levels cause increases in heterogeneous reaction rates, together with reductions in photolysis rates.
Representing these aspects in global atmospheric chemistry and transport models is challenging. Complexities in smoke plume chemistry, limitations in spatial model resolution, and uncertainty in modeling the plume evolution result in uncertainties in the fate of key quantities such as CO, ozone, NO2 and aerosol loading.
Observations of trace gases and aerosol from the Sentinel-5p instrument enable an unprecedented evaluation and analysis of the processes at play in such smoke plumes, providing constraints on enhancements in trace gases (NO2, CO, HCHO) and aerosol properties, including the absorbing aerosol index (AAI) and aerosol layer height (ALH).
In this contribution we show how Sentinel-5p observations can help to constrain the modeling of fire trace gas and aerosol emissions and their evolution, as investigated in the Sense4Fire project. For this we select a case study of large fires over Siberia during summer 2020, using the CAMS global chemistry and aerosol model together with the GFAS fire emissions. The modeled aerosol plume altitude is compared against retrievals of ALH, which allows an assessment of the performance of different injection height parameterizations. Furthermore, discrepancies between observed and modeled NO2 are analyzed, as well as the sensitivity to model resolution, model chemistry, and choices in observation filtering. As such, this analysis helps to identify the key contributors to the model uncertainty of smoke plumes.
In summary, this study provides a critical review of the sensitivities that affect the model-observation biases, which is highly relevant when assessing, and optimizing, trace gas and aerosol emissions from fires using S-5p observations.
The strong economic development in China, which started at the end of the previous century, and the associated urbanization and socio-economic development have led to an increase in the emissions of aerosols and trace gases, including aerosol precursors such as SO2, NO2 and volatile organic compounds (VOCs). These increases were clearly observed in satellite observations of the aerosol optical depth (AOD) (e.g., de Leeuw et al., 2018; Sogacheva et al., 2018), using data from the Along Track Scanning Radiometer (ATSR), the Advanced ATSR (AATSR) and the MODerate resolution Imaging Spectroradiometer (MODIS), and in the emissions derived from total column vertical densities (TVCDs) of SO2 and NO2 (e.g., van der A et al., 2017), using data from the Ozone Monitoring Instrument (OMI). A decline of the SO2 emissions was observed in the data from 2007 and of the NO2 emissions from 2012, in response to emission reductions following the implementation of a series of clean air action plans (van der A et al., 2017). Likewise, the AOD was observed to fluctuate between 2007 and 2011, and decreasing tendencies were calculated by Sogacheva et al. (2018). However, recent studies, using OMI and TROPOMI data over 11 selected areas across south-eastern China, show that the downward trends in NO2 tropospheric vertical column densities flattened in recent years, with different behavior north and south of the Yangtze River (Fan et al., 2021). Considering that NO2 is a precursor for secondary formation of aerosols, the observation of the flattening of the NO2 trend inspired us to investigate the behavior of the AOD for two areas, around Shanghai and Zhengzhou (de Leeuw et al., 2021a). This study was further expanded across south-eastern China by de Leeuw et al. (2021b), including 21 provinces, with the AOD averaged per province. Preliminary results of this study show an overall increase of the AOD until 2007, with anomalies over certain regions. After 2007, however, the AOD fluctuated until 2014, in response to emission reduction measures implemented during the 11th and 12th 5-year plans and the worldwide crisis in 2008, with increases in response to economic development thereafter, as well as climatic and economic factors. Hence, the decreasing trends starting from either 2007 or 2011, as reported in several studies on AOD trends over south-eastern China, are not valid on the provincial scale. Only after 2014, in response to the implementation of the Clean Air Action Plan (2013-2017), did the AOD decrease significantly, until about 2018. Likewise, PM2.5 concentrations decreased significantly between 2013 and 2018, as evident from several studies. However, like the NO2 TVCDs, the AOD trends flattened, and values during the period 2017-2020 fluctuated within about 10% of the average AOD during these years. This average value was close to the AOD in 2000, smaller over central China and somewhat higher over Guangxi, Guangdong, Jiangxi and part of Shandong. The reasons for this flattening may be anthropogenic, i.e. changes in emissions, or meteorological, i.e. conditions resulting in an increase of aerosol concentrations. To investigate the reasons for the flattening of the AOD, PM2.5 and NO2 trends, model simulations are made to discriminate between natural and anthropogenic influences, following methods developed in Kang et al. (2018). The study is made for different highly populated and industrialized areas in south-eastern China.
Detailed results will be presented.
References
de Leeuw, G., Sogacheva, L., Rodriguez, E., Kourtidis, K., Georgoulias, A. K., Alexandri, G., Amiridis, V., Proestakis, E., Marinou, E., Xue, Y., and van der A, R.: Two decades of satellite observations of AOD over mainland China using ATSR-2, AATSR and MODIS/Terra: data set evaluation and large-scale patterns, Atmos. Chem. Phys., 18, 1573-1592, https://doi.org/10.5194/acp-18-1573-2018, 2018.
de Leeuw, G., van der A, R., Bai, J., Xue, Y., Varotsos, C., Li, Z., Fan, C., Chen, X., Christodoulakis, I., Ding, J., Hou, X., Kouremadas, G., Li, D., Wang, J., Zara, M., Zhang, K., and Zhang, Y.: Air Quality over China, Remote Sens., 13, 3542, https://doi.org/10.3390/rs13173542, 2021a.
de Leeuw, G., Fan, C., and Li, Z.: AOD trends over China during the period from 2000 to 2021, ATMOS 2021, 22-26 November 2021, oral presentation, 2021b.
Fan, C., Li, Z., Li, Y., Dong, J., van der A, R., and de Leeuw, G.: Variability of NO2 concentrations over China and effect on air quality derived from satellite and ground-based observations, Atmos. Chem. Phys., 21, 7723–7748, https://doi.org/10.5194/acp-21-7723-2021, 2021.
Kang, H., Zhu, B., van der A, R. J., Zhu, C., de Leeuw, G., Hou, X., and Gao, J.: Natural and anthropogenic contributions to long-term variations of SO2, NO2, CO, and AOD over East China, Atmos. Res., 215, 284–293, https://doi.org/10.1016/j.atmosres.2018.09.012, 2018.
Satellite-derived NO2 products have been used for emission estimates for over a decade. The increased signal-to-noise ratio and higher spatial resolution of the TROPOspheric Monitoring Instrument (TROPOMI) on the Sentinel-5 Precursor (S-5P) satellite compared to earlier satellites make observations of emissions from cities and large power plants possible. Individual power plant plumes can be seen in single satellite overpasses. TROPOMI-based emission estimates have the potential to allow an independent comparison with, as well as a QA/QC evaluation of, the emissions reported under the European Pollutant Release and Transfer Register / Large Combustion Plants (E-PRTR / LCP) framework. The general conditions needed for successful top-down emission estimates are well known. Nevertheless, some aspects, including simple questions such as for how many power plants in Europe reliable annual NOx emissions can be calculated from TROPOMI, are not yet conclusively answered.
Here, we use re-processed, bias-corrected Sentinel-5P NO2 data (Level-2) from 2018 to 2021 (version 02, where available) to estimate NOx emissions for many LCPs throughout Europe. The selected sites cover different environmental conditions (high to low regional background NO2) and average emissions (medium to very large). Additional data from the fifth-generation atmospheric reanalysis (ERA5) of the European Centre for Medium-Range Weather Forecasts (ECMWF) are used for meteorological information and ozone, and for the conversion of NO2 to NOx. Sensitivity studies will be presented to estimate the effect of 1) the inclusion of the cloud fraction in the calculation of the NO2 photolysis rate and 2) the adaptive choice of the pressure level for which meteorological data are included in the analysis, based on the actual plume direction derived from image processing. NOx lifetime and NOx emission estimates are calculated based on the rotation of the NOx column density for the best-fitting model wind direction and the fit of an Exponentially Modified Gaussian (EMG).
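For reference, the sketch below shows the functional form of such an EMG fit to wind-rotated line densities; the parameter names and synthetic numbers are illustrative. The fitted e-folding distance divided by the wind speed yields the NOx lifetime, and the fitted total amount divided by that lifetime gives the emission rate.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def emg(x, E, x0, sigma, x_e, b):
    """Exponentially modified Gaussian line-density model: total amount E
    released around x0 with Gaussian width sigma, smeared downwind with
    e-folding distance x_e, on top of a background b."""
    arg = (sigma / x_e - (x - x0) / sigma) / np.sqrt(2.0)
    return (E / (2.0 * x_e)) * np.exp(sigma**2 / (2.0 * x_e**2)
                                      - (x - x0) / x_e) * erfc(arg) + b

# Synthetic line densities [molec/m] versus along-wind distance [m]:
x = np.linspace(-50e3, 150e3, 81)
y = emg(x, 2.0e26, 0.0, 8.0e3, 30.0e3, 1.0e18)
popt, _ = curve_fit(emg, x, y, p0=(1e26, 1e3, 9e3, 2.5e4, 0.0))
tau = popt[3] / 7.0        # lifetime [s] for an assumed 7 m/s wind speed
emission = popt[0] / tau   # NOx emission rate [molec/s]
```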
We assess the improvement that can be achieved by using image processing routines that allow the estimation of line densities from curved plumes for individual overpasses, and by a better elimination scheme for interference from NO2 sources in close proximity to the evaluated power plant. Finally, statistics for a simple proxy, i.e., enhanced/background NO2, will be given, which provides additional information on LCP NO2 emission activities at the respective plants over the course of the studied time period.
In order to ensure that products delivered by air quality satellite sensors meet user requirements in terms of accuracy, precision and fitness for purpose, it is essential to develop a robust validation strategy relying on well-established and traceable reference measurements. In this context, the ESA Fiducial Reference Measurements for Ground-Based DOAS Air-Quality Observations (FRM4DOAS) activity aims at further harmonizing Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) measurements. Since it provides vertically resolved information on atmospheric gases at a horizontal scale approaching that of nadir backscatter satellite sensors, the ground-based MAX-DOAS technique has been recognized as a valuable source of correlative data for validating space-borne observations of atmospheric species such as NO2, HCHO, SO2 and O3.
Here we present the status of the first near-real-time (24 h latency) central processing service for MAX-DOAS instruments, developed in the framework of the FRM4DOAS activity. It includes state-of-the-art retrieval algorithms and is operated as part of the Network for the Detection of Atmospheric Composition Change (NDACC). Since November 2020, it has delivered tropospheric NO2 vertical profile and total O3 column data from about 15 stations on a daily basis, both to the NDACC Rapid Delivery and the ESA Validation Data Centre (EVDC) databases. The main aspects of the service will be presented, with a focus on the status of all the baseline FRM4DOAS products (the two products above plus those currently under consolidation, i.e. tropospheric HCHO and stratospheric NO2 vertical profiles). The first results of the new FRM4DOAS-2.0 R&D follow-up project, which started in September 2021 with the main objective of developing additional MAX-DOAS products, will also be discussed.
The NDACC MAX-DOAS central processing service and its future upscaling in terms of stations and data products will ensure that MAX-DOAS observations of FRM quality will be made available for the validation of present and future satellite missions such as the Copernicus atmospheric Sentinels (5P, 4, 5).
The introduction of more stringent NOx emission regulations for ships on the open sea calls for better monitoring of said emissions. We find that TROPOMI, with its superior spatial resolution, is capable of detecting enhanced NO2 over several shipping lanes not visible with other satellite instruments. To optimize the detection capabilities of TROPOMI, we systematically study the effects of sun glint on the trace gas retrieval. We show that columns retrieved over sun glint are reliable and that these scenes have an enhanced sensitivity to low-level NO2, with a 60% increase in averaging kernels. The benefits of sun glint are most prominent under clear-sky situations when sea surface winds are low, but slightly above zero (~2 m/s). Furthermore, we examine the new FRESCO+wide cloud algorithm used in TROPOMI retrievals in v1.4 and v2 and find that the original FRESCO+ cloud pressures are biased high by around 50 hPa, which partly explains the low bias in the official TROPOMI NO2 product for (partially) cloudy scenes. The new FRESCO+wide cloud pressures agree better with independent data from VIIRS and have been used in the official TROPOMI NO2 product since December 2020. In anticipation of the reprocessing, we train an artificial neural network for a column correction of historic NO2 columns to create a consistent TROPOMI dataset for the period 2019-2020. We apply this data set and show NO2 reductions of 25% over major shipping lanes in Europe during the COVID-19 pandemic, and connect these reductions to changes in shipping activity observed from AIS data. To further improve and validate TROPOMI NO2 retrievals over sea and make them usable for emission monitoring, we evaluate new aircraft measurements of the NOx family taken on three summer days in 2021 over a busy shipping lane in the North Sea. We find that shipping NO2 is confined mostly to the lowest 200 m above the sea surface, and that across-plume sizes are significantly smaller than the TROPOMI pixel size. This is in line with results from the plume model PARANOX and should be taken into account for emission monitoring. A comparison of 10 individual aircraft-based vertical NO2 columns to collocated TROPOMI columns over the polluted North Sea shows good correlation between the aircraft and TROPOMI NO2 columns.
The Arctic is experiencing rapid climate change. Increasing temperatures not only reduce the sea-ice extent but are also projected to double the number of lightning flashes by the end of the century. The opening Arctic Ocean can also lead to increased human activities such as shipping and expanded oil and gas production. In addition, the increase in lightning will cause more wildfires. All of the above will give rise to emissions of nitrogen oxides (NOx).
In this study, we track and estimate three years (2019-2021) of Arctic NOx emissions by combining TROPOspheric Monitoring Instrument (TROPOMI) observations, Visible Infrared Imaging Radiometer Suite (VIIRS) data, and Vaisala's Global Lightning Dataset (GLD360). We divide NOx emissions into two categories and estimate them separately: 1) NOx emissions from lightning; 2) surface NOx emissions from all other sources.
Firstly, we calculate NOx emissions from lightning. The continuously overlapping orbits of TROPOMI passing over the Arctic provide unique opportunities for tracking lightning NOx (LNOx) and calculating both the LNOx lifetime and production efficiency. Previous studies focused on LNOx emissions in tropical and mid-latitude regions and estimated the global LNOx production within the range of 2 to 8 Tg N yr-1. This study can add the missing LNOx production at high latitudes.
Specifically, an algorithm for the automatic detection of LNOx regions is developed, based on watershed techniques applied to the NO2 field of the TROPOMI observations. An LNOx region should overlap with GLD360 lightning but not with VIIRS wildfire data. This ensures that the tropospheric lightning nitrogen dioxide (LNO2) column can be determined by subtracting the background NO2 column from the detected NO2 column. Then, the LNO2 production efficiency is calculated by multiplying the LNO2 column by the storm area and dividing by the number of related GLD360 flashes. The total LNOx emissions can be derived from the representative ratio of NO2 to NOx and the LNO2 emissions.
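The per-flash arithmetic described above reduces to a few lines; the NO2-to-NOx scaling factor below is a placeholder for the representative ratio derived in the study.

```python
def lnox_per_flash(detected_vcd, background_vcd, storm_area_m2,
                   n_flashes, nox_over_no2=1.33):
    """LNOx production efficiency: background-corrected LNO2 column
    [molec/m2] times storm area [m2], scaled from NO2 to NOx with an
    assumed ratio, divided by the GLD360 flash count."""
    lno2_total = (detected_vcd - background_vcd) * storm_area_m2
    return lno2_total * nox_over_no2 / n_flashes   # molecules per flash
```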
Finally, we estimate surface NOx emissions based on TROPOMI observations of NO2. A Cloud-Snow Differentiation (CSD) method is applied to obtain more high-precision TROPOMI observations over large boreal snow-covered areas by discriminating snow-covered surfaces from clouds. The derived NOx emissions from power plants, natural gas industries, and soil will play an important role in updating present-day NOx inventories, which rely on a limited number of data sets. This study highlights the potential of TROPOMI as well as future satellite missions for monitoring Arctic NOx emissions.
Tropospheric ozone is a key trace gas in the Earth's atmosphere since it plays a central role in determining the oxidizing capacity of the atmosphere and air quality at local, regional and global scales. The ozone precursors are emitted from different sources: anthropogenic, biogenic and biomass burning. In addition, since complex photochemistry is involved, the amount of ozone formed responds nonlinearly to changes in precursor emissions and is sensitive to variations in air temperature and other climate factors. It is important for policy and decision making to better observe and understand the response of ozone to changes in emissions. In-situ techniques provide accurate measurements, typically at hourly resolution at the surface and weekly resolution for vertical profiles from balloon soundings, but with limited spatial coverage. Satellite observations offer a great potential for filling this spatial observational gap. Recently, a new multispectral approach called IASI+GOME2, combining IASI observations in the infrared (IR) and GOME-2 observations in the ultraviolet (UV), allowed the observation of the daily horizontal distribution of ozone in the lowermost troposphere, defined as the atmospheric layer between the surface and 3 km above sea level (Cuesta et al., 2013; 2018).
In this study, we examine the impact of different sources of ozone precursors on ozone pollution and the daily evolution of a major ozone outbreak over Europe in July 2017, using the multispectral satellite approach IASI+GOME2, a tropospheric chemical reanalysis (TCR-2, Miyazaki et al., 2020) and in situ measurements. The satellite-based approach has shown air-quality-relevant skill in unveiling the daily evolution of lowermost-tropospheric ozone plumes with various sources across Europe. The ozone outbreak initiated over the Iberian Peninsula in mid-July because of temperature-induced increases in biogenic volatile organic compound emissions and collocated large nitrogen oxide emissions under air stagnation conditions. The ozone plume was then transported eastwards and mixed with biogenic and biomass burning emissions over Italy and the Balkan Peninsula. In addition, ozone plumes produced in western Europe and plumes transported from North America were observed over western Europe. These plumes were strongly influenced by anthropogenic emissions while passing over large anthropogenic emission regions. Finally, eastward winds carried these ozone plumes over the north coast of the Black Sea, where photochemical production of ozone was strongly enhanced once again by precursor emissions from agricultural burning after harvesting.
In the atmosphere, glyoxal (CHOCHO) is the simplest and one of the most abundant α-dicarbonyl compounds. It results from the oxidation of non-methane volatile organic compounds (NMVOCs) originating from natural and anthropogenic sources. In the lower atmosphere, it has a rather short lifetime against photooxidation of about 1 to 3 hours. Due to its high solubility, it quickly partitions into cloud droplets and deliquescent aerosols, where it is known to form oligomers leading to secondary organic aerosols (SOA). In order to assess its contribution to air pollution, it is thus necessary that global atmospheric chemistry models properly represent its abundance.
The TROPOspheric Monitoring Instrument (TROPOMI) on board the Sentinel-5 Precursor satellite provides tropospheric glyoxal columns. These columns are generated with an improved version of the BIRA-IASB scientific retrieval algorithm, relying on the Differential Optical Absorption Spectroscopy (DOAS) approach. By combining these retrievals with glyoxal measurements obtained during multiple airborne campaigns using the High Altitude and Long Range Research Aircraft (HALO), we evaluate the capabilities of the ECHAM/MESSy Atmospheric Chemistry (EMAC) model to represent the global glyoxal abundance. When performing simulations using the Mainz Organic Mechanism (MOM) to represent gas-phase chemistry and the standard emissions setup, we find that EMAC tends to overestimate tropospheric glyoxal columns over continental regions close to strong natural and anthropogenic emission sources (e.g., the Amazon Basin, China). At the same time, EMAC tends to underestimate glyoxal columns over tropical oceanic regions.
Here, we perform a series of sensitivity simulations and demonstrate that the modelling bias over continental regions is mainly resolved by reducing biogenic emissions towards the latest estimates and by including detailed aqueous phase chemistry from the Jülich Aqueous-phase Mechanism of Organic Chemistry (JAMOC). By implementing additional glyoxal precursors from oceanic sources, the model bias over the ocean is reduced. Following the more realistic representation of tropospheric glyoxal columns in EMAC, we present a revised tropospheric glyoxal budget.
Uncertainties in reactive nitrogen in the upper troposphere (UT; about 8-12 km altitude) impart errors in the retrieval of column abundances of NO2 from space-based instruments used to constrain surface air quality. Poorly constrained nitrogen oxide (NOx = NO + NO2) levels in the UT also hinder our understanding of tropospheric ozone in the portion of the atmosphere where the warming effect of this greenhouse gas is most potent. Observations to address these issues are limited to sporadic in situ instruments on research and commercial aircraft, susceptible to large biases in the UT. Here we assess our best understanding of reactive nitrogen in the UT from the GEOS-Chem model against a new set of near-global seasonal mean observations of UT NO2, obtained by applying the so-called cloud-slicing technique to total column abundances of NO2 from the TROPOMI instrument. We find that the model underestimates UT NO2 by about 50 % on average and by 60-70 % over the remote ocean. We attribute this large underestimate to lightning emissions of NO, given the limited sensitivity of simulated NO2 to reported errors in reactive nitrogen kinetics, the overwhelming dominance of lightning in the UT, and the rigid and heavily parameterized representation of lightning emissions in models. Among our kinetics tests, UT NO2 shows the greatest sensitivity to the rate of O3 titration by NO, addressing up to 20 % of the underestimate of the TROPOMI observations by the model. We update the model lightning NOx parameterization following increasing evidence of both tighter constraints on lightning NO production efficiency per flash and a mismatch between modelled and observed diurnal cycles of lightning flashes and intensity. We test the hypothesis that NO emissions are more influenced by lightning intensity than by flash count. Compared to the current parameterization, this places more weight on lightning NO emissions in the morning and over the ocean. Results from this update are preliminary, as the increase in lightning emissions in the morning reduces chemical loss of NO2 via photolysis.
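For readers unfamiliar with cloud-slicing, its core operation is sketched below: above-cloud NO2 columns are regressed against cloud-top pressure for an ensemble of nearby cloudy scenes, and the slope converts into an upper-tropospheric mixing ratio. This is a simplified reading of the technique that omits the quality filtering and error treatment applied in practice.

```python
import numpy as np

G = 9.81          # gravitational acceleration [m s-2]
M_AIR = 4.81e-26  # mean mass of an air molecule [kg]

def cloud_sliced_vmr(cloud_top_pressure, above_cloud_vcd):
    """Slope of above-cloud NO2 column [molec/m2] versus cloud-top
    pressure [Pa], converted to a mean mixing ratio [mol/mol] in the
    sampled layer; multiply by 1e12 for pptv."""
    slope, _ = np.polyfit(cloud_top_pressure, above_cloud_vcd, 1)
    return G * M_AIR * slope
```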
Synergistic inversion of hyperspectral ground-based and sunphotometer measurements for the retrieval of gas concentration and aerosol properties using GRASP
Marcos Herreras-Giralda(1,2), Oleg Dubovik(1), David Fuertes(2), Pavel Litvinov(2), Tatyana Lapyonok(1), Anton Lopatin(2), Yevgeny Derimian(1), Rene Preusker(3), Jurgen Fischer(3)
(1) Laboratoire d’Optique Atmospherique, CNRS/University of Lille, Lille, France
(2) GRASP SAS, Lille, France
(3) Institute for Space Sciences, Free University of Berlin, Berlin, Germany
Abstract
Recent developments in the GRASP (Generalized Retrieval of Atmosphere and Surface Properties) code (Dubovik et al., 2021) have extended the capabilities of the algorithm to enable the inclusion of hyperspectral measurements as a possible input. Hitherto, the gaseous absorption in GRASP was limited to simple column-integrated values for the whole channel width. From now on, however, it is possible to account for precise gaseous vertical absorption profiles, which can be integrated following line-by-line (Doppler et al., 2014) or k-distribution (Doppler et al., 2013) methodologies.
This work presents an example framework for the possible usage of hyperspectral ground-based measurements in GRASP. The combination of hyperspectral direct measurements from instruments such as PSR (Gröbner and Kouremeti, 2019), Pandora (Tzortziou et al., 2012) or QASUME (Bais et al., 2003) with the standard AERONET (Aerosol Robotic NETwork; Holben et al., 1998) measurements enables the combined retrieval of already standardized aerosol optical and microphysical properties (particle size distribution, SSA, refractive index, etc.) and the column concentration of gaseous species. The synthetic results presented here focus on the inversion of hyperspectral measurements between 400 and 440 nm, with the spectral width varying from 0.1 to 1.2 nm. It is shown that even under realistic noise conditions, the combination of 50 or more channels within the aforementioned spectral range with the direct and almucantar measurements of the AERONET sunphotometer enables deriving the total column NO2 in addition to the aerosol properties. Satisfactory results have been obtained for the complete climatological range of NO2 concentrations (from 0.01 to 2.0 DU) under any aerosol scenario.
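To illustrate why hyperspectral channels in this window help separate NO2 from aerosol, the toy fit below models a measured optical depth as an NO2 column times its absorption cross section plus a smooth polynomial standing in for the aerosol contribution. This is only a DOAS-like caricature: the actual GRASP inversion retrieves the aerosol microphysics simultaneously rather than through a polynomial, and the numbers are synthetic.

```python
import numpy as np

def fit_no2_column(wl_nm, optical_depth, no2_xs):
    """Least-squares fit of optical depth = N * no2_xs + a quadratic
    polynomial in wavelength; returns the NO2 column N [molec/cm2]
    when no2_xs is given in cm2/molec."""
    X = np.column_stack([no2_xs, np.ones_like(wl_nm), wl_nm, wl_nm**2])
    coef, *_ = np.linalg.lstsq(X, optical_depth, rcond=None)
    return coef[0]

wl = np.linspace(400.0, 440.0, 50)
xs = 5e-19 * (1.0 + 0.3 * np.sin(wl / 3.0))    # mock structured cross section
od = 1.0e16 * xs + 0.2 - 1e-4 * (wl - 420.0)   # synthetic optical depth
print(fit_no2_column(wl, od, xs))              # recovers ~1e16 molec/cm2
```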
Future steps are dedicated to the inclusion of hyperspectral measurements in other spectral ranges, aiming at the retrieval of O2, CO2 or water vapor, among others.
References
Bais, A. F., Blumthaler, M., Gröbner, J., Seckmeyer, G., Webb, A. R., Gorts, P., ... & Wuttke, S. (2003, July). Quality assurance of spectral ultraviolet measurements in Europe through the development of a transportable unit (QASUME). In Ultraviolet Ground-and Space-Based Measurements, Models, and Effects II (Vol. 4896, pp. 232-238). International Society for Optics and Photonics.
Doppler, L., Carbajal-Henken, C., Pelon, J., Ravetta, F., & Fischer, J. (2014). Extension of radiative transfer code MOMO, matrix-operator model to the thermal infrared–Clear air validation by comparison to RTTOV and application to CALIPSO-IIR. Journal of Quantitative Spectroscopy and Radiative Transfer, 144, 49-67.
Doppler, L., Preusker, R., Bennartz, R., & Fischer, J. (2014). k-bin and k-IR: k-distribution methods without correlation approximation for non-fixed instrument response function and extension to the thermal infrared—Applications to satellite remote sensing. Journal of Quantitative Spectroscopy and Radiative Transfer, 133, 382-395.
Dubovik, O., Fuertes, D., Litvinov, P., Lopatin, A., Lapyonok, T., Doubovik, I., ... & Federspiel, C. (2021). A Comprehensive Description of Multi-Term LSM for Applying Multiple a Priori Constraints in Problems of Atmospheric Remote Sensing: GRASP Algorithm, Concept, and Applications. Front. Remote Sens. 2: 706851. doi: 10.3389/frsen.
Gröbner, J., & Kouremeti, N. (2019). The Precision Solar Spectroradiometer (PSR) for direct solar irradiance measurements. Solar Energy, 185, 199-210.
Holben, B. N., Eck, T. F., Slutsker, I. A., Tanre, D., Buis, J. P., Setzer, A., ... & Smirnov, A. (1998). AERONET—A federated instrument network and data archive for aerosol characterization. Remote sensing of environment, 66(1), 1-16.
Tzortziou, M., Herman, J. R., Cede, A., & Abuhassan, N. (2012). High precision, absolute total column ozone measurements from the Pandora spectrometer system: Comparisons with data from a Brewer double monochromator and Aura OMI. Journal of Geophysical Research: Atmospheres, 117(D16).
The main objective of DLR's InPULS project is to develop new user-friendly and innovative Level 2, 3 and 4 products based on Copernicus Sentinel atmospheric data. DLR leads the processing, archiving and provision of S5P L1 and L2 data within ESA's PDGS and MPC projects. InPULS will especially support rapid reprocessing of atmospheric data and provide fast access methods for large data volumes (data cubes) as yielded by the Sentinel missions S5P and, later, S4 and S5. Final products will be available via OGC geo-services. Air pollutants are detrimental to human health and are legislated by EU and national law. Despite continuous efforts to reduce pollution from human activities, limit values are still violated, especially in major urban areas due to road traffic. Therefore, continuous monitoring with high spatial resolution is urgently needed. S5P/TROPOMI provides important information on the tropospheric NO2 distribution at the urban and regional scale. Combined with chemical modelling, realistic area-wide surface concentrations can be derived and forecast for several days. The service will use the large-scale background data of the European Copernicus Atmosphere Monitoring Service (CAMS) and deliver analyses and forecasts of the main pollutants on a 1 km grid covering Germany and neighbouring countries. We describe the data assimilation approach followed and the implementation strategy for the operational processing environment.
Ammonia (NH3) is an atmospheric pollutant mainly emitted by the agricultural sector, which has a serious impact on the environment through eutrophication and acidification of soil and water. It is also a precursor of PM2.5 and therefore has a major effect on public health and climate change.
The difficulty of measuring NH3 in ambient air, in combination with the very large variations of NH3 concentrations in space and time, leads to a current lack of representative observations in the atmosphere. As a contribution to addressing this issue, a mini-DOAS instrument was installed over the Paris city centre at the QUALAIR facility (35 m above ground level) for 21 months starting in December 2019, and was then deployed in a rural landscape (Grignon, France) for 2 months (September-October 2021). The mini-DOAS is a state-of-the-art ground-based open-path instrument based on the Differential Optical Absorption Spectroscopy (DOAS) technique in the UV-visible, enabling the monitoring of hourly NH3 concentrations with high precision and accuracy.
NH3 is one of the key molecules in the IASI portfolio of atmospheric chemistry products, with both a near-real-time processing available and a climate record based on ERA5 temperatures. This study aims at comparing NH3 concentrations measured by IASI with those derived from the mini-DOAS instrument at the two different sites (urban and rural). Although they have different sensitivities and horizontal coverages, the IASI and mini-DOAS NH3 concentrations reveal similar temporal variabilities over the two sites. In Paris, the overall agreement is relatively good (R = 0.69) and seasonally dependent. In spring, the mini-DOAS and IASI show the best agreement (R = 0.72), when both instruments monitor NH3 concentrations coming from the northeast. A comparison of pollution roses from IASI and mini-DOAS observations in Paris confirms that the footprint of the mini-DOAS observations is at the scale of the Parisian region. At the rural site of Grignon, NH3 concentrations are found to be more than two times higher than in the urban region of Paris. Local emissions from agricultural sources (a farm) at this site are likely to drive the observed NH3 variability.
The mini-DOAS dataset will be used in the future to contribute further to the validation of the IASI NH3 product over relevant sources.
While current satellite observations of the atmospheric composition, such as those developed in the Copernicus Sentinel program (Sentinel-5 Precursor and the future Sentinel-4 and -5 missions), are well suited to map air pollutants at global and regional scales, they have limited capabilities to provide information at the urban scale, due to their relatively low spatial and temporal resolutions. In contrast, airborne observations do provide valuable air quality information at high resolution (< 100 m); however, their deployment is complex and expensive and therefore restricted to campaigns providing only snapshots in time. As a result, the monitoring of air quality at local scales is currently based on well-established telemetric in-situ networks, which provide accurate continuous observations of relevant air pollutants but have limited representativeness due to the relatively small number of measurement sites.
As part of a recently awarded ESA project, we propose to build a mountaintop cubesat demonstrator overlooking the city of Innsbruck. In combination with local in-situ measurements, the proposed system will allow monitoring of the air quality in and around Innsbruck at both high temporal and high spatial resolution. The instrument is based on a compact two-dimensional imaging grating spectrometer to be installed on top of the Hafelekar observatory at an altitude of 2334 m a.s.l. From there, it will monitor solar radiances reflected from the city and its surroundings in the spectral range from 270 to 520 nm at a geometric resolution of a few meters. The primary target gas is tropospheric NO2, but the instrument will also be sensitive to other species such as HCHO, SO2 and O3. The overall experiment is meant to be a demonstrator for a cubesat mission that would allow monitoring of tropospheric NO2 columns over a 'target' field of regard of approximately 50x50 km2 (hence compatible with the scale of typical cities) at a ground resolution < 100 m. The poster presents the concept of the experiment and details of the intended instrumental design.
Sentinel-5 Precursor (S-5P), launched on 13 October 2017, is the first mission of the Copernicus Programme dedicated to the monitoring of air quality, climate, ozone and UV radiation. The S-5P characteristics, such as its fine spatial resolution, introduce many new opportunities and challenges, requiring a careful assessment of the quality and validity of the generated data products by comparison with independent measurements and analyses.
While routine validation is performed within the ESA Mission Performance Centre (MPC) based on a limited number of Fiducial Reference Measurements (FRM), additional validation activities, including aerial and ground-based campaigns, are conducted as part of the S-5P Validation Team (S5PVT). The validation activities bring together various teams and instruments to address specific validation requirements and provide a more in-depth, complete insight into the S-5P instrument performance and the fitness for purpose of its data products. The acquired reference data sets make it possible to address product accuracy and precision, spatial and temporal validation requirements, algorithm parameters (a priori profiles, albedo, etc.) and specific requirements, such as the validation of strongly polluted and heterogeneous scenes.
We present a series of decentralized activities that took place in 2021 and continue in 2022 (s5pcampaigns.aeronomie.be), which have been identified to address key priorities for S-5P validation. Cal/val strategies are being developed in this framework that will also be suitable for future atmospheric missions such as Sentinel-5 and Sentinel-4. After providing an overview of the different campaigns and discussing validation strategies, we will focus on an airborne mapping campaign that was conducted over key cities in Belgium to map tropospheric NO2.
The TROPOMI tropospheric NO2 Level-2 product (OFFL v1.03.01; 3.5 km × 7 km at nadir) has been validated over strongly polluted urban regions in Belgium by comparison with coincident high-resolution Airborne Prism EXperiment (APEX) remote sensing observations (∼75 m × 120 m). In the framework of the S-5P validation campaign over Belgium (S5PVAL-BE), the APEX imaging spectrometer was deployed during four mapping flights (26–29 June 2019) over Brussels and the harbour and city of Antwerp, in order to map the horizontal distribution of tropospheric NO2. For each flight, 10 to 20 TROPOMI pixels were fully covered by approximately 2700 to 4000 APEX measurements per TROPOMI pixel. The TROPOMI and APEX NO2 vertical column density (VCD) retrieval schemes are similar in concept. Overall, for the ensemble of the four flights, the standard TROPOMI NO2 VCD product is well correlated (R = 0.92) but biased negatively by −1.2 ± 1.2 × 1015 molec cm−2, or −14% ± 12%, on average with respect to coincident APEX NO2 retrievals. When replacing the coarse 1° TM5 a priori NO2 profiles by NO2 profile shapes from the Copernicus Atmosphere Monitoring Service (CAMS) regional chemistry transport model (CTM) ensemble at 0.1°, R is 0.94 and the slope increases from 0.82 to 0.93. The bias is reduced to −0.1 ± 1.0 × 1015 molec cm−2, or −1.0% ± 12%. The absolute difference is on average 1.3 × 1015 molec cm−2 (16%) and 0.7 × 1015 molec cm−2 (9%) when comparing APEX NO2 VCDs with TM5-MP-based and CAMS-based NO2 VCDs, respectively. Both sets of retrievals are well within the mission accuracy requirement of a maximum bias of 25%–50% for the TROPOMI tropospheric NO2 product for all individually compared pixels.
Additionally, the APEX data set allows the study of TROPOMI subpixel variability and of the impact of signal smoothing due to the finite satellite pixel size, typically coarser than the fine-scale gradients in the urban NO2 field. For a case study in the Antwerp region, the current TROPOMI data underestimate localized enhancements and overestimate background values by approximately 1–2 × 1015 molec cm−2 (10%–20%).
The study demonstrates that the urban/industrial NO2 distribution, and its fine-scale variability, can be mapped accurately based on airborne mapping observations. It provides a unique data set for air quality studies, as well as a set of reference data for the validation of satellite data quality and the quantification of retrieval uncertainties.
Owing to its unprecedented spatial resolution, the Sentinel-5P (S5P) NO2 data product is widely used in air quality (AQ) applications, such as the detection of pollution sources, the quantification of emissions, the evaluation of AQ policies, and the assessment of NO2 concentration reductions resulting from COVID-19 related lockdown measures.
In parallel to operational data production, validation activities are carried out regularly to detect and quantify limitations in NO2 data quality, after which algorithm improvements are developed and implemented in the operational NO2 data processor and in supporting prototype algorithms and non-operational data processors. Validation studies are essential both in highlighting shortcomings and in verifying the expected quality improvement from one data version to another. Among the issues or limitations having affected the Sentinel-5P NO2 data record, one may cite:
(i) A low bias, mostly multiplicative w.r.t. the actual NO2 tropospheric column amount, affecting the operational product up to processor version 1.3 (i.e., up to and including November 2020). As a consequence, NO2 changes (e.g., as reported in COVID-19 lockdown papers) within this period are expected to be reliable when expressed in relative terms (percentage) but not in absolute terms (columnar amount).
(ii) A bias in tropospheric NO2 resulting from the coarse spatial resolution (1 degree x 1 degree) of the TM5-MP modelled vertical NO2 profiles that are input to the operational data processing. This bias is reduced in a scientific data product for Europe (S5P-CAMS), in which high-resolution (0.1 degree x 0.1 degree) profiles from the CAMS regional model ensemble are used.
(iii) The discontinuity in S5P NO2 column values between successive versions, especially between processor versions 1.3 and 1.4. The resulting bias between the two parts of the S5P data record is a major obstacle for trend analysis. A full reprocessing of the S5P data record to obtain a consistent single-version data record is scheduled for mid-2022; however, an intermediate reprocessing covering the period April 2018 to September 2021 was performed at the end of 2021.
To highlight and document the evolution of the NO2 data processing and resulting data record, we validate both the original operational and revised data products (reprocessed S5P and S5P-CAMS) within the framework of the Validation Data Analysis Facility (S5P VDAF) of the Sentinel-5p Mission Performance Centre. Ground-based measurements have been collected from Multi-Axis and Zenith-sky Differential Optical Absorption Spectroscopy instruments (MAX-DOAS and ZSL-DOAS) from the Network for the Detection of Atmospheric Composition Change (NDACC) and from the Pandonia Global Network (PGN) of direct Sun DOAS instruments. We also compare Sentinel-5p data with tropospheric NO2 columns derived from the CAMS regional runs.
As a next step, we provide examples of how the improved data products change air quality assessments that make use of Sentinel-5p data. A first example comes from the Sentinel-based air quality monitoring service developed in the context of the Belgian federal research project LEGO-BEL-AQ, in which TROPOMI-based NO2 maps over Belgium are downscaled to 1 km resolution. We compare these maps produced with the operational processor with those based on the revised S5P NO2 products. A second example consists of estimates of the NO2 concentration reductions resulting from COVID-19 related lockdown measures, initially calculated with the operational data product version 1.3, which we now compare with reduction estimates recalculated from the improved data sets, both in relative and absolute terms.
Cloud shadows are observed by TROPOMI as a result of its high spatial resolution as compared to its predecessors. These shadows contaminate TROPOMI's air quality measurements, because they are generally not taken into account in the models that are used for the retrieval of aerosols and trace gas concentrations.
For the removal of cloud shadow effects from TROPOMI data, and for the analysis or future correction of cloud shadow effects, we recently developed the cloud shadow detection algorithm DARCLOS, which is, as far as we know, the first cloud shadow detection algorithm developed for a spectrometer. DARCLOS raises potential cloud shadow flags (PCSFs) and actual cloud shadow flags (ACSFs). The PCSFs indicate the TROPOMI ground pixels that are potentially affected by cloud shadows, based on a geometric consideration with safety margins. The ACSFs are a refinement of the PCSFs using spectral reflectance information of the PCSF pixels, and indicate the TROPOMI ground pixels that are confidently affected by cloud shadows. In addition, DARCLOS outputs the spectral cloud shadow flag (SCSF), which is a wavelength-dependent cloud shadow flag.
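The geometric consideration behind the PCSF can be pictured as projecting each cloud along the solar beam onto the surface. A minimal sketch of that projection is given below; DARCLOS itself additionally applies safety margins and the spectral refinement steps described above.

```python
import numpy as np

def shadow_offset(cloud_height_m, sza_deg, saa_deg):
    """A cloud at height h casts its shadow displaced horizontally by
    h*tan(SZA), pointing away from the sun along the solar azimuth.
    Returns the (east, north) displacement in metres."""
    d = cloud_height_m * np.tan(np.radians(sza_deg))
    de = -d * np.sin(np.radians(saa_deg))
    dn = -d * np.cos(np.radians(saa_deg))
    return de, dn

# A cloud at 5 km with the sun at 60 degrees SZA due south (SAA = 180):
print(shadow_offset(5000.0, 60.0, 180.0))  # shadow ~8.7 km to the north
```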
Here, we present applications of DARCLOS to air quality measurements of TROPOMI, in particular to the NO2 vertical column and the absorbing aerosol index (AAI). Using the PCSFs of DARCLOS, we show how we remove shadow effects from TROPOMI air quality measurements. Using the ACSFs and SCSFs of DARCLOS, we select shadow pixels for the analysis of the shadow effect on air quality measurements. We compare the detected shadow pixels with the signatures visually found in air quality maps, and we discuss the results obtained with the ACSF and with the SCSF. Finally, we show a first quantification of the cloud-shadow-induced bias on NO2 and the AAI by comparing the measurements in cloud shadows to the measurements outside cloud shadows.
Sulphur dioxide (SO2) emissions are a major problem: SO2 is harmful to the planet, the environment and human health. SO2 is emitted into the atmosphere primarily from the burning of fossil fuels, and governments and policy makers have outlined many regulations to reduce this dangerous gas.
In order to ensure the regulations are effective, a reliable monitoring service is required to detect changes and trends in the SO2 level.
Earth Observation (EO) technologies are well suited to acquiring daily measurements of SO2 across the world, and various satellite-based sensors such as Sentinel-5P observe SO2. However, due to the short lifetime of SO2 in the atmosphere (13 h in summer and 48 h in winter), the background level of SO2 close to the ground surface is very low. Therefore, although Sentinel-5P captures images of the Earth on a daily basis, it does not necessarily observe consistent SO2 concentrations. In addition, the spatial resolution of Sentinel-5P data (3.5 × 5 km2) is not high enough for some specific applications (e.g. infrastructure-level monitoring), and data products with higher spatial resolution are required. There is therefore a strong need for methods that model and map SO2 by incorporating other data sources (e.g. ground-based measurements) along with Sentinel-5P data, to generate enhanced SO2 maps with higher spatial and temporal resolution.
In this project, we are developing a multi-step, satellite-based Artificial Intelligence (AI) approach to estimate daily SO2 levels. The approach will be demonstrated over a test site in Eastern Europe by producing modelled SO2 concentrations on a daily basis at a 1 × 1 km2 spatial resolution.
The end goal of this project is a model that can operationally output SO2 concentrations at a 1 × 1 km2 spatial resolution on a daily basis worldwide. This goal would ultimately be demonstrated by operating a workflow that uses AI to analyse EO and additional non-EO datasets and automatically generate the SO2 concentrations.
Climate change is one of the biggest challenges for agriculture in West Africa. Traditional agriculture, based on the rainy season, is threatened by rising temperatures and increasingly variable precipitation. Due to climate change, the normally regular onset of the rainy season is shifting, and the growing season is becoming shorter and is increasingly interrupted by dry spells; crop losses and crop failures are the consequences. EO has the potential to identify and quantify the impact of precipitation anomalies on agricultural areas and to identify spatial and temporal patterns in the areas affected by precipitation deficits. From these patterns, trends can be derived and future agricultural use can be evaluated. In addition, vulnerable areas can be identified and adaptation measures can be taken in order to secure future harvests. In this study, agricultural productivity and the impact of precipitation anomalies are investigated in Burkina Faso by a combination of EO and climate data. To this end, LAI (Leaf Area Index) data are derived from multi-temporal Sentinel-2 data and phenological time series are evaluated with regard to the effects of precipitation deficits. Crop failure can be caused not only by drought but also by heavy precipitation events, erosion, pests or mismanagement. Therefore, additional climate data on the occurrence of drought events are analyzed in order to clearly attribute a possible decline in productivity to a precipitation deficit. Not only the length of drought events is considered, but also their severity. For this purpose, the Standardized Precipitation Evapotranspiration Index (SPEI) is calculated, which includes not only precipitation and temperature data but also evapotranspiration. SPEI is calculated for the period 1979 to present using ERA5 data and, in addition, two CORDEX climate scenarios (RCP 4.5 and RCP 8.5) are evaluated in terms of drought events until 2050. The results show that drought events are increasing countrywide in both length and severity. Evaluations of the Sentinel-2 derived LAI on agricultural areas show that crop productivity is significantly decreasing within areas affected by drought events. Based on these results, recommendations for policy makers and NGOs can be formulated to initiate early adaptation measures in the affected areas and minimize future crop losses due to precipitation deficits. The study shows that the combination of EO data and climate data can successfully identify agricultural areas affected by drought events and provide a potential assessment for future agricultural activities.
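As a rough illustration of the drought index used here, the following Python sketch computes an SPEI-like series from monthly precipitation and potential evapotranspiration for a single grid cell, following the log-logistic standardization introduced by Vicente-Serrano et al. (2010); the aggregation scale and fitting details are simplified assumptions, not the exact processing chain of this study.

```python
# Minimal SPEI-style sketch for one grid cell, assuming monthly
# precipitation (p) and potential evapotranspiration (pet) in mm
# have already been extracted from ERA5 / CORDEX fields.
import numpy as np
from scipy import stats

def spei(p, pet, scale=3):
    """SPEI-like index from monthly P and PET, aggregated over `scale` months."""
    d = np.asarray(p) - np.asarray(pet)            # climatic water balance
    d_acc = np.convolve(d, np.ones(scale), mode="valid")  # running sum
    shifted = d_acc - d_acc.min() + 1.0            # ensure positive support
    # Fit a log-logistic distribution and map cumulative probabilities
    # to a standard normal variable.
    c, loc, sc = stats.fisk.fit(shifted, floc=0)
    prob = stats.fisk.cdf(shifted, c, loc=loc, scale=sc)
    return stats.norm.ppf(np.clip(prob, 1e-6, 1 - 1e-6))
```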
FRESCO has been developed to retrieve cloud parameters from satellite spectrometers. FRESCO data are mainly used to correct cloud effects on trace gas retrievals, and to filter clouds in trace gas and aerosol retrievals. The FRESCO algorithm has been updated and implemented in the GOME-2 level 1 data processor (PPF) at EUMETSAT. The latest development is the FRESCO for Sentinels (FRESCO-S).
There are significant changes in the latest implementation of FRESCO for S5P/S5. The main reasons for the changes are the high spectral resolution (narrower instrument spectral response function) and the variation of the actual wavelength grid of S5P. FRESCO-S products include effective cloud fraction, cloud height (cloud pressure), scene albedo, and scene pressure. These parameters can be retrieved from the O2 A and B bands, with only the O2 A-band currently used for TROPOMI; it is also possible to run FRESCO-S for the O2 B band if a proper configuration file is used. The retrieval algorithm has been changed from Levenberg-Marquardt to optimal estimation. The Directional Lambertian Equivalent Reflectance (DLER) derived from TROPOMI has been implemented in FRESCO-S for TROPOMI (GOME-2 DLER for S5), instead of the GOME-2 LER (used in TROPOMI processor v1.4 and earlier). Another improvement concerns the a priori values for the cloud parameters: whereas FRESCO-S previously used fixed a priori values for the scene and cloud parameters, the new version uses the retrieved scene parameters as a priori values for the cloud parameters.
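To make the algorithmic change concrete, the following is a minimal, generic sketch of one Gauss-Newton step of an optimal estimation retrieval in the sense of Rodgers (2000); it is illustrative only and not the actual FRESCO-S implementation.

```python
# Illustrative Gauss-Newton step of an optimal estimation retrieval;
# the state vector x could hold e.g. effective cloud fraction and
# cloud pressure. Not the operational FRESCO-S code.
import numpy as np

def oe_step(x, xa, y, forward, jacobian, Sa, Se):
    """One iteration towards the maximum a posteriori state estimate."""
    K = jacobian(x)                        # forward-model Jacobian at x
    Sa_inv, Se_inv = np.linalg.inv(Sa), np.linalg.inv(Se)
    A = K.T @ Se_inv @ K + Sa_inv          # inverse posterior covariance
    b = K.T @ Se_inv @ (y - forward(x)) - Sa_inv @ (x - xa)
    return x + np.linalg.solve(A, b)
```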
The TROPOMI DLER (also the LER) has a spatial resolution of 0.1 degree, which is much higher than that of the GOME-2 DLER/LER (0.25 degree). Note that the DLER is only available over land; over ocean the DLER is the same as the LER. The effective cloud fraction product retrieved using the TROPOMI DLER shows fewer surface features, especially for cloud-free scenes. The cross-track biases in the cloud pressure and effective cloud fraction products are reduced after using the TROPOMI DLER. The improvements of the FRESCO-S cloud parameters due to the DLER are mainly over land for partly cloudy pixels. In the presentation we will show the latest developments of FRESCO-S and some results.
Remote sensing of atmospheric trace gases yields valuable information about the chemical composition of our atmosphere. Especially in urban areas, where poor air quality is still a public health concern, there is a demand for trace gas mapping with high spatio-temporal resolution. Monitoring of air pollution on a global scale can be achieved using satellite instruments such as TROPOMI on board the S5P satellite. These instruments measure sunlight reflected from the surface of the Earth with spectral resolutions in the sub-nanometer region, a pixel size of 5.5 km × 3.5 km and a repetition time of one day in (near) nadir geometry. The recorded spectra are evaluated using Differential Optical Absorption Spectroscopy (DOAS), which quantifies trace gases in the FOV of the satellite by fitting the absorption cross sections of the target gases to the measured differential absorption structures.
Reconstructing trace gas profiles on the basis of such satellite measurements is typically phrased as an inverse problem: a set of hidden variables x (here, the NO₂ concentrations at fixed altitudes) is turned into an observation y (here, a NO₂ VCD) under an observation operator H. How can the hidden variables x that have generated a new observation y be determined, i.e. what is H⁻¹(y)?
Neural Networks have proven to be very powerful function approximators. Given a large enough set of training data, a Neural Network can be trained to learn an inversion model F⁻¹ ≅ H⁻¹. Such a Machine Learning based approach could capture extremely complex relationships between hidden variables and observations. This could greatly improve both the prediction accuracy and certainty of an inversion model. Furthermore, it would allow combining satellite measurements of the target trace gas VCD with further observations of atmospheric variables (temperature profiles, other trace gas columns, boundary layer height, ground reflectance, etc.) in order to further increase the performance of the inverse model. The inclusion of such additional observations also helps to overcome a typical problem that is encountered when solving inverse problems with explicit learning approaches. Standard network topologies require explicit learning of the inverse process, e.g. by optimization of a loss function L(x, F⁻¹(y)) on the true hidden variables x and predictions F⁻¹(y). By learning the inverse process explicitly, degeneracies of H remain unresolved in the inversion model F⁻¹. That means that if two state variables are mapped to the same observation under H, the Neural Network will typically map that observation to either a weighted mean or only a single element of its preimage ("mode collapse"). Including additional observations of the atmosphere allows the Neural Network to distinguish NO₂ concentration profiles with identical VCD based on the differences in the remaining observations.
We propose a new inversion model for tropospheric NO₂ concentration profiles on the basis of TROPOMI observations and additional measurements of descriptive atmospheric variables (temperature profiles, other trace gas columns, boundary layer height, ground reflectance, etc.) using Feed-Forward Neural Networks. A training set is generated by forward-modelling the state of the atmosphere using WRF-Chem. The Neural Network achieves relative prediction errors of < 20% for altitudes of up to 1.5 km. We present a validation study, in which ground NO₂ concentrations measured by stationary instruments in Germany are compared to ground concentrations extracted from the NO₂ profiles of the Neural Network using real TROPOMI observations as input.
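As an illustration of the proposed architecture, the sketch below sets up a generic feed-forward network in PyTorch; the input dimension, layer sizes and output grid are hypothetical placeholders, not the configuration used in this work.

```python
# Minimal feed-forward inversion network sketch; all sizes illustrative.
import torch
import torch.nn as nn

n_inputs = 16   # e.g. NO2 VCD plus auxiliary observations (BLH, T, albedo, ...)
n_levels = 25   # NO2 concentrations at fixed altitudes

model = nn.Sequential(
    nn.Linear(n_inputs, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_levels),
)
loss_fn = nn.MSELoss()                 # L(x, F^-1(y)) on the true profiles x
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(y_obs, x_true):
    """One explicit-learning step of the inverse model F^-1."""
    optim.zero_grad()
    loss = loss_fn(model(y_obs), x_true)
    loss.backward()
    optim.step()
    return loss.item()
```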
If you wanted to launch a satellite delivering data in Near-Real-Time (NRT), you would need to ensure that the whole processing system is ready before the actual launch. But how can one ensure that a complicated algorithm is infallible? Our answer for Sentinel-5 is cross-verification.
The Copernicus Sentinel-5 (S5) will be launched on board the EUMETSAT Polar System-Second Generation A (EPS-SG A) satellites. The first of the three planned satellites will be launched in 2024, which makes it a challenge not to miss any important details.
Indeed, many details stem from the number of products. S5 inherits strongly from the Sentinel-5 Precursor (S5P), but glyoxal (OCHCHO), aerosol optical depth, surface reflectance and a cloud mask from the METimage instrument (also on board the EPS-SG) will be included as well, and the methane algorithm will additionally contain a proxy-based retrieval. Therefore, the data need extensive cross-verification.
At the beginning, the Algorithm Developers create a theoretical baseline. They also work on the first version of the code, called the breadboard. The breadboard does not need to be optimised to comply with the stringent time requirement of NRT processing, but it is scientifically correct. Later on, an industry contractor creates a Prototype Processor (PP). The PP already includes some optimisation and parallelisation. The PP outputs need to be verified against the breadboard outputs. In the next step, EUMETSAT writes the specification and another contractor creates an Operational Processor (OP), which will be integrated into the whole ground system of the EPS-SG.
This is where the cross-verification takes place. We will outline the reality of Level-2 processing development, where we deal with non-compliant formats, where we need to understand the algorithms of all the products so that we can trace back any inconsistency, and where we set up the acceptance criteria for the cross-verification. These activities could also serve as lessons learnt for the upcoming Calibration and Validation (Cal/Val).
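To give a flavour of what such an acceptance criterion can look like in practice, the following is a hedged Python sketch comparing one variable of two processor output files within numerical tolerances; the file contents, variable name and thresholds are hypothetical.

```python
# Hedged sketch of a numerical acceptance check between two processor
# outputs (e.g. PP vs. breadboard); names and tolerances are hypothetical.
import numpy as np
import netCDF4

def cross_verify(file_a, file_b, var="no2_total_column", rtol=1e-5, atol=1e-8):
    with netCDF4.Dataset(file_a) as a, netCDF4.Dataset(file_b) as b:
        da = np.ma.filled(a[var][:], np.nan)
        db = np.ma.filled(b[var][:], np.nan)
    ok = np.isclose(da, db, rtol=rtol, atol=atol, equal_nan=True)
    frac = ok.mean()
    print(f"{var}: {100 * frac:.3f}% of pixels within tolerance")
    return frac >= 0.999   # e.g. accept if 99.9% of pixels agree
```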
This study focuses on improvements in the tropospheric NO2 retrieval over Europe from TROPOMI measurements. Here, we present an overview of the DLR NO2 retrieval algorithm and validation with ground-based measurements. Furthermore, the use of TROPOMI tropospheric NO2 columns for air quality purposes in Europe will be discussed.
The DLR NO2 retrieval algorithm for TROPOMI consists mainly of three steps: (1) the spectral fitting of the slant column based on the differential optical absorption spectroscopy (DOAS) method, (2) the separation of stratospheric and tropospheric contributions, and (3) the conversion of the slant column to a vertical column using an air mass factor (AMF) calculation. To calculate the NO2 slant columns, a 405-465 nm fitting window is applied in the DOAS fit for consistency with other NO2 retrievals from OMI and TROPOMI. Absorption cross sections of interfering species and a linear intensity offset correction are applied. The stratospheric NO2 columns are estimated using the directionally dependent STRatospheric Estimation Algorithm from Mainz (DSTREAM) method to correct for the dependency of the stratospheric NO2 on the viewing geometry. For the AMF computation, the climatological OMI surface albedo database is replaced by the geometry-dependent effective Lambertian equivalent reflectivity (GE_LER) and the directionally dependent LER (DLER) data obtained from TROPOMI measurements. As surface albedo is an important parameter for the accurate retrieval of trace gas columns, the effect of surface albedo on the TROPOMI NO2 retrieval is investigated by comparing results obtained with different surface albedo datasets. Mesoscale-resolution a priori NO2 profiles are obtained from the regional chemistry transport models POLYPHEMUS/DLR and LOTOS-EUROS. Based on the latest TROPOMI operational cloud parameters, a more realistic cloud treatment is provided by a clouds-as-layers (CAL) model, which treats the clouds as uniform layers of water droplets, instead of the clouds-as-reflecting-boundaries (CRB) model, in which clouds are simplified as Lambertian reflectors.
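For concreteness, steps (2) and (3) reduce, in their simplest form, to a subtraction and a division, as in the following sketch (the numbers are illustrative, not DLR processor values):

```python
# Minimal sketch of steps (2)-(3): subtract the stratospheric slant column
# and convert to a tropospheric vertical column with the tropospheric AMF.

def tropospheric_vcd(scd_total, scd_strat, amf_trop):
    """Tropospheric NO2 VCD [molec cm-2] from slant columns and AMF."""
    return (scd_total - scd_strat) / amf_trop

# Illustrative values only:
vcd = tropospheric_vcd(scd_total=8.5e15, scd_strat=2.8e15, amf_trop=1.3)
```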
Validation of the TROPOMI tropospheric NO2 columns is performed through comparisons with ground-based MAX-DOAS measurements at nine European stations with urban/suburban conditions. The improved DLR tropospheric NO2 product shows a similar seasonal variation and good agreement with the MAX-DOAS measurements. In particular, applying a priori NO2 profiles from a regional model with high spatial resolution and a recent emission inventory reduces the underestimation of TROPOMI tropospheric NO2 columns in polluted urban areas.
Wildfires are part of the natural carbon cycling mechanism and contribute to the sustainability of terrestrial biomes. However, forest fires can also be initiated by anthropogenic activities. In both cases, natural and human-driven fires emit into the atmosphere large amounts of pollutant gases, like carbon monoxide (CO) and nitrogen oxides (NOx), or particles, like black and organic carbon (BC and OC). CO is one of the main pollutants of the atmosphere, playing a major role in atmospheric chemistry and air pollution by impacting the concentrations of atmospheric oxidants, thereby affecting methane's (CH4) chemical sink and CH4 concentrations. CO also contributes to ozone (O3) formation, provided enough NOx is available.
Every summer, the Mediterranean region is subject to a large number of fires that vary in strength, air quality degradation, economic impacts and losses of human lives. Roughly 600 to 800 thousand hectares are burnt every year in the Mediterranean region (WWF, 2004), an area equal to about 1.5% of the total Mediterranean forests. Fire events can be observed from space, both in terms of the geolocation of hot surfaces seen by the MODIS instrument onboard the TERRA and AQUA satellites and in terms of their chemical footprint, denoted by the enhancement of NOx (as nitrogen dioxide, NO2), CO and absorbing aerosol (BC and OC) seen, for instance, by the TROPOMI instrument onboard the Sentinel 5 Precursor (S5P) satellite.
In July 2018, one of the deadliest fires of the last decade took place in Greece, with dozens of human casualties and large burned areas. Two fires occurred in the Attica region, at Kineta (37°58′N, 23°12′E) and Mati (38°02′N, 23°59′E); they started on 23 July and lasted until 26 July. During these fires, large quantities of CO were released into the atmosphere. In the present study, a data assimilation method is used to evaluate the biomass burning emissions over Greece during the summer of 2018.
For these simulations, we apply the widely used Weather Research and Forecasting model WRFv4.3 with the simplified GreenHouse Gas (GHG) chemical scheme, which calculates CO2, CH4 and CO in the atmosphere, coupled with the CarbonTracker Data Assimilation Shell (CTDAS). Biomass burning emissions from the Fire INventory from NCAR (FINNv2.4) are used as a priori emissions, and satellite data from TROPOMI/Sentinel-5P are assimilated in the model. The simulation domain covers the East Mediterranean at a resolution of 36 × 36 km, with a nest over Greece at a 12 × 12 km resolution. Chemical boundary conditions for the large model domain are taken from the global model TM5. ERA5 ECMWF meteorological fields are used as boundary conditions, and the period between June and August 2018 is assimilated using an assimilation window of one week.
Acknowledgments. This work is funded by the Action "National Network on Climate Change and its Impacts (CLIMPACT)", implemented under the project "Infrastructure of national research networks in the fields of Precision Medicine, Quantum Technology and Climate Change", funded by the Public Investment Program of Greece, General Secretariat for Research and Technology/Ministry of Development and Investments; and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (University Allowance, EXC 2077, University of Bremen). The computations and simulations were performed on the HPC cluster Aether at the University of Bremen, financed by the DFG within the scope of the Excellence Initiative.
Recent years have seen important breakthroughs in the detection from space of point sources of atmospheric pollutants, such as nitrogen dioxide (NO2), sulfur dioxide (SO2), and ammonia (NH3). These have been achieved owing to the high spatial sampling of nadir-viewing satellite sounders, combined with the implementation of oversampling techniques that increase the spatial resolution of satellite data beyond the native resolution of the sounder. In this context, ethylene (ethene, C2H4), a high-yield precursor of formaldehyde and tropospheric ozone, shows up as a complementary short-lived tracer for tracking down point sources of reactive carbonaceous pollutants.
Emitted by fires and produced by plants, ethylene also emanates from incomplete combustion of biofuels and leakage from industrial processes. It is indeed a key compound in the modern chemical industry as it serves as the main building block of numerous products, including polymers, plastics, and a large suite of chemicals. Along with propylene, ethylene is the most abundant industrially produced organic compound, with a global production of 150-180 Mt yr-1.
The spatially dense measurements from the Infrared Atmospheric Sounding Interferometer (IASI), embarked on the polar-orbiting, Sun-synchronous Metop satellite platforms, offer the opportunity to monitor ethylene at the global scale. However, its column abundance is challenging to retrieve owing to its weak absorption in the thermal infrared. Here, we detect and quantify the signal strength of ethylene in the IASI spectra by means of a sensitive hyperspectral range index (HRI) that exploits all the channels in which ethylene absorbs. We then use an artificial neural network to convert this HRI to a gas vertical abundance.
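For illustration, a generic HRI can be written as the projection of the observed spectrum onto the target-gas Jacobian, whitened by the covariance of gas-free background spectra; the sketch below assumes this standard formulation and is not necessarily the exact operational implementation.

```python
# Hedged sketch of a hyperspectral range index (HRI):
# HRI = K^T S^-1 (y - y_mean) / sqrt(K^T S^-1 K), with S the covariance
# of gas-free background spectra and K the target-gas spectral Jacobian.
import numpy as np

def hri(y, y_mean, cov_bg, K):
    """y: observed spectrum; y_mean, cov_bg: background mean/covariance;
    K: spectral Jacobian of the target gas (all on the same channels)."""
    w = np.linalg.solve(cov_bg, K)              # S^-1 K
    return (w @ (y - y_mean)) / np.sqrt(K @ w)  # dimensionless index
```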
We apply to the entire IASI time series of ethylene HRIs a wind-adjusted super-resolution technique that is even more effective at locating the sources of a target emitter than regular oversampling. This allows us to detect point sources of ethylene from space for the first time. These are found to be mainly associated with heavy industries, in particular with petrochemical clusters. In such clusters, where ethylene is produced by steam cracking of petroleum hydrocarbons and natural gas, releases to the atmosphere can occur through gas leakage, flaring, and stack exhaust. Coal-related industries and steel plants are also identified as substantial ethylene emitters, whereas other point sources are associated with megacities. Overall, we have identified and categorized over 300 ethylene hotspots.
Finally, IASI-based fluxes of ethylene are calculated over a selection of hotspots, and compared with the emissions from the state-of-the-art anthropogenic inventory EDGAR v4.3.2. The comparison reveals an important mismatch between the industrial fluxes captured by IASI and the EDGAR emissions of ethylene that are often dominated by non-industrial sectors, such as transportation and domestic heating.
The scheme developed by RAL to retrieve height-resolved ozone data from satellite UV sounders has previously been applied to a series of instruments in sun-synchronous polar orbit (GOME, SCIAMACHY, OMI, GOME-2A and -2B) to produce data for ESA's Climate Change Initiative and the Copernicus Climate Change Service. The capability of this scheme to resolve the surface - 450 hPa layer from higher layers has been exploited in studies of tropospheric ozone. Developments are underway to improve accuracy and stability prior to reprocessing of multi-year data sets to support future studies of climate-composition interaction. The UV scheme has been re-engineered to serve as ESA's prototype processor for Sentinels 4 and 5, which will extend ozone monitoring through the next two decades, and has been initially applied to Sentinel-5P. The extended version of RAL's Infrared and Microwave Sounding scheme retrieves height-resolved ozone profiles along with temperature and water vapour profiles, together with column amounts of several trace gases and aerosols, from IASI, MHS and AMSU on MetOp. Developments are in progress to combine co-located observations by GOME-2 and IASI in the UV/visible and infrared regions, which have different vertical sensitivities, to increase the vertical resolution on ozone in the lower troposphere and produce data that better meet the needs of air quality and atmosphere-biosphere applications. Combined UV-IR approaches could potentially be applicable to ozone retrieval from the new generation of European satellites, MetOp-SG and MTG-S. This presentation will report progress on improving RAL's UV scheme and on combining wavelengths, including comparisons with CAMS and correlative measurements.
The nadir-viewing TROPOMI spectrometer, aboard the S5p satellite since October 2017, observes the Earth with high spatial resolution and daily coverage. We use operational level 2 data of GODFIT total ozone and OCRA/ROCINN CRB (clouds-as-reflecting-boundaries) cloud fraction/height (versions ≤ 2.2.x) to retrieve tropospheric ozone using the convective cloud differential method (CCD) and the cloud slicing algorithm (CSA). We retrieve tropical tropospheric column ozone (TTCO) [DU] using CHORA-CCD (Cloud Height adjusted Ozone Reference Algorithm) and upper tropospheric ozone volume mixing ratios (TTO) [ppbv] using CHOVA-CSA (Cloud Height induced Ozone Variation Analysis). The algorithms are based upon techniques developed by Ziemke et al. (1998, 2001).
Temporal sampling/averaging of cloud/ozone data is no longer necessary due to the large number of daily S5p measurements. For the CHOVA algorithm, data are spatially sampled on a 3° latitude/longitude grid with a 2° step size to retrieve upper tropospheric ozone above 5 km. The retrieval results are used to calculate monthly mean volume mixing ratios in the Pacific sector, to height-adjust the CHORA above-cloud column ozone (ACCO) to the fixed pressure level of 270 hPa.
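For readers unfamiliar with the CCD principle, the following minimal sketch illustrates the idea: the tropospheric column is the clear-sky total ozone minus the above-cloud column ozone derived over deep convective clouds (the thresholds are illustrative, not the CHORA settings).

```python
# Minimal convective cloud differential (CCD) sketch; inputs are
# collocated per-pixel arrays, thresholds are illustrative only.
import numpy as np

def ccd_ttco(total_o3, cloud_frac, cloud_top_pressure):
    clear = cloud_frac < 0.1
    deep_conv = (cloud_frac > 0.8) & (cloud_top_pressure < 300.0)  # hPa
    acco = np.nanmean(total_o3[deep_conv])   # above-cloud (stratospheric) part
    return total_o3[clear] - acco            # tropospheric column [DU]
```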
The original CHORA algorithm has been optimized for TROPOMI where in a post-processing step, the complete ACCO field (latitude, time) is interpolated and smoothed to reduce data gaps and scatter in the daily ACCO vectors. Daily total ozone is averaged for a small grid box size of 0.5° x 0.5° to minimize errors from stratospheric ozone changes. All datasets have been successfully validated against the SHADOZ ozone sonde profiles with low biases (< 5%) and a larger dispersion (< 25%), both within the accuracy requirements.
The new CHORA-LC (Local Cloud) calculates the ACCO in the vicinity of the measurement area instead of the distant Pacific reference sector. The daily ACCO thus changes from a vector in latitude to a matrix in both latitude and longitude. First results show decreases in bias and dispersion in comparison to ozone sonde data. As a first step, the goal of extending the application beyond the tropical belt will be tested.
Part of this work is funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) via the TROPO3-MIDLAT project. The work on TROPOMI/S5P geophysical products is funded by ESA and national contributions from the Netherlands, Germany, Belgium, and Finland. We thank the NASA/GSFC SHADOZ team for providing ozone sonde data.
Traffic emissions remain a large contributor to air pollution, and therefore to public health risks, in European urban areas. The spread and severity of the COVID-19 pandemic prompted decision makers to take measures to reduce contacts between people. The measures resulted in decreased vehicular traffic and diminished traffic-related emissions, particularly of nitrogen dioxide. Several studies have already looked into the reduction in ambient nitrogen dioxide concentrations during the lockdown periods. Despite disparate methodologies (e.g. whether and how the natural variation induced by meteorology is taken into consideration) and heterogeneous ranges, the case for reduced ambient nitrogen dioxide concentrations is evident. Such information can be viewed by policy makers as a large-scale, real-life experiment from which much-needed traffic reduction policies can be designed.
However extensively nitrogen dioxide has been studied, ground-level tropospheric ozone, another pollutant very hazardous to human health, has received comparatively little attention. Its nature as a secondary pollutant, resulting from a complex, non-linear chain of chemical processes with nitrogen oxides and volatile organic compounds at its core, makes it more difficult to draw relationships between changes in ozone concentrations and changes in traffic patterns. Indeed, the magnitude and sign of the change in ozone concentrations will depend not only on the change in nitrogen oxide emissions from traffic, but also on the levels of volatile organic compounds.
Formaldehyde is a volatile organic compound which is an intermediate oxidation product of other volatile organic compounds and can be used as a proxy for their presence. In the present work, we use total formaldehyde columns derived from the Ozone Monitoring Instrument on board the Aura satellite and study the effect of COVID-19-related lockdowns in Europe. Our results are useful for understanding the changes in ozone levels that were observed during the lockdowns and for informing policy design on the effect of traffic-altering policies.
Carbon monoxide in high concentrations can have severe health effects. For this reason, its monitoring is regulated, along with that of several other gases and particles with negative health effects. However, as anthropogenic emissions of carbon monoxide have declined, the need for monitoring has diminished, and no monitoring station in Finland measures it anymore. Therefore, the air quality regarding CO levels needs to be evaluated by other methods.
In this work we have used carbon monoxide measurements from the TROPOMI instrument on board Sentinel-5P. We checked the TROPOMI CO total column mixing ratios against ground-based references in Sodankylä, Finland, where the Arctic Space Centre of the Finnish Meteorological Institute hosts two Fourier Transform Spectrometers: a high-resolution Bruker IFS 125HR and a low-resolution EM27/SUN.
We have used the TROPOMI data to support objective air quality assessment in two different ways. First, we calculated a long-term average over Finland on a 2 × 2 km grid to find the average CO distribution and to recognize areas with possible larger sources. For these areas we could assess the need for continuous monitoring. The long-term average maps were also compared to emission database data.
The second way we utilized satellite observations was to estimate ground level CO concentrations. We used the ground level data from scientific monitoring stations in Helsinki, Sodankylä and Pallas to calculate a simple linear relation between ground level concentrations and the total column concentrations provided by satellite measurements. Using this relation we evaluated yearly average and maximum ground level concentrations for specific monitoring regions.
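In its simplest form, this relation is a least-squares line fitted at the reference stations and then applied elsewhere, as in this sketch (variable contents are placeholders):

```python
# Sketch of the simple linear relation between collocated surface CO
# concentrations and satellite total-column mixing ratios.
import numpy as np

def fit_linear(column_xco, surface_co):
    """Least-squares fit at the reference stations."""
    slope, intercept = np.polyfit(column_xco, surface_co, deg=1)
    return slope, intercept

def surface_from_column(column_xco, slope, intercept):
    """Estimate ground-level CO from satellite columns elsewhere."""
    return slope * np.asarray(column_xco) + intercept
```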
The average concentrations were lower in northern Finland, with its lower population density and less traffic. The long-term average map shared similar features with the EDGAR emission database. However, the average concentrations showed very little variability over Finland, ranging from 86 to 92 ppb. The maximum concentrations can be related to long-range transport from, for example, intense wildfire regions, and thus do not follow any clear pattern.
In the frame of the AQ-WATCH H2020 European project, we initiated the development of the SAMARITAN (Space-bAsed Monitoring of AiR qualITy using mAchine learniNg) software, which aims to estimate surface NO2 concentrations in different regions of the world based on machine learning (ML) models fed with in-situ surface observations combined with a variety of Earth Observation (EO) and non-EO data, including TROPOMI NO2 tropospheric columns, the ERA5 meteorological reanalysis, the CAMSRA global NO2 reanalysis from the Copernicus Atmosphere Monitoring Service (CAMS), the urban fraction from the Copernicus Land Monitoring Service (CLMS) and population data from the Joint Research Centre's (JRC) Global Human Settlement Layer (GHSL) dataset. The use of such sophisticated ML models appears justified by the reasonable although limited correlation between NO2 tropospheric columns observed from space and collocated surface NO2 concentrations.
We applied SAMARITAN to several regions of the world, including Central Chile, northern Italy and Catalonia, and consistent results are found in these different case studies. Besides estimating the expected performance of our ML models with cross-validation, we designed a simple yet intuitive approach for identifying regions where our ML predictions are more doubtful because the environmental conditions are very different from those encountered in the training data. Ultimately, SAMARITAN is able to substantially extend the initial spatial coverage offered by the surface monitoring networks and to capture a large part of the spatio-temporal variability of surface NO2 concentrations. This tool thus offers an interesting data-driven approach for monitoring the NO2 pollution prevailing at the surface, especially in regions where geophysical modelling systems are not mature enough, for instance due to limited information on local emissions.
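As a hedged illustration of the kind of model SAMARITAN relies on, the sketch below trains a random forest on a collocated predictor table and estimates its performance by cross-validation; the feature files and hyperparameters are hypothetical.

```python
# Illustrative ML setup: predictors could be TROPOMI NO2 VCD, ERA5
# meteorology, CAMSRA NO2, urban fraction and population density.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X = np.load("features.npy")      # hypothetical collocated predictor table
y = np.load("surface_no2.npy")   # in-situ surface NO2 [ug m-3]

model = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, n_jobs=-1)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R2: {scores.mean():.2f} +/- {scores.std():.2f}")
```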
Atmospheric pollutants and climate-active species such as black carbon are emitted by gas flaring in several regions of the world. Gas flaring occurs, for example, in the upstream oil and gas extraction industry when the gas present in the underground oil reservoir (the associated petroleum gas) is not recovered for economic, logistical or other reasons. Gas flaring is also practised in other industries, e.g. in refineries, and for other reasons, e.g. as a safety measure.
Gas flaring takes place under very high temperatures (around 1500 K), producing a strong radiative signal in the short wave infrared which can be observed from space by radiometers such as the Sea and Land Surface Temperature Radiometer (SLSTR) on-board the Sentinel-3 satellites.
The first version of the GFlaringS3 dataset, made available to the community at the portal Emissions of atmospheric Compounds and Compilation of Ancillary Data (ECCAD, https://eccad3.sedoo.fr/#GFlaringS3), used nighttime SLSTR short-wave infrared data to detect gas flares worldwide. The other short-wave, mid-wave and thermal infrared channels are used to confirm the detection and to characterize the detected flare in terms of temperature and area, using a dual Planck curve fit. The activity is subsequently computed using a modified version of the Stefan-Boltzmann equation as input to a calibration originally derived for the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor. Black carbon emissions are calculated using emission factors reported in the literature, scaled by the flaring temperature, which is used as a proxy for combustion efficiency.
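To illustrate the dual Planck curve fit on synthetic numbers (channel wavelengths and first guesses are assumptions, not the operational settings):

```python
# Dual Planck fit sketch: pixel radiance in each infrared channel is a
# flare contribution at temperature t_flare filling area fraction p,
# plus background at t_bg. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    """Blackbody spectral radiance [W m-2 sr-1 m-1] at wavelength wl [m]."""
    return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * T)) - 1)

def dual_planck(wl, t_flare, p, t_bg):
    return p * planck(wl, t_flare) + (1 - p) * planck(wl, t_bg)

wl = np.array([1.6e-6, 2.25e-6, 3.7e-6, 10.85e-6])  # SLSTR-like bands [m]
radiance = dual_planck(wl, 1600.0, 5e-5, 290.0)     # synthetic observation
(t_flare, p, t_bg), _ = curve_fit(dual_planck, wl, radiance,
                                  p0=(1500.0, 1e-4, 285.0),
                                  bounds=([600, 0, 200], [2500, 1, 330]))
```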
Finally, the gathered data are projected onto a 0.025° × 0.025° global grid and yearly estimates are produced: minimum, best estimate (scaled latitude-wise for cloud cover and overpass frequency) and maximum. In the present work, we present a dataset updated with methodological improvements.
Combustion releases energy in the form of heat (conduction and convection), and electromagnetic radiation. Some of the radiation emitted by the combustion is registered by the sensor, after being attenuated by the atmosphere. The radiance registered by the sensor can then be expressed as a direct function of the burned fuel mass flow, its heating value and the proportion of energy radiated in the combustion process, attenuated by atmospheric transmittance and combustion efficiency. Spectral radiance at the sensor will also be impacted by the bandwidth (proportion of energy radiated in the sensor spectral band, which is the ratio between the integral below the Planck curve at the band width and the total integral) and the ground sampling area. The proportion of energy radiated in the combustion process and the combustion efficiency are functions of the combustion temperature, atmospheric transmittance is a function of the wavelength, and the proportion of energy radiated in the sensor spectral band is a function of both.
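The paragraph above amounts to a simple multiplicative budget; a schematic transcription follows, with all geometric and unit factors treated as illustrative placeholders rather than the operational formulation.

```python
# Schematic transcription of the radiance budget described above.
def radiance_at_sensor(mass_flow, heating_value, frac_radiated,
                       combustion_eff, transmittance, band_fraction, area):
    """L ~ m_dot * HV * eta(T) * f_rad(T) * tau(lambda) * f_band(T, lambda) / A,
    where f_rad and eta depend on combustion temperature, tau on wavelength,
    and f_band on both; A is the ground sampling area."""
    power = mass_flow * heating_value * combustion_eff   # heat released
    return power * frac_radiated * transmittance * band_fraction / area
```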
Worldwide data on the chemical composition of the associated petroleum gas was gathered from the literature and a global gridded dataset of associated petroleum gas heating values was produced. That information was used, in conjunction with the detected flare temperature, to estimate the combustion efficiency. Atmospheric transmittance was estimated from a simplified climatology. The proportion of combustion energy radiated was assessed using literature values.
The heating value, and thus the adiabatic flame temperature, specific to the geographical location of the flare, was used to evaluate the black carbon emission factor. On the one hand, the adiabatic flame temperature was used as the cap for the maximum black carbon emission factor, instead of a single generic temperature value; on the other hand, it was used as input to two equations derived from laboratory flames and described in the literature. This results in three different modelled global black carbon emission estimates.
The revised methodology results in a refined gas flaring activity and emissions dataset. The updated dataset can be used as emissions input for models and as an information tool for local communities, among other uses.
Smoke emitted from fires, even thousands of kilometres away from the source, can have a significant impact on nearby vegetation. This is particularly the case for vineyards, since the wine produced from smoke-affected grapes will have its taste strongly modified.
That is why we would like to be able to observe the path of smoke after a fire, to predict which crops will be affected. Thankfully, the Copernicus Atmosphere Monitoring Service (CAMS) provides a large amount of atmospheric data extracted from various physical models, satellite data and in-situ observations.
A first study from the French space agency (CNES) showed that specific data from CAMS (which indicate the location of smoke from a selected fire every hour, at ground level but also at several heights above the ground) are useful to determine affected vineyards. The work presented here consists of expanding this proof-of-concept study by developing automatic tools to easily retrieve more data from the CAMS API, visualize it, extract localized data and test the methodology on a large number of fires.
To this end, we used the API provided by CAMS and developed an algorithm that takes as input a file delimiting the crop borders, a time range, and the selected models and parameters available from the service (for example, the surface carbon monoxide concentration estimated by the MOCAGE model), produces various useful visualizations of the data, and finally extracts the data over each crop parcel in the input file. To make it easier to use, a QGIS plugin was also developed to launch the algorithm directly from the QGIS interface.
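A minimal sketch of such a retrieval through the cdsapi client is shown below; the dataset name, request keys and area follow plausible Atmosphere Data Store conventions but are assumptions to be checked against the catalogue, not the exact calls of our tool.

```python
# Hedged sketch of a CAMS data request via the cdsapi client; credentials
# for the Atmosphere Data Store are expected in ~/.cdsapirc.
import cdsapi

c = cdsapi.Client()
c.retrieve(
    "cams-global-atmospheric-composition-forecasts",  # assumed dataset name
    {
        "variable": "total_column_carbon_monoxide",   # assumed variable name
        "date": "2021-08-01/2021-08-07",
        "time": ["00:00", "12:00"],
        "leadtime_hour": "0",
        "type": "forecast",
        "format": "netcdf",
        "area": [46.0, 4.0, 44.0, 6.0],  # N, W, S, E box around the parcels
    },
    "cams_co.nc",
)
```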
A study of the results obtained for a list of recent known fires, from diverse areas of the world and of different intensities, was then completed to identify the best models and atmospheric parameters to retrieve in order to observe the smoke movement. Ultimately, we added observations from the Sentinel-5P and IASI satellites to the comparison and concluded on what to choose depending on the type of fire to study.
This validation work on known fires allowed for a better assessment of the capabilities of CAMS data for smoke observation and confirmed that our work could realistically help farmers to prove and measure the impact of smoke on their crops.
The hydroxyl radical (OH) is one of the most important species in atmospheric chemistry. It dominates the oxidation of many different tropospheric species, including greenhouse gases (e.g. methane), anthropogenic and natural pollutants and ozone-depleting substances. Measuring OH in the troposphere is difficult due to its short lifetime (~1 second in the daytime) and low abundance (global mean OH concentrations are around 1 × 106 molecule cm-3). The current wealth of tropospheric chemistry measurements retrieved from satellites over recent decades provides an opportunity to learn more about the spatial and temporal variation of different species, as well as to indirectly derive information on trace gases such as OH. Other measurements of OH are limited to isolated field campaigns and infrequent aircraft campaigns, so this approach using satellite data provides us with a global observational perspective.
Here, we use a simplified steady state approximation to estimate the OH global distribution. The use of a steady state approximation is suitable due to the short lifetime of OH. OH has a large number of sources and sinks in the troposphere but not all of these species are retrieved by satellite. Therefore, we use a simplified approximation which only considers the dominant tropospheric sources and sinks, including ozone (O3), water vapour (H2O), carbon monoxide (CO) and methane (CH4).
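As a worked illustration of such a steady-state expression (not necessarily the exact formulation of this study), OH production from O(1D) + H2O, driven by ozone photolysis, can be balanced against loss to CO and CH4; all rate constants below are illustrative room-temperature values.

```python
# Illustrative simplified OH steady state; number densities in molecule cm-3.
def oh_steady_state(n_o3, n_h2o, n_co, n_ch4, n_air, j_o1d=2.0e-5):
    k_h2o = 2.1e-10   # O(1D) + H2O          [cm3 s-1], illustrative
    k_q   = 2.9e-11   # O(1D) quenching (air) [cm3 s-1], illustrative
    f_h2o = k_h2o * n_h2o / (k_h2o * n_h2o + k_q * n_air)
    k_co, k_ch4 = 2.4e-13, 6.4e-15   # OH + CO, OH + CH4 [cm3 s-1], illustrative
    production = 2.0 * j_o1d * n_o3 * f_h2o   # two OH per O(1D) + H2O
    return production / (k_co * n_co + k_ch4 * n_ch4)   # [OH] in molecule cm-3
```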
Alongside satellite data (IASI on MetOp), we use a 3D model (TOMCAT) and aircraft campaign data (Atmospheric Tomography Mission, ATom). We find that the satellite-derived OH distribution estimated by this simplified approximation shows good agreement (i.e. within ~20%) with modelled [OH] in certain regions of the atmosphere, specifically the mid-troposphere, around 600-700 hPa. In this region, a good correlation is also found when ATom data are applied to the simplified approximation and compared to ATom OH observations across the campaigns in 2016-2018 (r = 0.78).
Long-term (i.e. 10 years) application of the simplified approximation to IASI in the 600-700 hPa pressure region highlights inferred [OH] anomaly inter-annual variability ranging between -3.1% and +4.4%. In this region there are large positive anomalies in 2010 and 2012/13 and large negative anomalies in 2009, 2011 and 2015/16. The variation in the [OH] source term is dominated by O3 inter-annual variability. CO is the dominant sink species throughout the time period, causing a substantial decrease in inferred [OH] in 2015/16 due to large-scale CO emissions from wildfires during the strong El Niño event in those years.
Biomass burning events are known to emit large amounts of trace gases and aerosols into the atmosphere, which have a range of adverse impacts on human and ecosystem health, as well as on climate. For these reasons, it is important to correctly model the transport of the emitted plumes, and hence knowledge of the plume height is key. The altitude at which fire plumes are transported also impacts the lifetime of the emitted species and therefore the formation of secondary pollutants along transport; in that respect it is a critical input to chemistry transport models. The goal of this work is to obtain precise information on the altitude of fire plumes, using the information contained in the hyperspectral infrared measurements of the IASI nadir-viewing satellite sounder, which has the advantage of providing twice-daily global coverage.
The spectral range where carbon monoxide (CO) absorbs is more specifically exploited. A CO vertical profile is retrieved with the maximum possible resolution, in such a way as to estimate the plume altitude. Two approaches are developed, both relying on the Optimal Estimation Method. The first consists in optimising the partial columns every km, with very weak correlation between layers, to allow a local increase in the concentration. The second method assumes that the CO profile can be represented by a Gaussian on top of the background profile; the altitude and the amplitude of the Gaussian are fitted in the inversion step. The two approaches will be described and compared in terms of retrieved height, using a series of representative cases of transported fire plumes (2019-2020 fires in Australia, 2021 fires in California). The comparison exercise also includes the products from the operational FORLI processing, which provides CO vertical profiles in near-real time. The retrieved altitudes are finally compared with vertically resolved CALIPSO data for validation. We report a good agreement for plumes below 10 km and somewhat larger differences at higher altitudes. The possibility of implementing the method in an operational framework will be discussed.
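The profile parameterization of the second approach can be sketched as follows; the 1 km Gaussian width is an assumption for illustration.

```python
# Gaussian-on-background CO profile used as the forward parameterization;
# the plume altitude and amplitude are the quantities fitted in the
# optimal estimation inversion.
import numpy as np

def plume_profile(z_km, background, amplitude, z_plume, sigma_km=1.0):
    """CO profile: background plus a Gaussian enhancement at z_plume."""
    return background + amplitude * np.exp(-0.5 * ((z_km - z_plume) / sigma_km) ** 2)
```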
The growing fleet of Earth Observation (EO) satellites is capturing unprecedented quantities of information about the concentration and distribution of trace gases in the Earth's atmosphere. Future geostationary instruments such as Sentinel-4 and TEMPO, and current LEO instruments such as Sentinel-5P/TROPOMI and Suomi NPP CrIS, capture or will capture millions of spectra daily. The algorithms required to convert the spectra into trace gas concentrations (known as retrieval algorithms) are computationally intensive, typically requiring from less than a minute up to an hour to generate a trace gas concentration estimate per spectrum. Therefore, the resources required to process the volumes of data being captured by EO satellites are growing exponentially, and it is currently impractical to process all captured spectra in a timely manner. This has led to efforts to simplify, speed up and make more efficient the process of generating trace gas estimates from satellite instrument spectra.
In order to be used in scientific analysis, the retrieved trace gas concentrations must pass a series of quality control criteria, usually unique to each algorithm/instrument. However, it is typically not possible to determine whether a retrieved trace gas quantity is of good or poor quality prior to the completion of the retrieval procedure. This means that a poor-quality retrieval requires an amount of resources equal to that needed for a good-quality retrieval, which represents a significant drain. For example, the CrIS and TROPOMI ozone retrievals currently generated by the Multi-Spectra, Multi-Species, Multi-Sensor (MUSES) algorithm as part of the TRopospheric Ozone and its Precursors from Earth System Sounding (TROPESS) project can expect roughly 20% of daily retrievals to fail the quality criteria, a significant fraction. Why these retrievals fail and not others remains unclear, and further insight into the conditions that cause retrieval failure could help streamline the retrieval process.
In this study, we investigate the use of machine learning techniques to predict failures in the trace gas retrieval process as a pre-screening step, and to further understand the conditions that cause poor-quality retrievals. Spectra could be screened prior to processing and rejected if failure is predicted, requiring a fraction of a second and freeing up significant resources. We use the Tree-Based Pipeline Optimization Tool (TPOT), an automated machine learning package in Python that optimizes the machine learning pipeline. It automates the most laborious aspects of machine learning by exploring thousands of possible combinations of machine learning model (e.g. neural networks or random forests) and preprocessing steps (e.g. dimensionality reduction method) to find the best possible combination for the specific problem.
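A hedged sketch of such a TPOT setup is given below; the feature construction, number of generations and scoring metric are illustrative choices rather than the exact configuration of this study.

```python
# Illustrative TPOT pipeline search for failure pre-screening.
import numpy as np
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split

X = np.load("l1b_features.npy")   # hypothetical features derived from spectra
y = np.load("qc_flag.npy")        # 1 = retrieval failed QC, 0 = passed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
tpot = TPOTClassifier(generations=5, population_size=50,
                      scoring="balanced_accuracy", cv=5, verbosity=2)
tpot.fit(X_tr, y_tr)
print("held-out score:", tpot.score(X_te, y_te))
tpot.export("best_prescreening_pipeline.py")   # export the winning pipeline
```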
A training dataset of ~50000 L1b spectra from CrIS (a Thermal Infrared (TIR) instrument) and TROPOMI (an Ultraviolet-Shortwave Infrared (UV-SWIR) instrument), together with retrievals from the MUSES algorithm, is used to build a TPOT pipeline that can predict the failure or success of ozone retrievals from CrIS and TROPOMI. With this tool we can correctly identify 50% of MUSES ozone retrieval failures, at a cost of 10% false positives. These results suggest high applicability of failure prediction tools to EO satellites, which can potentially reduce processing overheads in the future.
© 2021 California Institute of Technology. US government sponsorship acknowledged.
The upper troposphere - lower stratosphere (UTLS) is a very important region, since climate is strongly sensitive to the atmospheric composition there, while the transport of air from the stratosphere to the troposphere also has an impact on air quality, being a source of variability mainly in tropospheric ozone.
However, the UTLS is difficult to explore from space. Instruments looking through the UTLS in the nadir direction do not have the capability to resolve vertical details, while limb-viewing instruments are limited by the presence of clouds in the upper troposphere.
The Changing-Atmosphere InfraRed Tomography explorer CAIRT, one of the four candidates selected for phase 0 of ESA Earth Explorer 11, could be the first limb sounder with imaging Fourier-transform infrared technology in space. If selected, CAIRT will measure limb radiances from 5 to 115 km altitude with a nominal horizontal sampling of about 50 by 50 km (with possible subsampling for selected science targets) and a vertical sampling of 1 km. The spectral range will cover the region from 720 cm-1 to 2200 cm-1 (4.5 to 14 μm in wavelength), with a spectral resolution of 0.1 cm-1. With a targeted across-track coverage of 500 km, near-global coverage is achieved within about two days. Tomographic retrievals will provide temperature and trace gas profiles at a much higher horizontal resolution and coverage than achieved from space so far, enabling us to address new science questions. Flying in loose formation with the Second Generation Meteorological Operational Satellite (MetOp-SG) will enable combined retrievals with observations by the New Generation Infrared Atmospheric Sounding Interferometer (IASI-NG), as well as from the Sentinel-5 nadir sounder, resulting in consistent atmospheric profile information from the surface to the lower thermosphere. This will open new scientific opportunities to study trace gas transport and transformations in the lower stratosphere and troposphere.
The paper will describe how the synergy of CAIRT, IASI-NG and Sentinel-5 measurements could help to improve the knowledge of the atmospheric composition in this region of the atmosphere. The Complete Data Fusion technique will be used to study the feasibility and improvements coming from the exploitation of this synergy.
Aerosol particles are a key player when it comes to Earth’s climate change (Boucher et al., 2013; Kaufman et al., 2002). They can directly interact with incoming solar radiation through scattering and absorption while they can act as highly effective cloud condensation or ice nuclei, altering cloud properties and lifetime and thus indirectly affecting the radiative equilibrium. Aerosols are also a critical component of air pollution. Aerosol at the Earth’s surface, typically referred to as particulate matter, has profound impacts on human health primarily through inhalation. To better quantify the role of aerosols in the aforementioned processes, accurate time-resolved observations of the aerosol microphysical properties at a global scale are of paramount importance.
Remote sensing has been widely exploited to obtain a global insight into aerosol optical and microphysical properties and, in part, into their chemical composition. These particle properties are associated with the aerosol sources, they drive the interactions of aerosols with clouds and radiation, and they can also be used to identify the probable source types humans are exposed to. In order to better understand the role of aerosols in these processes, we need to better characterize their temporal and spatial distribution. Currently, multi-angular, multi-spectral polarimetry is amongst the most promising aerosol remote sensing approaches. A Multi-Angle Polarimeter (MAP) measures the radiance and degree of polarization of light at multiple viewing angles and spectral bands. Multi-angle polarimetry can provide enough information to accurately retrieve aerosol properties such as size distribution, particle shape and refractive index (Dubovik et al., 2019).
Herein, we derive the aerosol optical and microphysical properties using both a ground-based and an airborne version of a Compact MAP instrument (C-MAP). The C-MAP is currently being developed by Thales Alenia Space-UK, in collaboration with the University of Leicester. The C-MAP design builds upon the heritage of the upcoming MAP sensor on board the CO2M mission (Sierk et al., 2021; Spilling et al., 2021), also developed by TAS-UK. The instrument will provide radiance (I) and degree of linear polarization (DoLP) in 7 measurement wavelength bands (410, 443, 490, 555, 670, 753 and 865 nm) and at 5 different viewing angles (0, ±15 and ±40°).
For deriving the optical and microphysical properties of the aerosols, we use the Generalized Retrieval of Atmosphere and Surface Properties (GRASP) algorithm. GRASP is a highly versatile algorithm, developed in such a way that it can facilitate the aerosol and surface property retrievals from various passive or active, space-borne, aircraft or ground-based remote sensing sensors (Dubovik et al., 2011).
We use simulated cases, with synthetic scenes of different aerosol content (shape, size and composition), solar zenith angle and surface albedo. The aerosol particle size distribution and the complex refractive index m of the particles are derived from the AERONET climatology for different aerosol species (e.g. "Dust", "Maritime", "Urban Pollution", "Biomass Burning"; Dubovik et al., 2002). The fine mode contains only spherical particles, while the coarse mode contains a mixture of spheres and spheroids (Dubovik et al., 2006).
We present the C-MAP retrieval sensitivity to solar zenith angle, instrument viewing angle, aerosol content and surface parameters.
The C-MAP will perform ground-based and airborne measurements during the demonstrator flights in the UK in late 2022. The latter will help to address technological challenges and support the retrieval algorithm development and testing. C-MAP is envisaged to provide enhanced capability for aerosol retrievals, in synergy with remote sensing sensors on board the same mission, or even as a stand-alone sensor. Being a compact polarimeter, C-MAP will potentially be a strong candidate for deployment on smaller space platforms (cube- or microsatellites) for localised aerosol retrievals (e.g. over cities).
Groundwater is an essential resource for irrigation in arid and semi-arid areas. Its monitoring is traditionally achieved with networks of localized observation wells. The problem is that current monitoring networks are generally sparse, whereas groundwater levels tend to decline in many irrigated areas around the world.
Since 2002, the GRACE (Gravity Recovery and Climate Experiment) and GRACE-FO missions have provided monthly anomalies of total water storage (TWS). These data are very relevant to study the evolution of groundwater stocks at global and regional scale given that the surface stocks can be estimated and subtracted. However, the use of GRACE data for irrigation groundwater management is limited by its coarse resolution (≈ 400 km). The last decade has thus seen numerous attempts to downscale GRACE TWS data and to produce time series of groundwater storage (GWS) data distributed at higher (typically several tens of km) resolution. Downscaling methods are based either on the assimilation of GRACE TWS data into distributed hydrological models (physically-based downscaling) or on the use of statistical relationships between GRACE TWS and ancillary data available at higher resolution from remote sensing and/or model outputs (empirically-based downscaling).
Here, the validation strategies of existing downscaling methods of GRACE data, and more specifically of the GWS data derived from GRACE TWS data by removing surface water stocks, are reviewed. The downscaled GRACE-derived GWS data is usually evaluated in time against time series of localized in situ measurements. From the literature, a given downscaling method is considered efficient if validation metrics (Pearson correlation coefficient R, coefficient of determination R², root mean squared error RMSE) fall in an acceptable range, or if the downscaled product restitutes long-term trends consistent with the in situ reference estimates. Yet such a validation approach is insufficient to fully assess the usefulness of the downscaling method as it suffers from a lack of (i) appropriate validation of the spatial distribution of the downscaled GRACE-derived GWS within the GRACE pixel and (ii) comparison with the results that would be obtained without downscaling (by directly using GRACE TWS at the fine scale).
To fill the gap, a new validation framework is proposed for evaluating downscaling methods of GRACE data. The proposed framework mainly differs from previous approaches in that (i) it defines a downscaling gain to assess the improvement provided by the downscaling method at the fine scale and (ii) it evaluates this improvement in both time (agreement between time series) and space (agreement between spatial distributions at a given time within the GRACE pixel). The downscaling gain is defined from the relative difference between the performance metrics computed at the fine scale against in situ data and obtained from the downscaled data and from the original GRACE data separately.
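A schematic transcription of this gain is given below; the exact normalization used in the study may differ.

```python
# Downscaling gain: relative difference between a performance metric
# computed against in situ data from the downscaled product and from the
# original (non-downscaled) GRACE data, both at the fine scale.
import numpy as np

def downscaling_gain(metric_downscaled, metric_original):
    """Positive gain means the downscaled product outperforms raw GRACE."""
    return (metric_downscaled - metric_original) / abs(metric_original)

def pearson_r(a, b):
    return np.corrcoef(a, b)[0, 1]

# e.g. temporal gain on one fine-scale pixel (series are placeholders):
# gain = downscaling_gain(pearson_r(gws_down, gws_insitu),
#                         pearson_r(tws_grace, gws_insitu))
```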
We tested this new validation framework by comparing it with the classic validation approach using GRACE TWS data over a 113 000 km² fractured granitic aquifer in South India. In our study case, GRACE data are provided from the JPL (Jet Propulsion Laboratory) mascon solution (RL06M), the downscaling resolution is 0.5° (≈ 50 km) and the downscaling method is empirically based on multi-sensor remote sensing data re-sampled at the 50 km resolution.
The time series of high-resolution GRACE-derived GWS and in situ data showed a good fit on all high-resolution pixels (R > 0.6). Yet, the temporal downscaling gains of R on high-resolution pixels ranged from -31% to 17% with an average of -2%, and the spatial gains of R on the time series ranged from -42% to 66%, with an average of +9%. This shows that the downscaling product was not significantly better at the 50 km resolution than the original (without downscaling) GRACE product. This result shows the need for a comprehensive validation strategy against a null hypothesis, including both temporal and spatial aspects as originally proposed in this study, to fully determine the quality of downscaled GRACE-derived GWS products.
Synthetic Aperture Radar (SAR) can measure the relative displacement between the satellite antenna and the ground surface in two ways: a) using coherent phase-based interferometry or b) using speckle tracking, i.e., estimating the relative shift between image pairs, referred to as offsets. The latter is usually based on the cross-correlation of complex or intensity images. Although of relatively lower spatial resolution and precision, speckle tracking has some advantages over interferometry for displacement mapping. First, speckle tracking can measure the along-track displacement, enabling 3D mapping with the combination of ascending and descending orbits. Second, offsets do not require phase unwrapping, which is computationally expensive and error-prone. Third, offsets are a spatially absolute measure that needs no reference point if biases are carefully corrected for, making them easy to mosaic and desirable for large-scale processing in the big-data era.
Speckle tracking has traditionally been used to map large-amplitude surface deformation processes, such as fast-moving ice shelves, ice streams and landslides, or large-amplitude transient displacement events such as earthquakes. With the advance of GPU-based algorithms and the availability of long time series of SAR images from Sentinel-1, it is of interest to evaluate the potential of speckle tracking to measure slow surface deformation such as tectonic and volcanic processes.
The uncertainty of ground displacements estimated from speckle tracking is governed by the uncertainty of the offset estimation and by other noise sources, including atmospheric propagation delays, ground motions from tidal and loading processes, and SAR processing effects. We correct for the ionosphere using GNSS-based TEC products from various Global Ionospheric Maps, for the troposphere using ERA5, for solid Earth tides following the IERS conventions, and for SAR processing effects using the Extended Timing Annotation Dataset (ETAD) for Sentinel-1. The uncertainty of the SAR offset time series and velocity is estimated and propagated through a linear observation system.
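As an illustration of the propagation step, a minimal weighted least-squares sketch with synthetic numbers (the actual observation system of the study is more elaborate):

```python
import numpy as np

# Hypothetical example: estimate a linear velocity from a corrected offset
# time series d (LOS displacements at times t) and propagate the per-epoch
# uncertainties sigma through the linear observation system d = A x + e.
t = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])    # years
d = np.array([0.0, 3.1, 5.8, 9.2, 12.1, 15.3])  # mm, offsets after corrections
sigma = np.full_like(d, 2.0)                    # mm, offset uncertainties

A = np.column_stack([np.ones_like(t), t])       # intercept + velocity model
W = np.diag(1.0 / sigma**2)                     # weights from offset noise
N = A.T @ W @ A
x = np.linalg.solve(N, A.T @ W @ d)             # [intercept, velocity]
Cx = np.linalg.inv(N)                           # propagated covariance
print(f"velocity = {x[1]:.1f} +/- {np.sqrt(Cx[1, 1]):.1f} mm/yr")
```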
Results from the range offset time series of Sentinel-1 over southern California show that the standard deviation of the linear velocity reduces from 5 mm/year to 2 mm/year after applying the corrections. The velocity field across the southern San Andreas Fault system shows 15 mm/year of ground displacement in the LOS direction, which is compatible with the total expected plate motion across the plate boundary and consistent with independent GNSS measurements with an RMSE of 6 mm/year.
Improving and homogenizing time and space references on Earth and in space with an accuracy of 1 mm and long-term stability of 0.1 mm/yr is relevant for many scientific and societal endeavors, such as quantifying sea-level change. Reaching this goal is only possible by referencing the various geodetic sensors to one another on a unique well-calibrated platform. For instance, GNSS satellites require the Earth rotation angle from VLBI as an input parameter. Alternatively, an onboard VLBI transmitter (VT) on the satellites creates a space-tie between the VLBI and GNSS methods and can allow transferring the UT1-UTC information by jointly observing quasars and the VT. Previous work showed the feasibility of such a system where the UT1-UTC precisions from VLBI are assumed to be 20 and 40 microseconds for long and short baselines, respectively. In this study, Monte-Carlo simulations are carried out to obtain more realistic UT1-UTC precisions from VLBI Intensive sessions at four baselines, which are then transferred to the GNSS satellites. The quality of the transfer is assessed.
The mean sea surface plays an important role both in the calculation of the mean dynamic topography and, as a reference surface, in studies of sea-level change. It is typically obtained via spatial and temporal averaging of altimetric sea surface height measurements.
A key point for estimating a suitable mean sea surface model is the mitigation of the temporal variability of the ocean surface, especially when missions with a long repeat cycle are involved.
This contribution presents a new approach to estimate a continuous spatio-temporal mean sea surface and its temporal variability from along-track altimetric sea surface height measurements.
A parametric function continuously defined in the spatial as well as temporal domain is constructed from a $C^1$-smooth finite element space to represent the mean sea surface.
The finite elements are defined on triangulations with different target edge length and, thus, different spatial resolution.
Least-squares observation equations are set up to estimate the unknown scaling coefficients from the sea surface height measurements collected by altimetric exact repeat missions and geodetic missions.
Two advantages of the proposed method are that the surface is represented by an analytic model and that the static sea surface and its temporal variability are estimated simultaneously. Here, a static component of the function represents the mean sea surface, while the temporal component absorbs the ocean variability and represents a continuous model of the sea surface anomaly. To model this temporal variability, different temporal basis functions, such as a linear trend, harmonic functions with different periods, or B-splines, are analyzed.
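Schematically, the estimated function can be written as (the notation here is illustrative, not necessarily that of the study):
\[
h(\mathbf{x},t) = \underbrace{\sum_i c_i\,\phi_i(\mathbf{x})}_{\text{static MSS}} + \underbrace{\sum_k \Big(\sum_i d_{ik}\,\phi_i(\mathbf{x})\Big)\,\psi_k(t)}_{\text{sea surface anomaly}},
\]
where $\phi_i$ are the $C^1$-smooth finite element basis functions on the triangulation, $\psi_k$ are the temporal basis functions (linear trend, harmonics, or B-splines), and $c_i$, $d_{ik}$ are the scaling coefficients estimated by least squares from the sea surface height observations.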
Because of the unfavourable spatio-temporal observation distribution in coastal areas and near the area boundary, a regularization technique is introduced to stabilize the solution and to avoid numerical problems.
When modeling the temporal variability with B-spline functions in the time domain, additional constraints (e.g., zero mean) need to be applied to successfully separate the static and temporal signals of the altimetric sea surface height observations and to avoid oscillations at the temporal boundaries.
Within a proof-of-concept study, 10 years of satellite altimetry from CryoSat-2, Jason-1 to -3, Envisat, SARAL, HY-2A, and Sentinel-3A and -3B over the period 2010 to 2019 are used and analyzed in a local study region with different spatial and temporal resolutions.
Besides the static mean sea surface, the estimated temporal component covers the temporal variability of the sea surface using cardinal B-splines.
The comparison of the static component to the global CNES_CLS15 MSS shows a reasonable agreement, with a root mean square error below 4 cm and a root mean square of the residuals under 5 cm in the region of the South Atlantic and Indian Ocean below South Africa.
Comparisons of the temporal component with the gridded sea level anomaly product DUACS SLA DT2018 show a good agreement in areas of low ocean variability (RMS below 4 cm). In regions with higher temporal variability the RMS is larger, but still below 10 cm. This highlights that the temporal basis functions chosen in this study are a good choice even in regions of large ocean variability, but the spatial and temporal resolutions need further investigation.
In general, it is demonstrated that the proposed approach can be an alternative to the well established mean sea surface and sea level anomaly estimation procedures.
The Copernicus Precise Orbit Determination (CPOD) Service delivers, as part of the Ground Segment of the Copernicus Sentinel-1, -2, -3, and -6 missions, orbital products and auxiliary data files for their use in the corresponding Payload Data Ground Segment (PDGS) processing chains and by external users through the ESA Copernicus Open Access Hub.
The CPOD Service has been operational since 2014, following the launch of Sentinel-1A. Currently, after more than seven years, the CPOD Service routinely generates precise orbital products for seven Copernicus Sentinel satellites, using state-of-the-art models which have been updated several times over the past years. The precise orbital products have different accuracy and latency requirements for the different missions: predicted, near real-time, short-time-critical, and non-time-critical orbital products are delivered for the satellites.
The CPOD Service is supported by the CPOD Quality Working Group (QWG), composed of leading experts on GNSS and Low Earth Orbit (LEO) POD. These members provide independent orbit solutions to support quarterly and yearly Regular Service Reviews, which guarantee a continuous and independent quality control of the orbital products generated operationally by the CPOD Service. In addition, the CPOD QWG regularly meets to discuss recent developments and enhancements in the field of LEO POD and their applicability to the service operations.
The CPOD Service has also evolved over the years to support new user needs and technologies, for instance:
1. The provision of a web-based tool to monitor the operations and performance of the service.
2. The reduction of the latency for near-real time products for Sentinel-3 and Sentinel-6 from 30 to 10 minutes.
3. The migration of the Sentinel-3 NRT orbit generation from the PDGS to the CPOD Service Centre.
4. The adoption of new technologies based on HTTPS REST APIs for the interface with the PDGS.
5. Readiness to include new Sentinel-C & -D units, Sentinel-6B as well as Copernicus Expansion missions.
An overview of the current status of the CPOD Service is presented in terms of organisation, design, operations and performance supporting the four Copernicus missions and seven satellites.
The political borders of Iran encompass one of the most tectonically active regions in the world. Within the larger Alpine-Himalayan orogenic belt, convergence between the Arabian and Eurasian plates drives active deformation and seismicity throughout the Zagros Mountains, the Alborz, the Kopeh Dagh, and the Makran subduction zone. Accurate geodetic estimates of ground-surface velocities and strain rates are critical to our understanding of both the localised seismic hazard and the distribution and mechanics of deformation throughout the country. Previous geodetic estimates from regional GNSS observations are limited by sparse station coverage, while InSAR-derived velocity fields have focused on subregions over major crustal structures due to the computational cost of processing the data.
Here, we present ground-surface velocities and strain rates for a 2,000,000 km2 region of Iran, with a focus on the Zagros Mountains, derived from the joint inversion of 6 years of Sentinel-1 InSAR-derived ground-surface velocities and GNSS data. This is made possible by the COMET-LiCSAR InSAR processing system, which we use to generate short-baseline networks of Sentinel-1 interferograms for eight ascending and eight descending tracks. We correct for tropospheric noise using the GACOS system, which combines ECMWF weather models and the 90 m SRTM digital elevation model to mitigate both the stratified and turbulent components of the tropospheric delay. We estimate average velocities using LiCSBAS, an open-source software package for small-baseline time-series analysis. By constraining north-south motion with GNSS velocities, we are able to decompose our line-of-sight velocities into East and Vertical components.
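As a sketch of the decomposition step (the LOS unit-vector sign convention varies between processors and is an assumption here, as are the example numbers):

```python
import numpy as np

# Decompose ascending/descending LOS velocities into East and Up after
# removing the North contribution constrained by interpolated GNSS.
# Assumed convention: v_los = e*vE + n*vN + u*vU, with incidence angle
# theta and satellite heading alpha (signs differ between processors).
def los_unit_vector(theta_deg, heading_deg):
    theta, alpha = np.radians(theta_deg), np.radians(heading_deg)
    return np.array([-np.sin(theta) * np.cos(alpha),
                      np.sin(theta) * np.sin(alpha),
                      np.cos(theta)])

p_asc = los_unit_vector(39.0, -10.0)   # ascending pass (example geometry)
p_dsc = los_unit_vector(39.0, -170.0)  # descending pass

v_north = 15.0                         # mm/yr, from interpolated GNSS
v_los = np.array([4.2, -7.5])          # mm/yr, observed asc/dsc LOS rates

# Remove the North contribution, then solve the 2x2 system for East and Up.
b = v_los - np.array([p_asc[1], p_dsc[1]]) * v_north
G = np.array([[p_asc[0], p_asc[2]],
              [p_dsc[0], p_dsc[2]]])
v_east, v_up = np.linalg.solve(G, b)
print(f"vE = {v_east:.1f} mm/yr, vU = {v_up:.1f} mm/yr")
```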
Our velocity fields highlight a range of tectonic and anthropogenic signals. We focus on major fault slip rates and large-scale deformation, along with coseismic displacements, postseismic deformation, and subsiding basins. To further investigate deformation throughout Iran, we derive surface strain rates from our velocities and investigate regions of rapidly accumulating and localised strain. Finally, we discuss the challenges of generating large-scale InSAR velocity and strain-rate fields from Sentinel-1 data.
Monthly gravity field solutions (Level-2 data) based on observations acquired by the GRACE and GRACE-FO missions are provided by various analysis centers (ACs). Here we assess differences between these solution series using the example of mass changes of the ice sheets in Greenland (GIS) and Antarctica (AIS). Our study focuses on the releases AIUB RL02, GFZ RL06, ITSG-Grace2018, CSR RL06, JPL RL06 and an unconstrained variant of GRGS RL05. In addition, we also make use of the COST-G RL01 series, a consolidated series derived from the combination of the individual gravity field models provided by the aforementioned ACs. COST-G, the International Combination Service for Time-variable Gravity Fields, is a product center of IAG's International Gravity Field Service. The results presented here are an outcome of COST-G's Product Evaluation Group, which assesses the combined gravity field series as well as the series provided by the ACs regarding their suitability for studying mass changes in the Earth's subsystems (e.g. oceans, cryosphere, continental hydrosphere).
Based on residual variations of the spherical harmonic (SH) coefficients with respect to a long-term and seasonal model, we quantify the noise level of the latest GRACE/GRACE-FO solution series provided by COST-G and the contributing ACs, for the common period from March 2003 through April 2021. This assessment is performed both in the SH domain and in the space domain, focusing on the polar regions. A regional integration approach using tailored sensitivity kernels is applied to derive mass change time series for individual ice sheet regions and for the entire GIS and AIS. The tailored sensitivity kernel approach was developed in the framework of ESA's Climate Change Initiative (CCI), within the Antarctic Ice Sheet CCI and Greenland Ice Sheet CCI projects. A measure of the noise level of the different mass change time series is inferred from the residuals with respect to a climatology, corrected for remaining inter-annual mass changes. We find that across all considered regions, the noise levels may differ by up to 40% between the utilized releases. Moreover, we show that mass change products for GIS and AIS benefit from the combination of different solution series. We also quantify the signal content inherent to the individual mass change time series in terms of the seasonal signal and the linear trend (i.e. mass balance). While mass balance estimates for GIS agree very well (within 2% of the mean mass balance), a clearly larger scatter was found for AIS. In this case, the scatter around the mean mass balance can be as large as 21% when considering the GRACE period only. These deviations decreased to 11% for the combined GRACE/GRACE-FO period. The differences revealed in the AIS signal content between the releases are further investigated with respect to contributions from different parts of the SH spectrum. In addition to selected SH coefficients (e.g. C21, S21), we systematically attribute the differences to each SH degree and the corresponding SH orders. We show that the largest contribution arises from low-degree zonal coefficients, such as C30 and C40.
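As an illustration of the noise metric, a minimal sketch that fits a bias, trend, annual and semi-annual cycle and takes the residual RMS (the inter-annual correction step of the study is omitted here, and variable names are assumptions):

```python
import numpy as np

def residual_rms(t_years, mass_gt):
    """RMS of residuals of a mass-change series (Gt) after removing a
    fitted climatology: bias, linear trend, annual and semi-annual cycles."""
    w = 2.0 * np.pi  # angular frequency of the annual cycle (1/yr)
    A = np.column_stack([np.ones_like(t_years), t_years,
                         np.cos(w * t_years), np.sin(w * t_years),
                         np.cos(2 * w * t_years), np.sin(2 * w * t_years)])
    coeffs, *_ = np.linalg.lstsq(A, mass_gt, rcond=None)
    return np.sqrt(np.mean((mass_gt - A @ coeffs) ** 2))
```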
The South Island of New Zealand represents a tectonically complex region, as the plate boundary between the Australian and Pacific plates in this region changes from the subduction of the Pacific plate at the Hikurangi Margin under North Island, to subduction of the Australian plate off the south coast of South Island in the Puysegur Trench. This transition manifests itself on South Island as the Marlborough Fault Zone, a region of strike-slip faulting in the north that resulted in the 2016 Mw 7.8 Kaikoura Earthquake, and the formation of the oblique strike-slip Alpine Fault.
The consistent 055° strike of the surface trace of the Alpine Fault for most of its 450 km length hides complexities in its subsurface structure, where there are along-strike variations in both fault dip and locking depth. Variations in both of these parameters can be expected to manifest themselves as variations in ground-surface motion. Almost 20 years of GNSS campaigns went into producing a horizontal velocity field of New Zealand, with an average 10-20 km spacing between GNSS stations, allowing a comprehensive view of horizontal motions. In order to measure both horizontal and vertical rates at high resolution, however, we use more than 6 years of ascending and descending Sentinel-1 InSAR data.
We initially focus on a region of central South Island around Aoraki/Mt Cook and the Glacier Country, covered by 2 ascending (023A and 125A) and 2 descending (044D and 146D) tracks. This is the region of maximum topography, the Southern Alps mountain range having been formed by uplift and exhumation of Pacific Plate crust along the Alpine Fault. Geological dip-slip measurements indicate that the maximum dip-slip rates (>12 mm/yr) along the fault are found in this region, decaying both north and south along strike. A combined inversion of the LOS velocity fields from the 4 tracks in this region, using the north component of the GNSS velocity field, supports this, with maximum uplift rates of ~12.5 mm/yr focused around the Aoraki/Mt Cook region and peak uplift rates decreasing both with increasing distance from the fault and along strike, to < 5 mm/yr.
By comparing fault-parallel, fault-perpendicular, and uplift-rate profiles through the peak uplift zone and through regions to the north and south of it, we show that no single fault geometry can be fit that reproduces all of these profiles. Rather, along-strike variations in the fault geometry are required to explain the measured ground motions.
Finally, we take advantage of the automated InSAR processing of the COMET LiCSAR system to extend our velocity map to include all 8 tracks over South Island, producing 3-component velocity and strain-rate maps over the entire island.
In Denmark, the Agency for Data Supply and Efficiency (SDFE, i.e. the national mapping agency) has the governmental responsibility for maintaining and developing the national, geodetic infrastructure.
With the launch of the European Copernicus satellite Sentinel-1 and the European Commission's free and open data policy, a number of new opportunities have arisen for fulfilling our governmental responsibility using satellite data.
As such, SDFE has initiated efforts towards using time series from our national network of continuously operating GNSS stations to datum-reference Sentinel-1-based ground motion data. This will refer the inherently relative ground motion measurements to the "absolute" geodetic reference frame realized by the GNSS infrastructure.
An essential step in this process is the co-location of Artificial Reflectors (ARs) such as Compact Active Transponders (CATs) or Corner Reflectors (CRs) with the GNSS stations to make them visible in the Sentinel-1 imagery. CATs offer a new means for such a co-location since they can be mounted directly on the foundation of, e.g., a GNSS station without disturbing the reception of the GNSS signals. This potential has led SDFE to conduct a validation exercise to investigate, among other things, the long-term stability of the CATs.
The validation exercise is the focus of this presentation and involves the comparison of ground deformation measurements from Sentinel-1 imagery and precision leveling, as well as the assessment of a temperature- and instrument-dependent bias in the CATs.
The physical set-up of the validation exercise consists of one CR and three CATs, located on a near E-W line with a ~60 m spacing. The CR is a double back-flipped square trihedral with a side length of 65 cm, and the CATs are produced by MetaSensing. The CR as well as two CATs are mounted on torsional plugs and the third CAT directly on the foundation of the GNSS station (HABY).
During the period July 1st 2020 – Sep. 1st 2021, we applied three manual displacements to the CATs on the torsional plugs. The displacements were applied on Sep. 15th 2020, Feb. 24th 2021 and July 13th 2021, respectively, and ranged from 1.1 to 14.8 mm. In addition to performing precision leveling every two months, we leveled before and after each manual displacement.
The Sentinel-1 imagery was processed using the Persistent Scatterer Interferometry technique (PSI) available in the SARPROZ software [https://www.sarproz.com/]. We used imagery from the tracks 44A, 146A, 66D and 168D. The leveling data was processed using a least-squares adjustment available in GNU GAMA [https://www.gnu.org/software/gama/]. The CR was used as a reference point in both cases.
We then compared the displacements measured from leveling with those from (a) line-of-sight velocities for each satellite track projected to the vertical, and (b) the 2D decomposition obtained using all available tracks. This allowed us to assess whether the CATs could capture both the timing and magnitude of the applied displacements.
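For comparison (a), the projection is a simple cosine scaling under the assumption of purely vertical motion; a minimal sketch:

```python
import numpy as np

# Project a LOS displacement to the vertical, assuming no horizontal motion:
# d_up = d_los / cos(incidence angle).
def los_to_vertical(d_los_mm, incidence_deg):
    return d_los_mm / np.cos(np.radians(incidence_deg))

print(los_to_vertical(-3.0, 37.0))  # e.g., -3 mm LOS at 37 deg incidence
```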
The comparison was complicated by a number of factors:
• CATs being more sensitive to variations in the satellite look angle than the CR. That affects the amplitude and hence the basis for using the instruments for ground deformation monitoring.
• The CATs subject to manual displacements experienced problems leading to data gaps in the satellite-based time series. The amplitude of one CAT started decreasing in August 2020 after which the instrument stopped working. It was replaced in January 2021. The instrument failed again in July 2021 and was replaced in September, after completion of the validation exercise. The other CAT experienced inexplicable, periodic fallouts in December 2020 and August 2021.
• The identification of a temperature- and instrument-dependent bias in the CATs, consistent with findings in other studies [Czikhardt et al., 2021]. This was found using temperature data from a nearby weather station.
We found that the CATs could indeed capture the timing and magnitude of the manual displacements. Under optimal conditions, the latter could be achieved with an accuracy of approximately 2 mm. The highest correlation was obtained for instruments located in the near range of the satellite track with lower correlations for locations in the far range. In addition, the validation exercise was affected by periodic instrument failures sometimes related to water entering the instruments.
In conclusion, the reliability of the MetaSensing CATs needs to be improved, but if this is achieved, the CATs indeed have potential for long-term deformation monitoring and for fulfilling governmental responsibilities.
References:
• R. Czikhardt, H. van der Marel, R. F. Hanssen and J. Papco, "Multi-year field test of compact radar transponders for InSAR geodesy", ESA Fringe, 2021
• R. Czikhardt, H. van der Marel, J. Papco and R. F. Hanssen, "On the efficacy of compact radar transponders for InSAR geodesy: Results of multi-year field tests", submitted to IEEE Transactions on Geoscience and Remote Sensing, 2021
Large-scale mass transport processes have been observed by the Swarm satellites since the end of 2013. The collected GPS data allow for monthly Earth gravity field models with an estimated spatial resolution of 1500 km (spherical harmonic degree 12-13). Our team is spread over numerous institutes, namely the Astronomical Institute of the University of Bern, the Astronomical Institute of the Czech Academy of Sciences, the Delft University of Technology, the Institute of Geodesy of the Graz University of Technology, and the School of Earth Sciences of the Ohio State University. The European Space Agency and the International Combination Service for Time-variable Gravity Fields (COST-G) have supported the routine production of the Swarm monthly models, which are published on a quarterly basis at ESA's Swarm Data Access server (https://swarm-diss.eo.esa.int) as well as at the International Centre for Global Earth Models (http://icgem.gfz-potsdam.de/series/02_COST-G/Swarm). We produce gravity field models that are independent of any other source of gravimetric data and do not consider any temporal or spatial correlations to regularize the solutions. Each institute adopts a different gravity inversion strategy, and the individual models are combined at the solution level into a final combined model, with weights derived from Variance Component Estimation.
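A simplified sketch of such a solution-level combination with variance-component-derived weights (an illustration only, not the operational COST-G scheme):

```python
import numpy as np

def combine_vce(solutions, n_iter=10):
    """Weighted-mean combination of gravity field solutions, iterating the
    weights via a simple variance component estimation: each center's
    variance component is re-estimated from its residuals with respect to
    the current combined solution."""
    solutions = np.asarray(solutions)       # shape: (n_centers, n_coeffs)
    var = np.ones(len(solutions))           # initial variance components
    for _ in range(n_iter):
        w = 1.0 / var
        combined = (w[:, None] * solutions).sum(axis=0) / w.sum()
        var = np.array([np.mean((s - combined) ** 2) for s in solutions])
    return combined, 1.0 / var              # combined model and final weights
```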
Our models traditionally agree with a reference parametric model derived from GRACE/GRACE-FO data at the level of roughly 4 cm equivalent water height, but improvements in the processing of the kinematic orbits have lowered this figure to 3 cm since early 2020. We have found that ocean areas are ~30-50% noisier than land areas, for reasons still unknown. The time series of large water storage basins agree with dedicated gravimetric data at the spatial resolution of Swarm, with trends within 1 cm/year and temporal correlations of 0.75. These models provide global gravimetric observations across the gap between GRACE and GRACE-FO, as well as across the intermittent short gaps therein.
Precise accelerometry is a key technique for satellite geodesy. Well-understood and well-calibrated accelerometer data products from current and future geodetic satellite missions are essential for the determination of gravity variations, for the quantification and monitoring of large-scale mass redistribution in the Earth system, and for providing essential climate variables, in particular related to the Earth's water cycle. Accelerometer measurements are also important for orbit determination and for the study of space weather phenomena. In our contribution, we address challenges in electrostatic accelerometer data of recent geodetic missions, e.g., from the double accelerometer pair onboard CNES' Microscope mission, from the four accelerometers onboard the GRACE-FO and GRACE missions, and from the three accelerometers of ESA's Swarm Earth explorer mission. Issues include the understanding and quantification of coupling and correct partitioning between the sensor axis signals, effects related to radiation and test mass charge, and the understanding of transient effects such as thruster acceleration signals and environmental spike patterns with various distributions. We propose a dedicated effort to study such complex effects. Signal contributions and anomalies should be revisited and analyzed in a comprehensive approach. We discuss consequences for the NGGM/MAGIC gravimetric mission constellation, for future geodetic mission concepts in general, and for the processing of spaceborne accelerometer data.
The central hypothesis of the Research Unit (RU) NEROGRAV (New Refined Observations of Climate Change from Spaceborne Gravity Missions), funded for 3 years by the German Research Foundation (DFG), reads: only by concurrently improving and better understanding the sensor data, background models, and processing strategies of satellite gravimetry can the resolution, accuracy, and long-term consistency of mass transport series from satellite gravimetry be significantly increased; and only in that case can the potential of future technological sensor developments be fully exploited. Two of the individual projects (IPs) within the RU closely interact on optimized space-time parameterization and stochastic modeling of instrument data and background models for GRACE and GRACE-FO gravity field determination. Based on recent developments within NEROGRAV, we also work on a future GFZ GRACE/GRACE-FO Level-2 data release.
This presentation provides an overview of the main outcomes of the advanced processing strategies, focusing on their combined effects. We discuss details of the stochastic modeling of accelerometer and inter-satellite ranging observations and of ocean tide and non-tidal atmospheric-oceanic background models. Furthermore, we address a data-driven approach for optimized parameterization that reduces non-tidal temporal aliasing errors. We present Level-2 results based on monthly solutions for the three test years 2007, 2014 and 2019, in the spectral and spatial domains, in comparison with the standard GFZ GRACE/GRACE-FO RL06 time series.
Using Interferometric Synthetic Aperture Radar (InSAR) data to observe coseismic deformation of the Earth's surface is now an established method in earthquake studies. However, the majority of earthquakes measured with InSAR are shallow events (depth < 30 km) whose surface displacement signals are relatively easy to capture, even at small magnitudes (Mw ~5.0) when very shallow. Conversely, large intermediate-depth earthquakes (Mw > 6.5, 70-300 km depth), which are usually located in subduction zones, are rarely the focus of geodetic work, due to the effort required to establish whether a ground deformation signal can be robustly observed. Here we present a case study of a Mw 6.8 earthquake with a 112 km centroid depth which occurred on 03 Jun 2020 in Chile. We perform 3 years of Sentinel-1 InSAR time-series analysis (spanning Jan 2018 to Apr 2021) over the potential deformation area to better resolve the coseismic deformation that may otherwise be masked by atmospheric noise in a single interferogram. After masking the pixels which contain unwrapping errors or show a high fading-signal bias (> 3 mm/year), we successfully observe this deep earthquake (with peak displacements of ~10 mm) and reconstruct its coseismic deformation field, validated against available Global Navigation Satellite System (GNSS) data. In addition to the main coseismic signal from the earthquake, we find a variety of changes in surface displacement behavior inferred to relate to the earthquake, including on the Atacama salt flat, a large desert plain (~1000 km2) located to the southeast of the epicentre. Instead of abrupt coseismic deformation, the whole salt flat region, with clear boundaries, shows a rapid acceleration of its linear velocity after the earthquake. Our work demonstrates that the significant surface displacements caused by large intermediate-depth earthquakes in subduction zones are observable, and shows the capability of InSAR for tracking these small-magnitude deformation signals given sufficient data.
The mass variations of, for instance, the Greenland and Antarctic regions in the last two decades impressively demonstrate the impact of climate change. In order to monitor this temporal behaviour, the determination of the Earth's gravity field is of great importance. To bridge possible gaps occurring within dedicated gravity field missions such as CHAMP, GRACE and GOCE, or between consecutive gravity field missions (i.e., GRACE and GRACE Follow-On), additional concepts for gravity field recovery are highly welcome. Here, gravity field solutions based on kinematic satellite orbits, for example from the European Space Agency (ESA) mission Swarm or the ESA Sentinel programme, are common practice.
For kinematic orbit determination we use an in-house developed approach based on an iterative least-squares adjustment utilizing raw GNSS observations. Previously we used GNSS products provided by the Center for Orbit Determination in Europe (CODE). However, varying algorithms, models, and processing strategies applied by different institutions can lead to inconsistencies which affect the performance. Thus, we now use GNSS products which are consistently processed with our in-house software package GROOPS. This consistency improves the kinematic orbit results and subsequently the gravity field determination. Until now, we have reprocessed the kinematic orbits of 19 low Earth orbiting satellite missions, including non-gravity missions like Sentinel-1A/B, Sentinel-3A/B, Swarm, TerraSAR-X, TanDEM-X, MetOp-A and -B, and Jason-1, -2 and -3. These kinematic orbit solutions are published on our website together with satellite orbit products such as the reduced-dynamic orbits, the attitude, and the accelerations due to non-conservative forces.
Based on these kinematic orbits, a time series of individual monthly gravity field solutions has been determined. In addition to the more precise orbits, enhanced non-gravitational force models based on satellite macro models contribute to more accurate gravity field solutions.
The time series spans nearly 20 years without gaps, starting in January 2002. We will present the mass variations over this time span for regions like the Antarctic, Greenland, the Amazon basin, and other larger river basins.
In this work we exploit a spatially extensive geodetic dataset to study both onshore and offshore ground deformation affecting the coastal areas of a region of the Upper Adriatic Sea (Italy) during the last two decades. The study area is located in the proximity of the city of Ravenna and is subject to a general subsidence related to the concurrent effects of several natural phenomena and anthropogenic activities, such as soil compaction, groundwater pumping and hydrocarbon extraction. Our dataset is composed of i) Synthetic Aperture Radar (SAR) images provided by the Envisat (2003-2010), COSMO-SkyMed (2011-2017) and Sentinel-1 (2015-2018) missions, ii) Global Navigation Satellite System (GNSS) measurements from continuous stations collected by public and private authorities, and iii) levelling surveys performed over several years by private companies. We develop a simple but effective procedure to cross-validate all the sources of information, maximizing the advantages of each technique and exploiting the independence of all the available geodetic data. A common local reference system centered on the city of Ravenna has been used for all the geodetic datasets. This choice is motivated by the high SAR coherence of the urban area and its proximity to the Area Of Interest (AOI) along the coastline, in order to minimize any possible tectonic contribution and to avoid unwrapping artefacts during the SAR Interferometry (InSAR) processing. The cross-validation procedure shows an excellent agreement among InSAR, GNSS and levelling data; all geodetic techniques detect a local deformation peak of about -1/-1.5 cm/yr along the coastline, close to Lido di Dante, and on the offshore hydrocarbon production platform. Moreover, the deformation pattern is well characterized both in spatial extent and in time, showing a clear deceleration, thus allowing a qualitative interpretation of the phenomenon based on ancillary data available in the study area.
The output of the cross-validation procedure provides a reliable and robust assessment of the ongoing deformation supporting modelling studies for risk mitigation purposes in both inland and shoreline areas.
The past years have seen rapid development in the field of space geodetic research, including highly successful satellites such as ESA's GOCE mission along with the GRACE and GRACE-FO missions. CryoSat-2 is amongst the most important satellites when it comes to deriving the high-resolution gravity field of the Earth, particularly for the Polar regions. Where the GOCE and GRACE satellites only map a fraction of the total gravity field variation, ESA's CryoSat-2 maps even tiny variations in the marine gravity field and has revolutionized our knowledge of Polar geodesy and geophysics.
In this presentation we highlight the importance of especially CryoSat-2 and other altimetric geodetic satellites like Jason-2 and SARAL/AltiKa, as well as former satellites like ERS and Geosat.
The newest global marine free-air gravity field (DTU21GRA), based on the GRACE+GOCE geoid signal and satellite altimetry, is presented and evaluated with a focus on the Polar regions.
A new processing chain with updated editing and data filtering has been implemented. The filtering implies that the 300 m (20 Hz) sea surface height data are filtered using the Parks-McClellan filter to derive 2 Hz data, i.e. data points every 3 km. This has a clear advantage over the 1 Hz boxcar filter in not introducing side-lobes degrading the MSS in the 10-40 km wavelength band, and for the first time enables the resolution of marine gravity down to around 10 km at 1-2 mGal accuracy. A major new advance leading up to the release of this gravity field is the use of an improved 10-year CryoSat-2 LRM+SAR+SARin record, including retracked altimetry in the Polar regions using the SAMOSA+ physical retracker via the ESA GPOD facility.
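As an illustration, a low-pass equiripple filter of this kind can be designed with the Parks-McClellan (Remez) algorithm; the filter order and band edges below are assumptions for the sketch, not the operational DTU settings:

```python
import numpy as np
from scipy.signal import remez, filtfilt

fs = 20.0                                    # Hz, input SSH sampling rate
# Equiripple low-pass: pass below 0.8 Hz, stop above 1.2 Hz (assumed edges).
taps = remez(numtaps=101,
             bands=[0.0, 0.8, 1.2, fs / 2],
             desired=[1.0, 0.0], fs=fs)

ssh_20hz = np.random.randn(2000)             # placeholder 20 Hz SSH anomalies
ssh_filtered = filtfilt(taps, [1.0], ssh_20hz)  # zero-phase filtering
ssh_2hz = ssh_filtered[::10]                 # decimate 20 Hz -> 2 Hz (~3 km)
```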
We will also demonstrate the use of high-resolution marine gravity for regional-scale geological investigations, such as a recent study of Cretaceous ocean formation in the Arctic.
Investigations and comparisons in the Arctic marine regions indicate that the quality of the altimetry-derived marine gravity field is superior to previous marine gravity holdings from survey vessels, data which form the foundation of the Earth Geopotential Model EGM2008 in the Arctic Ocean.
Initially, a new geodetic mean dynamic topography model, DTU22MDT, is derived using the new DTU21MSS mean sea surface. The DTU21MSS model has been derived by also including re-tracked CryoSat-2 altimetry, hence increasing its resolution, and some issues in the Polar regions have been solved. The geoid model was derived within the ESA-supported Optimal Geoid for Modelling Ocean Circulation (OGMOC) project. It is based on the GOCO05C setup, though the newer DTU15GRA altimetric surface gravity was used in the combination. The OGMOC geoid model was optimized to avoid striations and orange-skin-like features. Subsequently, the model was augmented using the EIGEN-6C4 coefficients to d/o 2160.
The processing scheme used for deriving the new geodetic MDT is similar to the one used for the previous geodetic DTU MDT models. The filtering was re-evaluated by adjusting the quasi-Gaussian filter width to optimize the fit to drifter velocities. Subsequently, the drifter velocities are integrated to enhance the resolution of the MDT model. Weights and constraints are introduced in the inversion and tuned to obtain a smooth model with enhanced details. Special attention is devoted to the coastal areas to optimize the extrapolation towards the coastline. The presentation will focus on the coastal zone when assessing the methodology, the data and the final model DTU22MDT.
The GOCE User Toolbox (GUT) is a compilation of tools for the utilisation and analysis of GOCE Level 2 products. GUT supports applications in Geodesy, Oceanography and Solid Earth Physics, and the GUT Tutorial provides information and guidance on how to use the toolbox for a variety of applications. GUT consists of a series of advanced computer routines that carry out the required computations. It may be used on Windows PCs, UNIX/Linux workstations, and Macs. The toolbox is supported by the GUT Algorithm Description and User Guide and the GUT Install Guide, and a set of a-priori data and models is made available as well. Without any doubt, the development of the GOCE User Toolbox has played a major role in paving the way to successful use of the GOCE data for oceanography.
The GUT version 2.2 was released in April 2014 and, besides some bug-fixes, added the capability to compute the Simple Bouguer Anomaly (Solid Earth). During 2019 a new GUT version 3 was released. GUTv3 was further developed through a collaborative effort in which the scientific communities participated, aiming at the implementation of the remaining functionalities and facilitating a wider span of research in the fields of Geodesy, Oceanography and Solid Earth studies. Accordingly, GUT version 3 provides:
- An attractive and easy-to-use Graphical User Interface (GUI) for the toolbox,
- Further software functionalities, such as facilities for the use of gradients, anisotropic diffusive filtering, and the computation of Bouguer and isostatic gravity anomalies,
- An associated GUT VCM tool for analysing the GOCE variance-covariance matrices.
The information extracted from Time-Series Interferometric Synthetic Aperture Radar (TInSAR) is nowadays routinely used to study the dynamics of the Earth's surface across different deformation mechanisms. The increasing use of TInSAR-derived products (driven in particular by the free availability of ESA Copernicus SAR data) creates a need for proper, standardized quality-control methods to assess the precision and accuracy of InSAR-based products. Despite many studies and developments regarding such quality description in terms of precision and noise structure, the quantification of the TInSAR uncertainties (or biases) induced by phase unwrapping errors has been remarkably overlooked so far. Although some initial efforts have been made (either for some limited methodologies and scenarios, or via extensive simulation algorithms), there is still no analytical criterion for the assessment of such uncertainties.
It should be noted that the presence of unwrapping errors in TInSAR products is always probable. Particularly in areas with a high level of noise or with a peculiar deformation pattern, there is always a chance (even if small) for unwrapping errors to occur. TInSAR algorithms usually try to identify and mitigate unwrapping errors either by trial and error or by an experimental approach based on the skills of InSAR experts. Nevertheless, the performance of such heuristic methods is always case-study dependent. The main reason is that there are different factors, differing from case to case, which contribute to the success of the phase unwrapping. Examples of these factors are the different spatio-temporal behavior of deformation mechanisms, different initial assumptions used in the phase unwrapping, different landscape characteristics, different processing settings, and so on. The impact of these factors on the correctness of the phase unwrapping needs to be assessed and delivered to the final users. In other words, there is a need for a quality-description approach capable of digesting the effect of these factors in order to quantify the probability of correct phase unwrapping, i.e., its success-rate.
In this study, we introduce a new analytical approach for the quantification of InSAR uncertainties induced by phase unwrapping errors. The concept of the method is based on quality-description criteria (such as the Success-Rate and the Ambiguity Dilution of Precision) that are used in GNSS applications to describe the uncertainties of integer ambiguity resolution methods. These criteria have already been exploited in some TInSAR studies; however, all studies so far have been limited to the relative phase unwrapping of pairs of nearby pixels (called arcs). Here, we extend this idea to spatio-temporal phase unwrapping in a network approach. The main challenge to address is how the quality (or success-rate) of individual arcs in a network of pixels should be propagated to the success-rate of the final estimated time series of all pixels. Through such propagation, both the noise characteristics and the spatio-temporal network structure of the data are taken into account. In the end, for each individual point, we estimate a success-rate indicator, which provides the probability of correct phase unwrapping for that point. This new indicator can be used together with the final TInSAR products and other quality measures to describe not only the precision of the data but also their accuracy.
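For a single arc, the GNSS-style success rate has a closed form; a minimal sketch is given below (the network propagation itself is the subject of this study and is not reproduced here):

```python
from math import erf, sqrt

# Integer-rounding success rate from GNSS ambiguity resolution, applied to
# a single arc: with Gaussian phase noise of standard deviation sigma (in
# cycles), the probability that rounding recovers the correct integer is
#   P = 2 * Phi(1 / (2 * sigma)) - 1,
# where Phi is the standard normal CDF.
def arc_success_rate(sigma_cycles):
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return 2.0 * phi(1.0 / (2.0 * sigma_cycles)) - 1.0

print(f"{arc_success_rate(0.3):.3f}")  # sigma = 0.3 cycles -> ~0.905
```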
It should be noted that the proposed approach is also flexible enough to quantify the phase unwrapping uncertainties induced by wrong initial assumptions about deformation mechanisms (note that all phase unwrapping methods require such assumptions about the spatial or temporal behavior of deformation signals). The proposed approach provides a quantitative tool (called the Biased-Success-Rate) to assess the effect of wrong deformation assumptions on the accuracy of TInSAR phase unwrapping. In this way, it can improve the falsifiability of the TInSAR products.
We validate the introduced method in simulations of different scenarios. The results confirm that the method is capable of describing the probability of occurrence of unwrapping errors with sufficient correctness. The performance of the method is also demonstrated on different real case studies, from small-scale applications (e.g., infrastructure monitoring) to large-scale studies (e.g., subsidence monitoring in urban and semi-urban areas).
The introduced quality indicator can be considered the first quantitative/analytical measure of the accuracy of TInSAR data with respect to unwrapping errors. By improving the InSAR quality description and its falsifiability, the proposed approach is a step forward in the standardisation of TInSAR products and services.
Looking Into the Continents from Space with Synthetic Aperture Radar (LiCSAR) is a system built for large-scale interferometric (InSAR) processing of data from the Sentinel-1 satellite system, developed within the Centre for Observation and Modelling of Earthquakes, Volcanoes and Tectonics (COMET). Utilising public data sources, and data and computing facilities at the Centre for Environmental Data Analysis (CEDA), LiCSAR automatically produces geocoded wrapped and unwrapped interferograms, in combinations suitable for time-series processing using Small Baselines (SB)-based InSAR techniques such as the NSBAS-based LiCSBAS open-source tool, for large regions globally. The processing can be prioritised following an earthquake or during volcanic crises through an Earthquake InSAR Data Provider (EIDP) subsystem, where data are processed partially on a High Performance Computing facility, permitting rapid generation of a co-seismic interferogram down to 1 hour after a new post-seismic Sentinel-1 acquisition becomes available.
The main LiCSAR products are generated from standard Sentinel-1 Interferometric Wide Swath (IWS) data in frame units, where a standard frame is a merge of 13 IWS burst units per IWS swath, covering approx. 250x250 km. The frame InSAR products and additional generated data (backscatter intensity images, tropospheric corrections by the COMET GACOS service, etc.) are distributed in a compressed GeoTIFF format at 0.001° resolution in the WGS-84 coordinate system, through the LiCSAR Portal (including the EIDP and the Volcanic and Magmatic Deformation Portal, which includes an interactive time-series viewer for global volcanoes), the European Plate Observing System (EPOS) and the CEDA Archive. The final products are open and freely accessible. As of December 2021, over 645,000 interferometric pairs have been generated by processing over 183,000 epochs from Sentinel-1 acquisitions for 1,789 frames, prioritising areas of the Alpine-Himalayan tectonic belt, the East African Rift, and global volcanoes. The dataset is increasing by ~6,000-7,000 epochs per month.
This contribution will present selected current processing results demonstrating the capabilities and applications of the system for studying tectonic and volcanic deformation, and will report on up-to-date technical solutions implemented in both the LiCSAR system and related tools, such as LiCSBAS. We will also include some of the experimental global-scale outputs, such as average deformation velocity products and deformation measurements based on frame-level azimuth subpixel offsets to deliver the N-S velocity of tectonic plate motion.
In this work, we exploited geological, geotechnical and remote sensing data to assess the stability of, and the hazard associated with, a large Deep-seated Gravitational Slope Deformation (hereinafter DGSD) located in the Pisciotta municipality (Campania Region, Southern Italy). The landslide develops in an exceptionally deformed turbidite series composed of intercalated calcarenites, marls and mudrocks (De Vita et al., 2013). Previous studies highlighted the activity of the landslide mass in recent decades and the related damage to the SR447 provincial road, which crosscuts the landslide mass.
A drone survey was performed to assess the surficial extension of the landslide mass, which allowed us to reconstruct a high-resolution, centimetre-scale Digital Surface Model (DSM) of the DGSD. The DSM was exploited to identify the landslide boundary and its extension, which is approximately 0.19 km2, and to assess altitude changes by comparison with a 2 m resolution LiDAR-derived DSM available from the literature.
In situ data, consisting of deep boreholes equipped with inclinometers, allowed us to estimate the landslide extension at depth. Inclinometer measurements identified several sliding surfaces located at different depths, corresponding to layers of shale deposits with variable thickness (De Vita et al., 2013).
An accurate picture of the ground displacements associated with the landslide mass was finally obtained by processing satellite Synthetic Aperture Radar (SAR) data with the SARscape software (sarmap SA), integrated into the ENVI environment. A large dataset of SAR images acquired by the Sentinel-1 satellite mission was processed applying the Small Baseline Subset (SBAS) technique (Berardino et al., 2002). We exploited SAR images acquired in both ascending and descending mode, spanning approximately the September 2016 – October 2021 time interval, to estimate the landslide mean velocity and cumulated movements along the satellite Line of Sight (LoS). The results show maximum displacement rates of approximately +22 cm/yr and -30 cm/yr along the ascending and descending orbits, respectively. Positive and negative rates indicate ground movements towards and away from the satellite sensor, respectively. The deformation trend of the DGSD is almost linear, suggesting a probable ongoing creep process, with local accelerations/decelerations related to the dry and wet seasons and correlated with rainfall magnitude.
References:
De Vita, P., Carratù, M. T., La Barbera, G., & Santoro, S. (2013). Kinematics and geological constraints of the slow-moving Pisciotta rock slide (southern Italy). Geomorphology, 201, 415-429.
Berardino, P., Fornaro, G., Lanari, R., & Sansosti, E. (2002). A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms. IEEE Transactions on geoscience and remote sensing, 40(11), 2375-2383.
The current and future availability of unprecedentedly dense Synthetic Aperture Radar (SAR) data from the Sentinel-1 and upcoming NISAR missions has sparked the need to efficiently produce unbiased ground displacement time series at fine resolution and in near real time. State-of-the-art algorithms (e.g., Ansari et al., 2017) impose a long latency of a few months to update an existing Interferometric SAR (InSAR) time series with new acquisitions. Such a long latency cannot satisfy the needs of applications such as urgent response to natural and anthropogenic hazards, which usually require a short latency of 24-72 hours from the acquisition time, or shorter.
Given the existing unprecedented InSAR big data, producing displacement estimates for the latest acquisitions with short latency (e.g., 24-72 hours from the acquisition time) requires novel algorithms that update the archived displacement time series, estimating the displacement for the new acquisition instead of reprocessing the entire archive with every acquisition. Moreover, the estimated displacement over distributed scatterers should be comparable with the displacement at nearby permanent scatterers, i.e., the displacement time series over distributed scatterers should be unbiased.
The short latency requirement can be met with slight modifications to short-temporal-baseline InSAR time-series analysis algorithms. On the other hand, the unbiased estimation of the displacement time series requires the use of all possible interferogram pairs (a full covariance matrix) to significantly reduce the impact of non-closing triplets of the multi-looked interferometric phases on the estimated displacement. The processing of a full covariance matrix of interferometric phases is computationally expensive, making traditional estimation algorithms impractical for InSAR big data. The sequential estimator proposed by Ansari et al. (2017) provides an efficient algorithm for processing the full covariance matrix of interferometric phases in batches. While this algorithm is big-data friendly and results in unbiased (or significantly less biased) displacement time-series estimates, it imposes a latency of a few months to update the displacement time series with the latest acquisitions. In particular, once the displacement time series has been estimated for the existing stack of acquisitions, the algorithm must hold the estimation until enough new acquisitions are accumulated to form a batch of SLCs called a mini-stack. Such a pause in production can lead to a latency of 4 months for a mini-stack of 20 SAR images, assuming a 6-day revisit time.
We propose a modified sequential estimator which allows us to estimate the displacement time series in near real time as new acquisitions come in. Starting from a mini-stack size of M, the new algorithm allows the mini-stack to expand up to 2M-1 acquisitions as new acquisitions arrive; it then shrinks back to a size of M, and the process continues for future acquisitions. At each shrinking stage, a compressed SLC, which is a linear transformation of all SLCs in the latest mini-stack, is estimated and used to form interferograms between the actual and compressed SLCs, ensuring the contribution of the long-temporal-baseline interferograms to the estimation of displacement at each acquisition. We compare the accuracy of the proposed algorithm to that of traditional full-covariance-matrix estimators using simulated data with different decorrelation models representing real-world scenarios and with short-lived signals over time, as described in Mirzaee et al. (2021). We also compare the results with traditional short-temporal-baseline estimates of displacement corrected for closure phase impacts, as suggested by Zheng et al. (2021). We further demonstrate the performance of the new algorithm using Sentinel-1 data over the southern San Andreas and central San Andreas faults, where the ground displacement signal is known from independent GNSS data. Our results show that the new algorithm can reach accuracies comparable with traditional full-covariance methods while also satisfying the very short latency requirements to produce displacement time series in near real time.
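A schematic sketch of the expand-and-shrink bookkeeping follows; the exact grouping of epochs at the shrink step and the placeholder compression function are our assumptions, not the authors' implementation:

```python
def compress(slc_batch):
    # Placeholder for the phase-linking compression: in practice the
    # compressed SLC is a linear transformation of the batch of SLCs.
    return sum(slc_batch) / len(slc_batch)

def process_stream(slcs, M):
    """Expanding/shrinking mini-stacks: the mini-stack grows from M up to
    2M-1 epochs as acquisitions arrive, then shrinks back to size M by
    compressing the oldest M-1 epochs into a single compressed SLC."""
    compressed, ministack = [], []
    for slc in slcs:                 # new acquisitions arrive one by one
        ministack.append(slc)
        # <- here: estimate the new epoch's displacement from interferograms
        #    within the mini-stack and against all compressed SLCs
        if len(ministack) == 2 * M - 1:
            compressed.append(compress(ministack[:M - 1]))
            ministack = ministack[M - 1:]   # shrink back to size M
    return compressed, ministack
```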
The project "Time-variable gravity and mass redistribution from synergistic use of GRACE-FO and Chinese gravity satellites" aims at significantly improving gravity field estimates from the existing GRACE and GRACE-FO missions and explores the possibilities of a joint observing scheme in combination with the upcoming Chinese gravity field mission TianQin-2. This work is a joint initiative of the National Natural Science Foundation of China (NSFC) and the German Research Foundation (DFG).
GRACE and GRACE-FO have provided near-continuous observations since 2002 for monitoring mass transport within the Earth system. We will establish an improved gravity field modeling methodology to fully explore the potential of GRACE-FO observations, e.g. by optimizing dealiasing signals, refining the noise modeling, calibrating the accelerometers, and optimizing anisotropic filtering techniques.
TianQin-2 is currently being planned and could provide an unprecedented opportunity for mapping mass redistribution with higher spatio-temporal resolution and accuracy. We will determine optimal orbit parameters for the two cases that TianQin-2 will be an inclined pair, i.e. to be used in combination with GRACE-FO, or a polar pair. Full-scale simulations of the joint observing scheme will shed light on its scientific application potential.
One focus area will be the East China Sea. We will isolate the ocean mass change signal over this study region and apply a joint inversion framework to close the regional sea level budget. The contribution of sediment discharge will be accounted for by evaluating oceanic velocities from an ocean model using a Lagrangian approach.
Groundwater storage (GWS) variations will be closely investigated over the North China Plain. First, GWS is calculated by evaluating GRACE(-FO) gravity solutions; for this purpose, non-GWS compartments will be removed using global and regional models. An error estimation accounts for the uncertainty of measurement errors, the post-processing of the gravity field solutions, and model errors. Secondly, GWS will be directly inferred from hydrological models. Observations from monitoring wells and GPS stations will be used as independent additional observations.
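The first step amounts to a simple budget removal; a minimal sketch with illustrative variable names (the compartments actually removed depend on the chosen global and regional models):

```python
# All inputs as anomalies in equivalent water height over the same region.
def groundwater_storage(tws, soil_moisture, snow, surface_water, canopy=0.0):
    """GWS anomaly = GRACE(-FO) TWS anomaly minus modeled non-groundwater
    compartments (soil moisture, snow, surface water, canopy storage)."""
    return tws - soil_moisture - snow - surface_water - canopy
```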
In this contribution, we will introduce our project and show some preliminary results.
The JULIA (Jülich In-Situ Airborne) database comprises water vapor measurements from more than 50 research aircraft field campaigns spanning more than 20 years, starting in 1997. The focus of the aircraft measurements is on the upper troposphere and lower stratosphere (UTLS) region, including valuable in-situ observations in the tropical stratosphere up to altitudes of 21 km.
We analyze the JULIA water vapor data set together with JETPAC MERRA-2 meteorological reanalysis data in the frame of the SPARC OCTAVE-UTLS initiative. In-situ research aircraft data exhibit high temporal and spatial water vapor variability due to their mission-oriented focus and measurement strategy. In addition, the dynamical variability and the water vapor gradient are strongest in the tropopause region. Therefore, we apply different coordinate transformations to compare the UTLS data and their residual variability using a variety of geophysically based coordinate systems (e.g., tropopause-relative, equivalent latitude, jet-focused) derived from the reanalysis data set. This approach provides a framework for comparing and validating measurements with diverse sampling patterns, such as in-situ and remote sensing observations. Further, it allows the analysis of water vapor trends in the UTLS region.
We show that water vapor variability in the UTLS is best reduced when using a combination of jet-based and either dynamical or thermal tropopause-based coordinates. A persistent variability of water vapor below a potential temperature of 350 K remains and cannot be reduced by coordinate transformations. Trend estimates indicate a slightly negative water vapor trend in the lowermost stratosphere (LMS) and a positive trend in the upper troposphere, but with larger variability. However, trend estimates remain highly uncertain below 350 K due to the lower spatial resolution.
In summary, we will show the geographical coverage of JULIA data in combination with coordinate transformations, which can be used for dedicated satellite validation in the UTLS.
Altimetric Sea Surface Heights (SSH) are the primary observable of altimetry satellites and are used to estimate various physical reference surfaces of the Earth, one of which is the Mean Sea Surface (MSS).
The covariance matrix of along-track altimetric SSH observations is rarely studied and often assumed to be uncorrelated, i.e., diagonal. With the aim of determining an MSS from altimetry data, the role of a fully or partially populated covariance matrix will be studied. Based on least-squares residuals from an initial MSS approximation with a diagonal covariance matrix, we pursue a quantification of the stochastic model, which is then included in a re-computation of the MSS, leading to more realistic estimates and formal errors.
We derive one-dimensional empirical covariance functions along the satellite orbit, which are used to verify stationarity over the region. Different mathematical covariance function models are assessed and evaluated for computational benefits. These covariance function models are either based on methodology from autoregressive (AR) processes, or a compactly supported equivalent is derived. This opens different ways of setting up the covariances, either directly via a sparse covariance matrix or indirectly via decorrelation algorithms. Both strategies preserve the general sparsity of the normal equations, which is a key factor for the computation time and efficiency of the MSS approximation algorithm.
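As a simplified illustration (our own notation, synthetic data), the fragment below estimates a one-dimensional empirical covariance function from along-track residuals and fits an AR(1)-equivalent exponential model C(k) = C0 · exp(−αk).

```python
import numpy as np
from scipy.optimize import curve_fit

def empirical_covariance(res, max_lag):
    """Biased empirical auto-covariance of a 1-D residual series."""
    res = res - res.mean()
    n = len(res)
    return np.array([res[:n - k] @ res[k:] / n for k in range(max_lag + 1)])

def exp_model(lag, c0, alpha):
    # AR(1)-equivalent covariance function: C(k) = c0 * exp(-alpha * k)
    return c0 * np.exp(-alpha * lag)

# Synthetic AR(1) residuals standing in for along-track SSH residuals:
rng = np.random.default_rng(0)
res = np.zeros(5000)
for i in range(1, len(res)):
    res[i] = 0.8 * res[i - 1] + rng.normal()

lags = np.arange(51)
emp = empirical_covariance(res, max_lag=50)
(c0, alpha), _ = curve_fit(exp_model, lags, emp, p0=(emp[0], 0.1))
```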
In a study over the Agulhas region south of South Africa, we use 10 years of data from various altimetric satellite missions. This region is well suited as a test case because the orbit arcs are long and no significant data gaps occur.
Peatlands are ecosystems of crucial importance due to their role in carbon sequestration, water cycle regulation and biodiversity preservation, among others. Their environmental services and the negative impacts of their alteration have been well documented. The first step toward managing these areas optimally is locating and quantifying them. However, this task is known to be expensive and labour-intensive. This research intends to explore a method to detect and predict peatlands under forests at high resolution on a regional scale, using synthetic aperture radar signals in combination with ancillary data. If successful, the method would offer a cheaper and less laborious alternative for detecting peatlands under forests, facilitating their localization and quantification.
The area of study is located in north-east Brandenburg, Germany, where ground-truth soil data were obtained on soil moisture, composition and groundwater level. To approach the goal of the research, a multi-step methodology was designed. As inputs to a Random Forest model, the evaluated datasets were balanced and resampled to different degrees. Backscattering coefficients from synthetic aperture radar (SAR) sources were tested. For the pre-processing and processing of SAR data, the S1-toolbox from the SeNtinel Applications Platform (SNAP) was used. The Random Forest package in R was used to model and predict determining characteristics of peatlands; ArcMap 10.4 was also used for some of the spatial data management.
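A hedged sketch of the modelling step is given below. The study used the Random Forest package in R; the Python fragment here merely illustrates, on synthetic placeholders, one way of balancing the classes by downsampling before training.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def downsample_majority(X, y, rng):
    """Resample so peatland / non-peatland classes are equally frequent."""
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    n = min(len(idx0), len(idx1))
    keep = np.concatenate([rng.choice(idx0, n, replace=False),
                           rng.choice(idx1, n, replace=False)])
    return X[keep], y[keep]

rng = np.random.default_rng(42)
# Placeholders: per-pixel features (SAR backscatter, terrain indices,
# forest cover type) and ground-truth peatland labels.
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.2).astype(int)

Xb, yb = downsample_majority(X, y, rng)
X_tr, X_te, y_tr, y_te = train_test_split(Xb, yb, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```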
L-band SAR signals have proven better capabilities in the detection of soil and subsoil surfaces given their band frequency. Likewise, HH/VV polarization has shown better penetration and versatility in reconstructing the backscatter matrix from different scattering mechanisms when soil surface features under vegetated areas are the target. Pixel values obtained from soil moisture retrieval, using the integral equation model multi-polarization inversion applied to L-band ALOS-PALSAR signals, showed coherence and predictive capability.
Backscattering coefficients combined with terrain indices and forest cover type were modelled and tested, and their accuracy was assessed. This research identified a method and several models that performed better than random classification. The method and models showed good results in terms of prediction capability and accuracy metrics. They are relevant and show potential on the still explorative path of detecting and predicting the location of peatlands under forests, especially considering the upcoming launch of L-band and P-band SAR missions.
The Mediterranean basin is the third-richest hotspot in the world in terms of plant biodiversity and one of the greatest sources of endemic plants on Earth. Its plant diversity accounts for 25,000 plant species [2], 60 percent of which are endemic, and more than 100 tree species are recorded in Mediterranean forests. It is estimated that the region has more than 25 million hectares of Mediterranean forests and about 50 million hectares of other Mediterranean wooded lands [1].
Forests and wooded lands are, at the same time, among the most threatened ecosystems in the Mediterranean because of the large population concentration (537 million people censused in 2015) and the high pressure from human activities, such as tourism, urbanization and agriculture. Finally, the Mediterranean is one of the regions most affected by climate change worldwide. Due to human alteration of the landscape over millennia, only 5 percent of the natural vegetation in the Mediterranean basin remains, the lowest proportion of any hotspot.
Many international agreements (i.e., the Convention on Biological Diversity (CBD), the United Nations Framework Convention on Climate Change (UNFCCC) and the European Green Deal) point out the need to preserve and restore forests in order to meet the Sustainable Development Goals (SDGs), the EU Forest Strategy and the EU Biodiversity Strategy for 2030.
For the conservation of forests and the identification of potential priority areas for restoration, it is first essential to quantify and delineate them precisely. Forest area and tree species composition are, indeed, two of the seven indicators for Sustainable Forest Management and an important aspect of Ecosystem Accounting. Forest cartography can help in understanding the state and evolution of forests, while identifying the drivers and impacts on forests in a spatial context.
To date, several maps of forests are available globally and at the regional scale of the Mediterranean Basin, at different spatial and temporal scales. These maps, produced from remote sensing data, distinguish between open and dense forests and generic forest types, such as evergreen, broadleaved, coniferous or mixed forest. In addition, national and supra-national space institutions have provided open-access data and AI tools for environmental monitoring, such as the ESA Copernicus Sentinels.
Here we present the results of a methodological workflow that classifies Mediterranean forest types based on dominant species, to better address the needs of many different stakeholders, such as decision makers, forest managers, land developers and researchers. The usefulness of such maps increases greatly if they are produced yearly, to monitor changes in land cover, track drivers of change in the landscape and check the effectiveness of conservation and restoration measures. This map of Mediterranean forest types, based on the dominant species, was developed in the framework of the EnBIC2Lab project, a cooperation between remote sensing, big data and Mediterranean forest experts. The aim of the present map is to create a regional baseline with comparable information and similar accuracy across all countries in the Mediterranean basin, by using data and a methodology that cover the region equally, independently of the existence or accessibility of national inventories and databases.
In our presentation we tackle the challenges of mapping Mediterranean forests, which stem from the diversity and complexity of this landscape. Mixed forests and open forests are frequent in the Mediterranean area and are difficult to map using reflectance only. For this reason, in this project we modelled the variables that define Mediterranean forest characteristics using diverse remote sensing data. Two main data sources were used: ESA Copernicus Sentinel-2 MSI and NASA's ASTER Digital Elevation Model (DEM). The biophysical properties of the different forest types, defined by the dominant tree species, were characterized by the multispectral data of Sentinel-2 MSI. Imagery from different seasons helped in discriminating deciduous from evergreen species, but also in separating species with different phenological traits, such as plant productivity or flowering. The use of a DEM and the derived slope and aspect layers assisted in identifying the bioclimatic vegetation stages and species with different slope and illumination requirements. Other layers, such as texture, supported the differentiation of forests from other classes, such as shrublands, tree crops and plantations. A layer of distance to rivers, extracted from the Global River Classification (GloRiC), aided in delineating riparian forests.
The relevance of the different remote sensing layers was evaluated by means of regression analysis and principal component analysis, per forest type. Once these layers were identified, forests were classified with a combined artificial intelligence technique involving Random Forests and regression-analysis extrapolation.
One of the major challenges of the project was to feed the models with reliable training samples. Several databases were filtered and harmonized to produce a database of forest types based on dominant species, co-occurrence of species and tree coverage. Among them, data were retrieved from the European Vegetation Archive (EVA), national inventories (Spain, Italy and Lebanon), and expert consultation for the North African countries.
Future work involves the use of Sentinel-1 data to improve the classification of certain forest types and other land cover classes, such as shrublands, plantations and tree crops.
1 Mittermeier, R.A., Robles Gil, P., Hoffman, M., Pilgrim, J., Brooks, T., Mittermeier, C.G., Lamoreux, J. & da Fonseca, G.A.B. 2004. Hotspots revisited. Earth’s biologically richest and most endangered terrestrial ecoregions. CEMEX Books on Nature. Mexico City, Mexico, CEMEX. 391 pp.
2 Blondel, J., Aronson, J., Bodiou, J.Y. & Boeuf, G. 2010. The Mediterranean region: biological diversity in space and time. Oxford, UK, Oxford University Press, 2nd edn. 392 pp.
3 Myers, N., Mittermeier, R. A., Mittermeier, C. G., Da Fonseca, G. A., & Kent, J. (2000). Biodiversity hotspots for conservation priorities. Nature, 403(6772), 853-858.
4 Thompson, J.D. 2005. Plant evolution in the Mediterranean. Oxford, UK, Oxford University Press. 304 pp.
5 Fady-Welterlen, B. 2005. Is there really more biodiversity in Mediterranean forest ecosystems? Taxon, 54(4): 905–910.
6 World Bank. 2015. Population estimates and projections. In: World Bank Open Data [online]. Washington, DC, World Bank Group.
7 Dernegi, D. 2010. Profil d’écosystème. Hotspot de la biodiversité du bassin méditerranéen. Arlington, USA, Critical Ecosystem Partnership Fund. 258 pp.
8 Giorgi, F. 2006. Climate change hot-spots. Geophysical Research Letters, 33(8): L08707.
9 IPCC. 2007. Climate change 2007: The physical science basis. Contribution of working group I to the fourth assessment report of the Intergovernmental Panel on Climate Change. Cambridge, UK, Cambridge University Press. 996 pp.
10 Sloan, S., Jenkins, C.N., Joppa, L.N., Gaveau, D.L.A. & Laurance, W.F. 2014. Remaining natural vegetation in the global biodiversity hotspots. Biological Conservation, 177: 12–24.
11 Glowka, L., Burhenne-Guilmin, F., Synge, H., IUCN Environmental Law Centre., & IUCN Biodiversity Programme. (1994). A guide to the Convention on Biological Diversity. Gland, Switzerland: IUCN--the World Conservation Union. https://www.cbd.int/ [Last access: November 2021]
12 United Nations Framework Convention on Climate Change (1992). New York: United Nations, General Assembly. https://unfccc.int/ [Last access: November 2021]
13 https://ec.europa.eu/info/strategy/priorities-2019-2024/european-green-deal_en [Last access: November 2021]
14 https://www.un.org/sustainabledevelopment/sustainable-development-goals/ [Last access: November 2021]
15 https://ec.europa.eu/environment/strategy/forest-strategy_es [Last access: November 2021]
16 https://ec.europa.eu/environment/strategy/biodiversity-strategy-2030_en [Last access: November 2021]
17 Shvidenko A, Barber CV, Persson R. 2005. Forest and woodland systems. In: Hassan R, Scholes R, Ash N, editors. Ecosystems and human well-being: current state and trends. Volume 1. Washington, DC, USA: Island Press. pp 585-621.
18 https://seea.un.org/ecosystem-accounting [Last access: November 2021]
19 https://land.copernicus.eu/global/products/lc [Last access: November 2021]
20 https://www.globalforestwatch.org/ [Last access: November 2021]
21 https://land.copernicus.eu/pan-european/corine-land-cover [Last access: November 2021]
22 http://www.etc.uma.es/enbic2-lab/ [Last access: November 2021]
23 https://www.esa.int/Space_in_Member_States/Spain/SENTINEL_2 [Last access: November 2021]
24 NASA/METI/AIST/Japan Spacesystems and U.S./Japan ASTER Science Team. ASTER Global Digital Elevation Model V003. 2019, distributed by NASA EOSDIS Land Processes DAAC, https://doi.org/10.5067/ASTER/ASTGTM.003. Accessed 2021-10-11. https://asterweb.jpl.nasa.gov/gdem.asp [Last access: November 2021]
25 Rivas-Martínez, Salvador. Mapa de las series de vegetación de España, Ministerio de Agricultura, Pesca y Alimentación, Instituto Nacional para la Conservación de la Naturaleza, 1987.
26 Ruiz de la Torre, J.; Ceballos y Fernández de Córdoba, L. (1971): Árboles y arbustos de la España peninsular. Madrid: Instituto Forestal de Investigaciones y Experiencias: Escuela Técnica Superior de Ingenieros de Montes, 512 pp.
27 Ruiz de la Torre, J., 1990. Distribución y características de las masas forestales españolas. Ecología, Fuera de Serie 1, 11-30.
28 https://www.hydrosheds.org/page/gloric [Last access: November 2021]
29 Chytrý M. et al. 2016. European Vegetation Archive (EVA): an integrated database of European vegetation plots. Applied Vegetation Science 19: 173–180. http://euroveg.org/eva-database [Last access: November 2021]
30 https://www.miteco.gob.es/es/cartografia-y-sig/ide/descargas/biodiversidad/mfe.aspx [Last access: November 2021]
31 https://www.inventarioforestale.org/en [Last access: November 2021]
32 https://www.northafricatrees.org/ [Last access: November 2021]
It is well known that climate change is a major concern, and the green transition has become the main challenge of our times. With the European Green Deal, monitoring and reporting on the state of nature gained significant importance in the European Union through the implementation of the new 2030 Biodiversity Strategy, which aims to enlarge existing Natura 2000 areas, halt the degradation of ecosystems, manage them sustainably, and protect them strictly.
Dobrogea is located in southeastern Romania, delimited on three sides by water: the Danube River (west and north) and the Black Sea (east). Its biogeographical uniqueness also stems from the intersection of two important biogeographical regions, the steppic and the Pontic, and a number of its habitats have been declared protected areas. Dobrogea is thus a region rich in protected areas with important Natura 2000 forest ecosystems, such as Hagieni Forest, located on the South Dobrogea Plateau, and Babadag Forest, located on the Babadag Plateau, one of the representative forests of the North Dobrogea landscape.
The aim of this study is to monitor the current conservation status and to identify changes over time between 2007 (the year of Romania's accession to the European Union and of the establishment of the Natura 2000 network) and 2018. Comparative and statistical analyses, based on the N2K Land Use/Land Cover status and change products delivered by the Copernicus Land Monitoring Service, led to the identification of change and no-change areas. Furthermore, the results of the Index of Change analysis allowed the classification of changes from the highest to the lowest degree. Based on these analyses, the study highlights the usefulness of the Copernicus Land Monitoring Service for this type of mapping.
Natura 2000 sites contribute to the minimization of biodiversity loss and environmental deterioration. Mapping and monitoring the up-to-date status of these areas is vital because, like other protected areas, some Natura 2000 sites are poorly supported at the local level and often face issues relating to planning and management, as well as resistance from local communities that are frequently misinformed about the purpose and benefits of these areas.
Copernicus Land Service data represent a powerful tool to record and assess changes in habitat quality due to land-use change. This study presents preliminary results based mainly on these data; a detailed habitat conservation study will follow, incorporating complementary information on many aspects of habitats at different spatial levels to facilitate future analyses of Natura 2000 valuation products. At the same time, the study highlights the need to increase awareness of and concern about environmental issues, including threats to biodiversity and the loss of green space.
Macroalgae growing in the littoral zone of Arctic regions are an important part of the ecosystem, as they form a habitat for aquatic macrofauna. Over the past decades, macroalgae communities have been strongly affected by rapid climate change, especially in Svalbard and the wider Arctic. These submerged macroalgae are exposed during low tide, making it possible to capture them with remote sensing techniques: drones, spaceborne optical sensors and Sentinel-1 SAR (synthetic aperture radar) imagery. The advantage of SAR for surface mapping is that it is not affected by clouds; however, there are no studies testing this technique for mapping littoral macroalgae. The aim of this study is to test the suitability of spaceborne SAR imagery for macroalgae mapping in the Arctic during low tide.
To classify water, land cover and macroalgae in the intertidal zone, ground-range-detected (GRD) images from the Sentinel-1 mission, a constellation of two satellites, are used. To ensure accurate results, the Sentinel-1 images for training and classification were chosen according to the following criteria: low tide, low wind speed, ice-free conditions, and dates as close as possible to those of sampling. The Sentinel-1 imagery is mapped using smile (Statistical Machine Intelligence and Learning Engine) Random Forest training and classification. Google Earth Engine servers are used to train the classifier, classify the images and evaluate the training parameters and accuracy. To train the Random Forest classification of the SAR data, sites captured by ground photography and drone (DJI Phantom 4 Advanced) imagery during the July 2019 and July-August 2021 expeditions in Isfjorden, Trygghamna and Eidembukta, Svalbard, are used. Drone orthophotos and ground photos from the expeditions were compared with SAR VV and VH backscatter values, and characteristic backscatter values for different surface types were chosen for further classification. Furthermore, drone orthophotos and mosaics are used to evaluate and compare the SAR image classification results.
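A minimal Earth Engine (Python API) sketch of this workflow is shown below; the area of interest, date window and training points are placeholders standing in for the digitised drone/ground reference data, not the study's actual inputs.

```python
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([13.5, 78.0, 14.8, 78.4])  # Isfjorden-area placeholder

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
        .filterBounds(aoi)
        .filterDate('2021-07-01', '2021-08-31')  # ideally low-tide, low-wind dates
        .filter(ee.Filter.eq('instrumentMode', 'IW'))
        .select(['VV', 'VH'])
        .median())

# Placeholder reference points; classes: 0 = water, 1 = land, 2 = macroalgae.
training_points = ee.FeatureCollection([
    ee.Feature(ee.Geometry.Point([13.9, 78.20]), {'class': 0}),
    ee.Feature(ee.Geometry.Point([14.2, 78.25]), {'class': 1}),
    ee.Feature(ee.Geometry.Point([13.8, 78.15]), {'class': 2}),
])

samples = s1.sampleRegions(collection=training_points,
                           properties=['class'], scale=10)
classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
                .train(features=samples, classProperty='class',
                       inputProperties=['VV', 'VH']))
classified = s1.classify(classifier)
```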
The results will show the area of macroalgae distribution in Isfjorden, Svalbard, considering only macroalgae exposed at low tide. These results will help map and understand the substrate on which macroalgae grow and the extent of macroalgae habitat in the Svalbard area.
Mediterranean forests are an important natural resource, as they provide a range of ecosystem services. Retrieval of spatially explicit information on wood-provisioning forest ecosystem services is essential for sustainable forest management and for establishing reporting mechanisms for resources at regional, national and international levels. Remote sensing has been used effectively in forest inventory and monitoring, providing accurate information on growing stock volume (GSV). Notably, the Sentinel-2 Multi Spectral Instrument (Sentinel-2) has proved a reliable information source for forest attributes and has been successfully used for GSV estimation.
The open-access data from the European Space Agency's Sentinel missions and the enhanced characteristics of its optical sensors, along with state-of-the-art learning algorithms, enable robust retrieval methods for accurate forest attribute estimation. A broad range of statistical techniques, from simple or multiple linear regression to sophisticated machine learning (ML) methods, have been used to estimate forest stand parameters such as volume, basal area, and biomass.
Selection of the most efficient machine learning algorithm for a given region is still a challenging process and an open field of research. Stacked generalization ensemble algorithms can harness the capabilities of a range of well-performing models, combining their predictions to increase overall accuracy.
In this respect, we examine the performance of three well-established algorithms, namely Generalized Linear Models (GLMs), Random Forest (RF) and Gradient Boosting Machines (GBM), as base learners for estimating GSV, using Sentinel-2 imagery and field inventory data in a heterogeneous Mediterranean forest area in northern Greece. Subsequently, a stacked generalization model using the Super Learner (SL) ensemble algorithm is introduced, combining the base-model predictions.
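For illustration, a hedged scikit-learn sketch of this stacked set-up is given below (synthetic data; the study's exact Super Learner implementation and cross-validation scheme may differ). The meta-learner is fit on out-of-fold base predictions, which is the essence of stacked generalization.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import LinearRegression

# X: per-plot Sentinel-2 predictors; y: field-measured GSV (placeholders here).
X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)

base_learners = [
    ('glm', LinearRegression()),
    ('rf', RandomForestRegressor(n_estimators=500, random_state=0)),
    ('gbm', GradientBoostingRegressor(random_state=0)),
]
# cv=5: the final (meta) estimator sees only out-of-fold base predictions.
stack = StackingRegressor(estimators=base_learners,
                          final_estimator=LinearRegression(), cv=5)
stack.fit(X, y)
print('stacked R^2:', stack.score(X, y))
```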
All four machine learning models performed similarly in terms of accuracy, presenting satisfactory results. Among the base learners, the RF model had the best performance (R² = 0.85), while the GLM performance was the least satisfactory (R² = 0.80). The stacked model outperformed all three base learners, though only with a small performance gain (R² = 0.88), and a statistical test indicated that the learners' accuracies were not statistically different from each other.
Overall, the findings of this study confirm that stacked generalizations do not improve prediction accuracy considerably if the base learners produce highly correlated predictions. Future research will consider larger training datasets and several base learners with higher variability.
Knowing the vegetation condition on a global scale is important for many applications, such as agricultural yield prediction, fire hazard assessment or drought monitoring. Of special interest are vegetation indices, which do not require extensive expert knowledge about Earth observation data and are thus easy to interpret for policy makers.
Current optical indices, such as the Vegetation Condition Index based on NDVI, are limited by cloud coverage and saturation effects. As an alternative, we propose to use a microwave-based index, which is unaffected by cloud coverage and offers deeper penetration into the vegetation, albeit at a lower spatial resolution. In the microwave domain, data from various space-borne microwave missions are available from the late 1970s onward. From these observations, vegetation optical depth (VOD) can be estimated, which is an indicator of vegetation water content. While long-term VOD changes can be attributed to biomass changes, short-term deviations are due to fluctuations in relative plant water content and are therefore an indicator of plant water status and stress. A VOD-based, water-content-related index therefore shows potential to supplement traditional optical indices, which are based on the greenness of the vegetation.
The Standardized VOD Index (SVODI) is calculated using a new probabilistic merging method. VOD derived via the Land Parameter Retrieval Model (LPRM) from multiple microwave radiometers of the past 30 years is used as input. The index values should depend only on the vegetation conditions and be unaffected by the changing quality and quantity of the microwave observations over this period. However, traditional index-generation methods assume that the data are evenly distributed over time and have similar error characteristics over the entire period. We present improvements to these methods which deal with these limitations. In principle, the index-generation method is not limited to VOD and could be applied to entirely different variables.
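A deliberately simplified sketch of the standardization idea follows: anomalies are computed against a day-of-year climatology and scaled to zero mean and unit variance. The actual SVODI merging is probabilistic and additionally handles changing sensor quality and availability, which this fragment omits.

```python
import numpy as np

def standardized_index(vod, doy, window=15):
    """vod: 1-D merged VOD series; doy: day-of-year (1..365) for each sample."""
    out = np.full(vod.shape, np.nan)
    for d in np.unique(doy):
        # Climatology from all samples within +/- `window` days of year d,
        # wrapping around the year boundary:
        dist = np.minimum(np.abs(doy - d), 365 - np.abs(doy - d))
        ref = vod[dist <= window]
        sel = doy == d
        out[sel] = (vod[sel] - ref.mean()) / ref.std(ddof=1)
    return out
```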
We show that SVODI exhibits temporal patterns similar to those of the well-established optical Vegetation Condition Index (VCI) in the subtropics. In more heavily forested regions, such as the tropics or the boreal forests, the correlation is very weak, indicating that SVODI is sensitive to different types of vegetation disturbance than VCI. SVODI therefore allows us to extend our understanding of vegetation responses to extreme weather conditions. In regions where water availability is the main limit on vegetation growth, SVODI shows, as expected, patterns similar to meteorological drought indices. Extreme SVODI values also follow the climate oscillation indices SOI and DMI in the relevant regions.
Seagrasses offer a wide range of ecosystem services, heralded as natural climate solutions, which are fundamental for sustaining the wellbeing and resilience of humans and the natural environment. These vegetated coastal foundation organisms are one of the world’s most productive ecosystems and play an important yet often overlooked and underassessed role in climate change mitigation and adaptation, biodiversity maintenance, and coastal protection from extreme weather events. Their carbon sequestration and storage potential can support a variety of Multilateral Environmental Agreements like the Nationally Determined Contributions of the Paris Agreement, the EU Green Deal, and the Sustainable Development Goals. Standardized, comprehensive and spatially explicit knowledge of national seagrass extent, condition and ecosystem services is crucial for meticulous seagrass ecosystem accounting.
Within the Global Seagrass Watch project, funded by the German Aerospace Center (DLR) and supported by the Group on Earth Observations-Google Earth Engine program, we processed 18,881 single images to create a multi-temporal Sentinel-2 composite using the cloud computing platform Google Earth Engine to quantify the seagrass extent, and associated carbon stocks and sequestration rates for Bahamian waters.
Preliminary results yield a seagrass extent larger than the land area of The Bahamas, which can store approximately 1,101 Mt CO2, and sequester 26 times more CO2 than annually emitted by the country. However, only about 11% of the Bahamian seagrass area lies within Marine Protected Areas.
Our generated national data inventories underline the necessity of integrating seagrass blue carbon into national climate agendas and showcase the need for stronger and more cost-effective conservation and restoration efforts for seagrass meadows. Moreover, our data and technology can help estimate the economic value of Bahamian seagrasses and their ecosystem services, demonstrating the importance of Earth Observation applications for ecosystem accounting frameworks like the System of Environmental-Economic Accounting (SEEA) Ecosystem Accounting. We envisage that integrating Earth Observation into biophysical modelling could support holistic solutions for climate change mitigation, marine spatial planning, and biodiversity research within and beyond The Bahamas.
Based on the latest satellite data from the European Space Agency under the Copernicus program, available since 2016, it is possible to receive satellite images of every point of the land surface on a regular basis. Such data are provided at a spatial resolution of 10 meters, which opens up a huge range of research problems for scientists to address.
At the same time, Ukraine faces an urgent problem that almost every country is currently struggling with: the monitoring of waste storage, including waste from human activities, and its storage, reuse or disposal. About 22.6 thousand unlicensed dumps have been recorded according to the Ministry for Communities and Territories Development of Ukraine. This happens because many legal landfills are full, and in some territories they do not exist at all. That is why there is a need for monitoring and tracking of existing landfills. Currently, there are services that track the locations of landfills, but these are only point indicators. Such services do not provide information on a landfill's area and its changes over time, whereas our landfill monitoring algorithm, thanks to the possibility of using historical satellite data, makes it possible to track landfill areas over time.
There are many studies describing different technologies for landfill monitoring based on satellite data, each using different satellite data and artificial intelligence approaches. For example, for Iran the authors compared four satellite providers with different parameters and spatial resolutions for mine waste dump monitoring [1], and in [2] the authors analyzed different indices and landfill temperatures. Various CNN architectures are used worldwide as neural network algorithms for landfill identification [3], [4].
In this paper, within the project "Landfill detection and monitoring service", we propose an algorithm based on neural networks and satellite data which allows automated monitoring of landfills over remote territories, as well as the ability to assess retrospective information and monitor changes over time. During the project implementation there were some difficulties. In particular, as landfills are quite dynamic and often change their contours, the neural network model had to be trained on a single satellite image for a specific date; accordingly, the time series of satellite data was used only to track the dynamics of a particular landfill.
Each of the classification methods has its advantages and disadvantages. The pixel-based method identifies artificial objects well but does not cope well with separating landfills from other artificial objects, quarries or sands. The object-based method, in turn, identifies landfills well but also flags some parts of cities that are similar to landfills in their spectral characteristics. In our study, the main problem, the separation of landfills from quarries and artificial objects, was solved using a fusion of pixel-based [5] and object-based [6] classification, which helped to identify the areas belonging to the landfill class.
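One possible form of such a fusion is sketched below; the exact decision rule is ours, for illustration only. Per-pixel class probabilities are averaged within each object from the segmentation, and a pixel is kept as landfill only where the pixel-based and object-based decisions agree.

```python
import numpy as np

def fuse(pixel_probs, segments, landfill_class, thr=0.5):
    """pixel_probs: (H, W, n_classes) probabilities from the pixel-based model;
    segments: (H, W) integer object labels from the object-based step."""
    h, w, k = pixel_probs.shape
    flat = segments.ravel()
    n_seg = int(flat.max()) + 1
    counts = np.maximum(np.bincount(flat, minlength=n_seg), 1)
    # Mean probability of each class within every segment (object level):
    seg_probs = np.stack(
        [np.bincount(flat, weights=pixel_probs[..., c].ravel(),
                     minlength=n_seg) / counts for c in range(k)], axis=1)
    obj_class = seg_probs.argmax(axis=1)[segments]  # object-based decision per pixel
    pix_class = pixel_probs.argmax(axis=2)          # pixel-based decision
    return ((obj_class == landfill_class) & (pix_class == landfill_class)
            & (pixel_probs[..., landfill_class] > thr))
```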
For the pilot areas (the Olhynska, Pokrovska, Myrnohradska and Kurakhivska territorial communities of Donetsk region, Ukraine), the first results have already been obtained; in particular, landfills were detected using Planet and Sentinel-2 imagery. In the future, we plan to expand our product, first across the Donetsk region and then to the whole of Ukraine. We plan to involve city and municipal authorities and environmental agencies, in particular the Department of Ecology and Natural Resources of the Donetsk Regional State Administration and the State Ecological Inspectorate in the Donetsk region. The preliminary results are presented in the web interface at http://inform.ikd.kiev.ua/ldms/.
The project "Landfill detection and monitoring service", winner of the EastCode2021 national innovation competition, is implemented by the NGO "Open Initiatives" under the technical administration of Center42 within the UN Recovery and Peacebuilding Programme, with financial support from the governments of Denmark, Switzerland and Sweden.
References
[1] Khosravi, Vahid, et al. "Satellite Imagery for Monitoring and Mapping Soil Chromium Pollution in a Mine Waste Dump." Remote Sensing 13.7 (2021): 1277.
[2] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," ICLR (2015): 1-14.
[3] Adedeji, Olugboja, and Zenghui Wang. "Intelligent waste classification system using deep learning convolutional neural network." Procedia Manufacturing 35 (2019): 607-612.
[4] Torres, Rocio Nahime, and Piero Fraternali. "Learning to Identify Illegal Landfills through Scene Classification in Aerial Images." Remote Sensing 13.22 (2021): 4520.
[5] Kussul, Nataliia, Mykola Lavreniuk, and Leonid Shumilo. "Deep Recurrent Neural Network for Crop Classification Task Based on Sentinel-1 and Sentinel-2 Imagery." IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2020.
[6] Shumilo, Leonid, Nataliia Kussul, and Mykola Lavreniuk. "U-Net Model for Logging Detection Based on the Sentinel-1 and Sentinel-2 Data." 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. IEEE, 2021.
The EU Biodiversity Strategy and the 7th Environment Action Programme highlight the importance of halting the loss of biodiversity and ecosystem services. This is achieved by preserving ecosystems and fully integrating environmental requirements into policymaking to address climate change. The EU H2020 project "Marine Coastal ecosystems Biodiversity and Services in a changing world" (MaCoBioS) aims to generate new knowledge on the inter-relations between climate change, biodiversity and ecosystem services, and to ensure effective knowledge transfer to relevant stakeholders. To achieve this goal, several marine and coastal ecosystems, representative of different marine ecoregions, are being studied. Among these, mangrove forests are considered because of their ecological and socio-economic importance for coastal communities, such as those of the small island countries of the Lesser Antilles.
Mangrove forests support high species diversity and provide a wide range of ecosystem services, including provisioning, regulation and maintenance, and cultural services. Among the regulating services, mangrove forests are essential in reducing coastal vulnerability to flooding and erosion, and they help remove carbon dioxide from the atmosphere through photosynthesis, burying atmospheric carbon in the soil through additional processes and thereby mitigating the effects of climate change. Indeed, these ecosystems maintain vast below-ground carbon stores per unit area. Despite their ecological and social importance, these areas are affected by various environmental (e.g., hurricanes, sea-level rise, hyper-sedimentation) and anthropogenic pressures (e.g., pollution, organic loading, coastal development, overfishing). Monitoring the extent and the ecological condition of mangroves over time has become a priority to understand how these changes affect their structure, functioning, and the ecosystem services (biophysical and monetary) they provide. Such spatially explicit data and information constitute the five core components of the United Nations (UN) System of Environmental-Economic Accounting (SEEA) Ecosystem Accounting (SEEA EA), an integrated and comprehensive statistical framework adopted by the United Nations Statistical Commission in March 2021, which can be applied to a wide range of policies and decision-making processes that support the global sustainability agenda.
However, conducting routine field monitoring programmes in mangrove forests is challenging due to the complexity of these environments, their difficult access, and their remoteness. Recent advances in the availability of remote sensing data, image processing methods, and computer and information technology have made it possible to regularly observe and monitor mangroves at different spatial scales and locations. In this context, the technical capabilities of the recent Sentinel-2 mission may aid progress in this field. The Sentinel-2 mission currently consists of two twin satellites, Sentinel-2A, launched in 2015, and Sentinel-2B, launched in 2017. In the future, two additional satellites, Sentinel-2C and Sentinel-2D, will ensure the continuity of the mission. These satellites provide data in 13 optical bands, covering visible, NIR and SWIR wavelengths at spatial resolutions of 10 m, 20 m, and 60 m. Due to their spatial resolution, some of these bands are of particular interest in small islands for the high/medium resolution mapping of mangroves. The revisit time of the Sentinel-2 satellites (5 days at the equator) is also of great benefit, allowing the monitoring of mangrove functioning in highly cloudy regions such as the Lesser Antilles.
The objective of this study is to evaluate Sentinel-2 for mapping and monitoring Net Primary Productivity (NPP) in mangrove areas of small island regions, using Martinique (French West Indies) and Bonaire (Caribbean Netherlands) as the main case studies. NPP estimates can be used as a proxy for organic production of the forest in a given area at a given time and thus can provide valuable information on the state of the ecosystem and its functioning. NPP estimates not only allow us to assess the impact of anthropogenic pressures or climate-related events on the condition of mangroves, but they are also directly related to the function of carbon sequestration and thus are a key measure for assessing the service of climate regulation.
The traditional method to determine NPP requires the assessment of above- and below-ground wood production and litterfall biomass, which is very challenging and time-consuming. Therefore, estimating NPP through remote sensing opens a vast avenue of research that could improve the temporal and spatial scales of mangrove studies. Looking at NPP through remote sensing makes it possible to 1) understand mangrove functionality and productivity under various pressures and environmental conditions, and 2) help project future changes in their ecological condition and benefits to people, according to climate scenarios and management measures. Currently, NASA produces NPP products derived from MODIS at 500 m spatial resolution. However, this product has limitations for mapping certain processes in small areas like the ones considered in this study.
To achieve this goal, several Sentinel-2A and 2B images with low cloud cover were downloaded from the Copernicus Open Access Hub at Level-1C (Top-of-Atmosphere, TOA, reflectance). The images were atmospherically corrected using the iCOR processor before applying the NPP model. The semi-empirical NPP model was calibrated with canopy cover and leaf area index (LAI) data collected in the field. Ground-truthing data were collected in Bonaire (N = 3 stations) and Martinique (N = 13 stations) in 2021, in mangroves presenting a gradient of environmental conditions from "completely destroyed" to "very good". We haphazardly selected a 100 m2 plot to characterize the forest structure at each station, recording species type, tree density, tree height, and stem diameter. If tree density was too high in the plot, we reduced the plot size to 25 m2. In Martinique, continuous belt transects extending perpendicular from the strandline to the seaward fringe complemented the plots. Within each plot, canopy cover was visually assessed, and a minimum of 19 light measurements were taken using a digital luxmeter. Light measurements were also taken in direct sunlight in the nearest open space adjacent to the study plot. Next, LAI was determined from the ratio of mean light intensity within the plot to light intensity in direct sunlight. The results obtained from the analysis of the Sentinel-2 images were compared with the NASA NPP (MOD17A3) product. In a second step, these Sentinel-2 NPP estimates are to be compared with the forest structure assessments conducted at each station.
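A hedged sketch of the LAI step follows. The abstract reports using the ratio of within-plot to open-sky light intensity; a common way to convert this transmittance to LAI is the Beer-Lambert law, and the extinction coefficient k = 0.5 below is an assumed typical value, not a parameter taken from the study.

```python
import numpy as np

def lai_from_light(lux_in_plot, lux_open, k=0.5):
    """lux_in_plot: >= 19 luxmeter readings under the canopy;
    lux_open: reading in direct sunlight next to the plot.
    Beer-Lambert inversion: LAI = -ln(T) / k, with transmittance T."""
    transmittance = np.mean(lux_in_plot) / lux_open
    return -np.log(transmittance) / k

# Example with illustrative readings (lux):
lai = lai_from_light(np.array([12e3, 9e3, 15e3]), lux_open=80e3)
```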
This study provides NPP estimates with unprecedented spatial resolution and continuity for the case studies examined. The preliminary results from the comparison of Sentinel-2 NPP estimates with the NASA NPP (MOD17A3) product are promising and open new ground for long-term monitoring. Using Sentinel-2 NPP estimates will help in understanding how anthropogenic and natural processes affect the extent, ecological condition, and functions of these valuable habitats at the spatial and temporal scales required to make informed decisions for their management and protection. Furthermore, the long-term monitoring capabilities offered by Sentinel-2 data might help meet the growing requirements of global environmental policy frameworks, including the post-2020 Global Biodiversity Framework, the UN Sustainable Development Goals, and the UN System of Environmental-Economic Accounting (SEEA), particularly in the face of global changes.
In times of climate and biodiversity crisis, it is paramount to have robust methods at hand that allow quick and frequent monitoring of ecosystem properties. Remote sensing techniques facilitate ecosystem change monitoring at larger scales and within shorter time intervals than ground-based vegetation surveys, albeit with lower accuracy. The retrieval of biophysical properties such as leaf chlorophyll content (LCC) from spectral signals helps to gather information on plant health and stress and to improve the management of agricultural areas.
For validation of predicted LCC, laboratory measurements of in-situ samples are always needed. The traditional and most accurate method for LCC determination is quantitative chemical analysis via spectrophotometry. However, this destructive technique is labour-, time- and cost-intensive. Hence, it has become common practice to use a handheld chlorophyll meter, such as the SPAD-502 (Konica Minolta), in the field for straightforward, non-destructive measurement of indicator values for LCC. This allows for relatively little laboratory work, as only a subset of samples is needed to establish a species-specific relationship between SPAD measurements and LCC based on wet chemical analysis. Correlations between SPAD values and LCC have been described in previous studies by linear or non-linear equations, depending on the species of interest. Correlations have mainly been tested for single agricultural plant species, such as wheat, potato, maize, soybean or rice. Only a few studies have focused on wild-growing plants in general. Further, it is strongly recommended to perform a calibration for every SPAD meter in use and for every single species addressed by the research question. Such an approach is not feasible in the context of field work in natural or semi-natural habitats.
Although of promising value for their monitoring and management, the retrieval of LCC from SPAD values has not been tested for natural ecosystems with mixed species compositions yet. Therefore, our main research questions are:
1. How reliable are SPAD value conversions to LCC in semi-natural grassland areas with mixed species compositions?
2. Can we apply established correlations between SPAD values and LCC to semi-natural grassland areas in order to determine LCC of different species in mixed stands?
Using plant samples from three structurally different grassland areas (a floodplain meadow, a nutrient-rich grassland, and a dry, nutrient-poor grassland), we fitted regression models to test the relationship between SPAD values taken in the field and the corresponding LCC obtained from chemical analysis (spectrophotometry) in the laboratory.
First results show that SPAD-LCC relations are not strongly pronounced in semi-natural grasslands with mixed species compositions. We found R² values of up to 0.34 when samples with specific leaf features were excluded from the analysis. Leaf characteristics that enhance leaf reflectance (e.g., a waxy cuticle or trichomes) are likely to be distorting factors when assessing SPAD-LCC relationships. Further, popular calibration curves for single species apply only to some extent to semi-natural grasslands. As a result, we expect that potentially no more than the range of LCC can be derived from SPAD values for an area of interest. This could, nevertheless, still suit studies implementing radiative transfer models (RTMs), which require an LCC range as input rather than exact chlorophyll values for each species included in the model.
Generally, in order to achieve reliable results, the effect of leaf structures that alter light transmission must be carefully taken into account when determining a functional relationship between SPAD and LCC.
With this study, we demonstrate that applying commonly used SPAD-LCC models may not be appropriate in semi-natural grasslands. It underlines the need to find solutions for quick and labour-saving LCC determination in natural habitats with mixed species compositions, to allow reliable monitoring of ecosystem properties based on Earth observation data.
Fine-scale and up-to-date forest type characterization and mapping is essential for effective management by forest agencies and for providing updated forest inventory maps. However, at national scales forest type mapping can be a challenging task, even with remote sensing data, not only because of the large processing power and computational time required, but also because of differences in vegetation phenology caused by elevation and topography, or by inter-annual climate variability.
Previous research has mainly addressed the problem at local scales. However, cloud computing platforms now enable national-scale studies by providing large computational resources and allowing the integration of all available images from medium-resolution satellite sensors. In addition, the abundance of data and computational resources allows the exploitation of spectral variability through time, by utilizing temporal metrics (such as phenology and productivity metrics) to overcome phenological variability. Moreover, object-based classification approaches have been proven to considerably reduce processing time in analyses over extended geographical areas and for large datasets.
This study focuses on the development of a workflow for object-based forest type mapping for the whole Greek forested territory. The processing was implemented in the Google Earth Engine cloud computing environment using 10 m spatial resolution Sentinel-1 and Sentinel-2 data. All available images were used and combined to create seasonal composites. A Random Forest classification algorithm was employed using spectral bands, multispectral indices, spectral-temporal metrics and elevation information. The classification nomenclature included nine classes, and the reference data samples were collected using forest inventory maps and the Natura 2000 product. The high accuracies the classification yielded indicate that the proposed methodology can effectively classify the different forest types across gradients of elevation, aspect and climate across the country. The Random Forest variable importance revealed the high value of the temporal metrics, which appeared among the 20 most important predictor features.
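The compositing idea can be illustrated with a short Earth Engine (Python API) fragment; the dates, bands, cloud filter and NDVI percentiles below are placeholders for illustration, not the study's exact configuration.

```python
import ee
ee.Initialize()

def ndvi(img):
    return img.normalizedDifference(['B8', 'B4']).rename('NDVI')

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterDate('2020-01-01', '2021-01-01')
        .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30)))

# Seasonal median composites (two seasons shown as placeholders):
seasons = {'spring': ('2020-03-01', '2020-06-01'),
           'summer': ('2020-06-01', '2020-09-01')}
composites = [s2.filterDate(a, b).median().select(['B2', 'B3', 'B4', 'B8'])
              for a, b in seasons.values()]

# Simple spectral-temporal metrics: NDVI percentiles over the full year.
metrics = s2.map(ndvi).reduce(ee.Reducer.percentile([10, 50, 90]))

stack = ee.Image.cat(composites + [metrics])
# `stack` would then be sampled at reference points (forest inventory maps,
# Natura 2000 product) and fed to a Random Forest classifier.
```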
There is a plethora of Earth Observation satellites currently in orbit, many of which collect free and open data. The EU's Copernicus satellite programme, operated by ESA, provides operational data from pairs of Sentinel-1 and Sentinel-2 satellites. These collect roughly weekly observations globally, in narrow optical bands and dual-polarisation Synthetic Aperture Radar (SAR), at 20 m resolution or better, and are by themselves an incredibly powerful tool for mapping and monitoring the biosphere. It is increasingly clear that dense stacks of Sentinel-1 and Sentinel-2 data, consisting of all observations of a site over a 12-month period or more, can be used to map a wide variety of parameters, from land cover type and canopy cover through to water stress and aboveground biomass, when combined with advanced machine learning techniques in powerful cloud computing environments.
This operational foundation is now being built upon with lidar from GEDI and ICESat-2, longer-wavelength SAR from the upcoming NISAR and BIOMASS missions (NASA/ISRO and ESA, respectively), and commercial constellations offering higher spatial and temporal resolutions, and soon higher spectral resolutions too. But it is unclear how exactly these further datasets will be integrated into analysis pipelines. Will additional datasets necessarily produce more accurate output maps, or does the risk of overfitting in Convolutional Neural Networks and other deep learning techniques mean that more data will not necessarily produce more accurate results? This is a particular risk when looking at changes in difficult-to-map variables, such as biomass or biodiversity: can we ever trust change maps produced by differencing machine-learning-derived maps that use multiple types of satellite data?
In this talk I will present a synthesis of results from a few different studies I am involved with in the tropics, leveraging biomass change data from actively manipulated forest plots in the Tropical Forest Degradation Experiment (FODEX) project I run, long-term biomass change from the ~10,000 plot SEOSAW network of southern African savanna/woodland plots, and new tree biodiversity data from central Africa. I will show that there is a way forward, and that these new datasets will be hugely valuable and can genuinely increase prediction accuracy. But error propagation models and truly independent test datasets will be more important than ever.
Posidonia oceanica is a seagrass species endemic to the Mediterranean Sea. It is one of the main sources of oxygen to the sea and is considered a good bioindicator of water quality. It also provides a habitat for aquatic life. This seagrass is found in shallow regions close to the shore and is therefore strongly affected by human activities. One square metre of the seagrass needs more than 100 years to form; hence, it is crucial to avoid damage and to restore its meadows. Recently, satellite remote sensing has been used to map these meadows, though only to a limited extent. Here we present a novel method of large-scale mapping of Posidonia oceanica from satellite images on the basis of a deep learning approach. The technique requires atmospherically corrected red-green-blue composite images, bathymetry, and in situ data to train the network. The technique is applied to the generation of high-resolution Posidonia oceanica cartography for the Balearic Islands in Spain. The result showed a mean accuracy of 98.5%, with 94% of pixels correctly classified as non-Posidonia and 4.5% correctly classified as Posidonia. Our model can automatically reproduce in detail the shape of the seagrass meadows at 10 m spatial resolution, with a sensitivity of 84% for Posidonia pixels. Our solution to map and monitor the Posidonia oceanica meadows in the Mediterranean Sea is called SIMBAD (Sentinel Imagery Multiband Analysis and Dissemination), a scientific exploitation platform to protect Earth's ecosystems from space. SIMBAD was incubated by the European Space Agency Business Incubation Centre (ESA BIC) Comunidad de Madrid in 2018 and is specifically designed to map this meadow in the Mediterranean Sea. This work can scale up monitoring capabilities for the Posidonia oceanica meadows and allow efficient implementation of restoration programs. It can also seed evolution studies, carbon stock calculation, and correlation of the meadows with water quality parameters, boat-anchoring pressure, and physical oceanography.
Kelps are autotrophic ecosystem engineers in Atlantic temperate coastal environments and play critically important roles from both an ecological and a climate-cycle perspective. The many species of the order Laminariales colonise a diversity of littoral environments and provide an important underwater habitat to hundreds to thousands of species of invertebrates, fishes, and other algae. Kelp has also been harvested historically for the production of fertiliser, and its commercial potential in areas such as the food, textile and pharmaceutical industries has become an active area of investment in the marine sector. Only recently, however, have these kelp forests been acknowledged as playing an important role in the carbon cycle of the oceans, with several studies demonstrating that collectively they absorb significant amounts of atmospheric carbon dioxide. This new insight is offset by concern that this oceanic natural resource is under threat due to the effects of climate change and widespread unsustainable harvesting practices. Regular in situ monitoring of kelp forests, in terms of estimating their total biomass and quantifying their spatial extent over time, is a time-consuming, costly and resource-intensive activity; only limited spatial regions can be studied at any one time, and these activities depend strongly on national funding constraints and priorities along the Atlantic coasts harbouring these kelp ecosystems. The availability of regularly sampled Earth Observation data offers great potential for implementing such monitoring programmes from orbit. Previous studies have demonstrated the capability of Landsat data to characterise kelp forests in the clear waters off the Californian, South African and Australian coasts, and the availability of both Sentinel and higher-cadence 'smallsat' constellation data provides significantly greater opportunities to obtain clear-sky images of kelp biomass concentrations in temperate and cold-temperate coastal areas of the Atlantic. Practical realisation of such a remote sensing monitoring capability requires ways of dealing with the enhanced turbidity of these waters, suitable ground-truth data to underpin any extrapolations derived from orbital data, and means of automating the acquisition, analysis and communication of the monitoring data products to scientists, regulatory authorities, industry and the wider public, in order to better understand and manage this invaluable Atlantic natural resource. In this contribution we describe how such an approach is being implemented as part of a project recently funded by the Irish Government's Environmental Protection Agency to study the diversity and resilience of kelp ecosystems off the Irish coast.
The eutrophication caused by nutrient loading of coastal waters is a major issue around the world. Managing the eutrophication process in coastal waters requires monitoring the concentrations of nutrients. Nitrogen and phosphorus are also among the indicators of the ecological state of a waterbody. Nutrients cannot be mapped directly with remote sensing, as they do not affect the water colour detected by remote sensing sensors. However, nutrient concentrations may correlate with water constituents that do change the water colour, and this may allow remote sensing mapping of nutrients. In the current study we used Sentinel-3 OLCI data to derive total nitrogen (TN) and total phosphorus (TP) in Pärnu Bay (Estonia) over a six-year period. We had 97 TP and 87 TN in situ measurements collected on the same days as cloud-free Sentinel-3 OLCI image acquisitions. We tested over 25,000 different two- or three-band ratio options for retrieving nutrient concentrations. The best-performing algorithms for estimating TN showed relatively good results (R² = 0.66), while the TP results were more moderate (R² = 0.30), with a mean absolute error of 20% for both nutrients. The best algorithms used reflectances at three different wavelengths: 665 nm, 674 nm and 865 nm for TN; and 442.5 nm, 674 nm and 885 nm for TP. To study the temporal variability of TN and TP in Pärnu Bay, we used the best algorithms and produced TN and TP maps for the 2016-2021 period. The maps demonstrate large variability in Pärnu Bay and a significant TN and TP input from the Pärnu River. The regular national monitoring, beginning in 1993, shows decreasing trends in TN and TP concentrations. We have compared the yearly averages from several national monitoring program stations with the satellite products, and our results agree well with the national monitoring program results. The good results in deriving TN and TP concentrations in Pärnu Bay may be explained by the high concentration of nutrients arriving from the Pärnu River and the strong gradient towards the more open parts of the bay. The Pärnu River water is dark brown (highly CDOM-rich); thus, the good performance of the TN algorithms is most probably related to the CDOM and its distribution in the bay.
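The band-ratio search can be illustrated with a brute-force fragment like the one below (synthetic reflectances and TN values; the study also tested three-band combinations and further ratio forms).

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bands = [400, 412.5, 442.5, 490, 510, 560, 620, 665, 674, 681,
         709, 754, 779, 865, 885]             # OLCI band centres (nm)
R = rng.random((87, len(bands)))              # reflectance for 87 match-ups
tn = rng.random(87) * 2 + 0.5                 # in situ TN (placeholder values)

best = (None, -np.inf)
for i, j in itertools.permutations(range(len(bands)), 2):
    ratio = R[:, i] / R[:, j]
    r, _ = stats.pearsonr(ratio, tn)          # linear fit quality of this ratio
    if r**2 > best[1]:
        best = ((bands[i], bands[j]), r**2)
print('best band pair:', best[0], 'R^2 = %.2f' % best[1])
```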
MONITORING WATER HYACINTH IN KUTTANAD, INDIA USING SENTINEL-1 SAR DATA
Morgan Simpson1, Armando Marino1, Vahid Akbari1, G. Nagendra Prabhu2, Deepayan Bhowmik1, Srikanth Rupavatharam3, Aviraj Datta3, Adam Kleczkowski4, J. Alice R. P. Sujeetha5, Savitri Maharaj1
1Faculty of Natural Sciences, University of Stirling, Stirling, UK; 2Sanatana Dharma College, Alleppey, Kerala, India; 3International Crops Research Institute for the Semi-Arid Tropics, Hyderabad, India; 4Mathematics and Statistics, University of Strathclyde, Glasgow, UK; 5National Institute of Plant Health Management, Hyderabad, India
1. INTRODUCTION
Waterways are important for India, where they have been utilised for commodity transport, local conveyance, irrigation, drainage, flood mitigation and as a drinking water source. Water Hyacinth (Eichhornia crassipes) is a highly invasive aquatic plant species, indigenous to Amazonia, Brazil and tropical South America. First introduced as an ornamental plant in 1896 in the Botanical Garden, Shibpur, West Bengal, India [1], it has over the years infested freshwater bodies as an invasive weed species throughout the country across various agro-climatic conditions. The International Union for Conservation of Nature (IUCN) has identified Water Hyacinth as among the most dangerous invasive species in the world, as it is very difficult to eliminate from a water body and has significant adverse socio-economic repercussions [2].
The weed is characterised by its rapid dispersal, growth and reproductive capabilities, and its infestations have major environmental and socio-economic impacts. Physical removal of the weed normally involves manual harvesting and in-situ cutting of the plant. The installation of surface screens or barriers to arrest the weed mat for cutting can make this process easier [3]. However, physical methods are labour-intensive and suboptimal or impossible in large catchment areas, so early detection is necessary to begin removal as soon as possible.
Recently, there has been increasing use of UAVs and small drone aircraft for the monitoring of aquatic environments [4]. To gain more synoptic coverage, satellite remote sensing can be employed. However, optical satellite data are not always available because of cloud cover, which is a strong limitation for a prompt alert system that must detect the infesting weed at early occurrences. Synthetic Aperture Radar (SAR) can help here thanks to its capability to monitor in all weathers, day or night. The scattering processes captured by SAR allow marsh, surface water and forest to be mapped from volume, double-bounce and surface scattering [5]. This work shows the possibility of using satellite SAR data to detect water hyacinth, with a focus on our test site in India.
2. METHODOLOGY
2.1. Study Area
Kuttanad, Kerala is a paddy-rich region in south-west India. About two-thirds of the land area is covered with wetlands, amounting to about 875 km2. The Department of Agriculture, Government of Kerala, has reported intensive fertiliser usage by local farmers in the Kuttanad region [6]. This has resulted in an increase in water hyacinth found within the major lakes of the region. Due to the presence of water hyacinth within the region's waterways, impacts have been felt on fisheries, drinking water, irrigation, transport and recreational use of the water bodies. This study focuses on Vembanad Lake, the largest Ramsar site in Kerala, India. We acquired validation data for this location by gathering photographic evidence of the infestation.
2.2. Satellite Data
Dual-polarimetric Sentinel-1 SAR data were obtained courtesy of the European Space Agency (ESA) Copernicus programme. The mode of acquisition is Interferometric Wide Swath, Single Look Complex (SLC). The spatial resolution of the SAR images is approximately 20 x 5 m (for SLC), with a temporal resolution of up to 7 days over Vembanad Lake (12 days using a single orbit).
2.3. Physical model
Our working hypothesis is that water hyacinth infestation alters the scattering by increasing the roughness of the lake surface and by creating a layer of volume above the water. This should be distinguishable in the satellite image as spots or patches with high brightness. Calm lake waters scatter most of the electromagnetic radiation in the specular direction and therefore appear dark; the presence of a water hyacinth mat on the water surface produces a rougher surface (which leads to more backscattering) and some scattering from parts of the plants distributed over the vertical direction (volume scattering). It should be noted that we expect this effect from any macrophyte growing with a significant height above the water surface; therefore, we believe we are detecting macrophytes in general and not just water hyacinth.
2.4. Data Analysis
Inspecting Sentinel-1 images over the lake for multiple dates, we observed a clear backscatter difference between clear water and scatterers floating on the water surface. To test the detectability of such patches, we first performed a data analysis, visualising histograms and extracting summary statistics for pixels belonging to vegetation patches and to clean water. Following this we exploited a range of single-pol and dual-pol detectors. Specifically, we tested: a) simple thresholds on VV and HV intensities; change detection using b) single intensities, c) optimisation of power ratio [7], d) optimisation of power difference [8-9] and e) the Hotelling-Lawley trace [10]. We used Receiver Operating Characteristic (ROC) curves to assess the performance of each detector.
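As a minimal illustration of how one of these detectors can be scored with a ROC curve, the sketch below evaluates the simplest case, a threshold on VV intensity, against validated labels; the array names and scoring setup are assumptions, not the study's implementation.

```python
# Hedged sketch: ROC assessment of a simple VV-intensity threshold detector.
import numpy as np
from sklearn.metrics import roc_curve, auc

# vv: calibrated VV backscatter intensities for a set of pixels
# labels: 1 = validated water-hyacinth/macrophyte pixel, 0 = clean water
def threshold_detector_roc(vv, labels):
    # Higher backscatter is expected over floating vegetation (rough surface
    # plus volume scattering), so intensity itself serves as the detector score.
    fpr, tpr, thresholds = roc_curve(labels, vv)
    print(f"AUC = {auc(fpr, tpr):.3f}")
    return fpr, tpr, thresholds
```

Sweeping the threshold traces out detection probability against false alarm rate, which is exactly the trade-off reported in the results below.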
A time-series analysis was conducted to analyse the occurrence rates of water hyacinth at the locations mentioned above. From this, a heat map of water hyacinth was created to highlight the locations in Vembanad Lake where plant accumulation occurred most often.
3. RESULTS AND DISCUSSIONS
Histograms of pixel intensity values from dates of clean and 'infested' water showed clear differences, with the expected separability. The ROC curves show that the optimisation of power difference, the optimisation of power ratio and the Hotelling-Lawley trace provide the best performance. We validated this using photographic evidence, so we know that the macrophyte in question is water hyacinth. Overall, the results indicate that change detection systems using SAR data can identify higher-density water hyacinth with accuracies varying from 90% to 98% depending on the false alarm rate. When we analysed patches with lower-density water hyacinth within Vembanad Lake, accuracies varied from 35% to 40% depending on the constraint on the false alarm rate.
The heat map of water hyacinth is very useful for understanding where the weed is expected to grow and, therefore, for planning interventions.
We can conclude that, based on our validation, we can detect water hyacinth with adequate accuracy. As future work we want to collect more ground data to evaluate whether we can quantify the thickness of the water hyacinth mat. We also want to evaluate whether there are differences in polarimetric backscattering between water hyacinth and other macrophytes, to attempt some classification.
Acknowledgement: This work was funded by a UK RAEng GCRF grant (FF/1920/1/37).
4. REFERENCES
[1] V. Naidu, A. Deriya, S. Naik, S. Paroha, and P. Khankhane, “Water use efficiency and phytoremediation potential of water hyacinth under elevated CO2,” Indian Journal of Weed Science, vol. 46, no. 3, pp. 274–277, 2014.
[2] T. Téllez, E. López, G. Granado, E. Pérez, R. López, and J. Guzmán, “The water hyacinth, Eichhornia crassipes: an invasive plant in the Guadiana River Basin (Spain),” Aquatic Invasions, vol. 3, no. 1, pp. 42–53, 2008.
[3] U. Uka, K. Chukwuka, and F. Daddy, “Water hyacinth infestation and management in Nigerian inland waters: a review,” Plant Sci, vol. 2, pp. 480–488, 2007.
[4] D. Chabot and D. M. Bird, “Small unmanned aircraft: precise and convenient new tools for surveying wetlands,” Journal of Unmanned Vehicle Systems, vol. 1, pp. 15–24, 2013.
[5] B. Brisco, “Mapping and monitoring surface water and wetlands with synthetic aperture radar,” Remote Sensing of Wetlands: Applications and Adv., pp. 119–136, 2015.
[6] M. Kumari, S. Syamaprasad, and S. Das, “Inland waterway as an alternative and sustainable transport in Kuttanad region of Kerala, India,” in Adv. in Water Resources Engineering and Management, 2020, pp. 245–257.
[7] A. Marino and I. Hajnsek, “A change detector based on an optimization with polarimetric SAR imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 8, pp. 4781–4798, 2014.
[8] A. Marino and A. Alonso-González, “Optimisations for different change models with polarimetric SAR,” EUSAR 2018, 12th European Conference on Synthetic Aperture Radar, Aachen, Germany, 2018.
[9] E. Ferrentino, A. Marino, F. Nunziata, and M. Migliaccio, “A dual polarimetric approach to earthquake damage assessment,” International Journal of Remote Sensing, 2019.
[10] V. Akbari, S. N. Anfinsen, A. P. Doulgeris, and T. Eltoft, “A change detector for polarimetric SAR data based on the relaxed Wishart distribution,” 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2015.
Natural ecosystems are increasingly confronted with environmental changes such as climate change, natural disasters and anthropogenic disturbances. Prolonged droughts, heat waves and increasing aridity are generally considered major consequences of ongoing global climate change and are expected to produce widespread changes in key ecosystem attributes, functions and dynamics. Another possible consequence of climate change is the progressive enlargement of global drylands, with dryland-like conditions and mechanisms gaining importance in more humid areas. Drylands collectively account for ~41% of Earth's land surface and are predominantly located in developing countries, where they host nearly 2 billion people whose livelihoods depend on the services these ecosystems provide. Moreover, their natural limitation in water and natural resources makes drylands especially vulnerable to the potential adverse consequences of climate change. While we have a good understanding of past aridity trends, knowledge about the temporal dynamics, responses and resistance of drylands to increasing aridity, particularly in terms of their composition, structure, functioning and soil properties, remains largely lacking.
Focusing on the climatological aspects of aridity [calculated as 1 – (precipitation/potential evapotranspiration)], we investigate the response and resistance of several structural and functional ecosystem variables to changes in aridity between 1981 and 2019 at the global scale. We identify regional differences in aridity trends, with the strongest drying observed in the tropical and temperate biomes of South America, Europe and East Africa. To assess the spatio-temporal variability of aridity, we identified and clustered the dominant patterns of temporal oscillations in aridity; these clusters are subsequently used to study the response of a range of ecosystem variables to observed aridity changes. In particular, we analyzed the temporal dynamics of remotely sensed metrics relating to vegetation cover, productivity, functioning, composition, biomass, climate and soil quality. The results will provide insights into the response patterns of these variables, including possible dependencies on ecosystem properties and land cover. This will also allow us to identify areas that are more or less resistant to observed changes in aridity and eventually form a scientific basis for targeted management and adaptation strategies.
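For concreteness, the sketch below implements the aridity metric defined above on annual precipitation and potential evapotranspiration series; the variable names and the trend example are illustrative assumptions.

```python
# Illustrative sketch of the aridity metric: aridity = 1 - P/PET.
import numpy as np

def aridity_index(precip, pet):
    """precip, pet: arrays of annual totals in the same units (e.g. mm/yr)."""
    return 1.0 - precip / pet

# Drying shows up as a positive slope of the aridity series over time, e.g.:
# years = np.arange(1981, 2020)
# trend = np.polyfit(years, aridity_index(p, pet), 1)[0]  # per-year change
```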
Analysis of deforestation and degradation in Xingu National Park in the Brazilian Amazon using geotechnologies
Kate Booth1, Polyanna da C. Bispo1, Jonny Huck1, Ricardo Dalagnol2
1MCGIS, Department of Geography, School of Environment Education and Development, University of Manchester, Manchester, UK.
2Earth Observation and Geoinformatics Division, National Institute for Space Research-INPE, São José dos Campos, SP 12227-010, Brazil
Deforestation and forest degradation in the Amazon rainforest have increased intensely in the last decade. Fires, gold mining, and illegal logging in indigenous lands are all contributing to a continued increase in the rate of forest loss and degradation. Given the pervasive global understanding of the importance of this issue, and recent international commitments to end deforestation in the Amazon by 2030, it is vital that we minimise uncertainties in the ways we measure and monitor such activities, so that the remaining rainforest can be protected effectively. Tackling these problems is essential to support decision making and policy development, especially in the context of environmental justice and the resolutions from COP26. Emergent high-resolution imagery products and methodological innovations show promise for improved analysis of spatio-temporal patterns of forest loss and degradation, but we do not yet understand the impact this might have on our ability to monitor (and so mitigate) deforestation and degradation in the specific context of indigenous protected areas and their surroundings. New approaches in spatial analysis and machine learning can play an important role in this goal. Our study proposes to evaluate new geospatial products and high-resolution satellite imagery to understand spatio-temporal patterns in different types of forest degradation and deforestation in these areas. These analyses will give us insights to strategically and accurately map forest composition, structure and landscape.
This research will be carried out with a focus on indigenous lands, specifically the oldest indigenous protected area in Brazil, Xingu National Park, which will be our main case study. This area is still one of the most protected forests, overlapping the Mato Grosso and Pará states, but it is increasingly threatened by human-induced disturbances. We will perform an intercomparison between existing deforestation and forest degradation products such as the MapBiomas land cover maps and the Tropical Moist Forests (TMF/JRC) datasets, based on 30-m Landsat historical data, as well as interpretation of higher-resolution imagery such as from the Sentinel-2 (10-m) and Planet NICFI (5-m) missions for recent years (2016 onwards). This will allow an in-depth analysis of the changes in forest structure resulting from illegal fires, logging and gold mining, helping towards further monitoring of the forest's resilience to changes in its structure. The analysis will also focus on the indigenous communities and the processes they carry out to conserve and thrive off their land, comparing them to the effects of illegal activities and providing further insight into the impact these have on the forest.
Key words: Deforestation, forest degradation, Amazon, indigenous lands, spatial analysis.
Drought events are expected to become more severe and frequent across many regions under continuing environmental changes. They disturb ecosystems and can potentially weaken the land carbon sink. Drought impacts differ among ecosystems depending on the ability of an ecosystem to maintain its functioning during droughts, i.e., its resistance. Different plant species and types have diverse strategies to cope with drought, so that ecosystem responses of different land cover types have been found to diverge for similar levels of drought severity (Bastos et al., 2020). However, it remains unclear how ecosystems respond to different drought events, which might be related to confounding effects of different drought durations, seasonality, differences between climatic zones, etc. Here, we present preliminary results of a study evaluating vegetation resistance across different drought durations, severities and vegetation covers over the globe.
We used Vegetation Optical Depth (SMOS L-VOD) data from the SMOS low-frequency microwave satellite. SMOS L-VOD shows high potential for monitoring vegetation structure and biomass (Wigneron et al., 2020). Compared to other indices, e.g., NDVI, EVI, C-VOD or X-VOD, SMOS L-VOD saturates less over dense tropical ecosystems and can be used to separate the effects of soil moisture and vegetation density more robustly. In the mid-latitudes, however, SMOS L-VOD is easily affected by radio frequency interference (RFI). To minimize the effects of RFI, the SMOS L-VOD data have been strictly filtered and then reconstructed for the period 2010-2021 (Yang et al., in prep). This new dataset further reduces the uncertainties due to RFI and separates long-term trends in biomass from seasonality in vegetation water content more precisely.
We used this new SMOS L-VOD data set as an indicator of aboveground biomass. We defined drought events based on the percentiles of the pixel-level probability distribution of soil moisture anomalies from ERA5 reanalysis. We characterized ecosystem resistance by the ratio of VOD values over the drought year and the previous year, using different methods to characterize uncertainties in the VOD signal. We analyzed how ecosystem resistance varies with land cover across the globe, using three mainstream land cover products to account for uncertainties due to different classification methods.
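A minimal sketch of this resistance metric follows, assuming annual pixel-level time series and a simple percentile threshold for flagging drought years; the exact percentile and uncertainty treatment used in the study are not reproduced here.

```python
# Hedged sketch: drought years flagged from the percentile of soil-moisture
# anomalies; resistance as the ratio of L-VOD in the drought year to the
# previous year. Thresholds and array layouts are illustrative assumptions.
import numpy as np

def resistance(vod_annual, sm_anom_annual, pct=10):
    """vod_annual, sm_anom_annual: (n_years,) time series for one pixel."""
    threshold = np.percentile(sm_anom_annual, pct)
    drought_years = np.where(sm_anom_annual < threshold)[0]
    drought_years = drought_years[drought_years > 0]  # need a previous year
    # Resistance near 1 means biomass was maintained through the drought.
    return {y: vod_annual[y] / vod_annual[y - 1] for y in drought_years}
```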
Bastos, A., Fu, Z., Ciais, P., Friedlingstein, P., Sitch, S., Pongratz, J., Weber, U., Reichstein, M., Anthoni, P., Arneth, A., Haverd, V., Jain, A., Joetzjer, E., Knauer, J., Lienert, S., Loughran, T., McGuire, P. C., Obermeier, W., Padrón, R. S., Shi, H., Tian, H., Viovy, N., and Zaehle, S.: Impacts of extreme summers on European ecosystems: a comparative analysis of 2003, 2010 and 2018, Phil. Trans. R. Soc. B, 375, 20190507, https://doi.org/10.1098/rstb.2019.0507, 2020.
Wigneron, J.-P., Fan, L., Ciais, P., Bastos, A., Brandt, M., Chave, J., Saatchi, S., Baccini, A., and Fensholt, R.: Tropical forests did not recover from the strong 2015–2016 El Niño event, Science Advances 6, eaay4603, https://doi.org/10.1126/sciadv.aay4603, 2020.
Understanding ecosystem memory of climate-induced variability is key to quantifying ecosystems' susceptibility to droughts and other extremes. A common approach to investigating memory is to assume a stationary mean seasonal cycle and analyze the autocorrelation or long-range correlations in the residual variability, i.e. the anomalies. This key starting point of nearly all time series analysis frameworks is often handled as a trivial step, and its underlying assumptions are rarely questioned.
However, the conventional approach is justified only when the dominant mode of variability is induced by a stationary cyclic driver (like radiation). If the temporal dynamics determining the phenology of an ecosystem are modulated in phase and/or amplitude, such that the ecosystem does not exhibit an invariant seasonal cycle, then the estimated anomalies must inherit fractions of these signals and any subsequent interpretation is biased. In particular, ecosystem memory, or the response to extremes, will be overestimated, e.g. in arid ecosystems where the dominant drivers of ecosystem state are often neither stationary nor cyclic.
In this paper, we investigate ecosystem memory effects (autocorrelation) for several climatological and ecological variables, such as Gross Primary Productivity (GPP) and soil moisture (SM), at the global scale using the Earth System Data Lab. We compare two methods: first, the traditional approach, relying on the subtraction of the mean annual cycle (MAC) to extract the anomalies; second, a data-adaptive time series decomposition method, singular spectrum analysis (SSA), to extract the residual signal of interest.
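To make the comparison concrete, the sketch below implements the conventional MAC anomaly extraction and the e-folding time (Tau) of the autocorrelation function on which the two methods are compared; the SSA alternative is not reproduced, and the sampling period is an assumption.

```python
# Illustrative sketch: MAC anomalies and the e-folding time of their
# autocorrelation function.
import numpy as np

def mac_anomalies(x, period=46):
    """x: 1-D series sampled 'period' times per year (e.g. 8-daily, assumed)."""
    n = len(x) // period * period
    cycles = x[:n].reshape(-1, period)
    mac = cycles.mean(axis=0)          # mean annual cycle
    return (cycles - mac).ravel()      # residual anomalies

def efolding_time(anom, max_lag=120):
    """First lag at which the autocorrelation drops below 1/e."""
    anom = anom - anom.mean()
    var = np.dot(anom, anom)
    for lag in range(1, max_lag):
        r = np.dot(anom[:-lag], anom[lag:]) / var
        if r < 1.0 / np.e:
            return lag
    return max_lag
```

If the seasonal cycle is modulated in phase or amplitude, the MAC residuals retain leaked seasonal signal, which inflates the autocorrelation and hence Tau, exactly the bias quantified below.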
Our results, based on an artificial experiment, clearly demonstrate the biases induced by the MAC approach in comparison to SSA: prescribed levels of autocorrelation are dramatically over- or underestimated in the presence of a modulated signal. Applied to GPP and SM at the global scale with the Earth System Data Lab framework (https://www.earthsystemdatalab.net/), the autocorrelation in the anomalies was found to be substantially overestimated using MAC, especially in dry ecosystems. The e-folding time of the autocorrelation function (Tau) was found to be higher by up to 40 days when compared to the SSA output. Both approaches yield similar outputs in the presence of a distinct annual cycle (temperate/boreal climates).
The demonstrated overestimation of ecosystem memory by traditional approaches shows the importance of using more adaptive techniques for anomaly extraction, especially for global studies and/or regions where the assumptions of the traditional cyclic approaches do not hold. We find that adaptive approaches for anomaly extraction prove to be more efficient in identifying and analyzing climate extremes, namely exceptional droughts, in arid ecosystems that are already drought prone. The presented approach could thus improve the detection and attribution of exceptional extreme events to climate change.
The Indus, Ganges, Brahmaputra, and Meghna river basins depend heavily on water resources from High Mountain Asia and monsoon rainfall. These rivers provide about 1.2 billion people with water, and their water resources are indispensable for many sectors, including the world's largest connected irrigated cropland, as well as domestic needs. In the context of amplified global climate change, these river basins are facing increasing environmental and human pressure. Therefore, the monitoring and assessment of environmental change and its driving factors is of high interest to improve our understanding of the complex interplay between multiple spheres, including the biosphere, hydrosphere, and cryosphere.
In this study, we investigate land surface dynamics and controlling variables by means of multivariate time series over the last two decades (2000-2020) covering climatic, hydrological, as well as Earth observation (EO)-based land surface and anthropogenic variables. More specifically, the feature space consists of geoscientific time series including MODIS Normalized Difference Vegetation Index (NDVI), DLR Global WaterPack, DLR Global SnowPack, DLR World Settlement Footprint Evolution, ESA CCI Land Cover, WorldPop gridded population counts, as well as climatic variables extracted from the ERA5-Land reanalysis data suite and hydrological variables such as GloFAS-ERA5 gridded river discharge and ITSG-GRACE terrestrial water storage anomaly. All time series, except the anthropogenic variables, are temporally aggregated at biweekly intervals. The anthropogenic variables are characterized by an annual temporal resolution. To enable joint exploitation of these multivariate time series variables, we developed a methodological framework for both, the processing of multisource data streams to create a unified feature space and the application of statistical time series analysis techniques to quantify land surface dynamics and controlling variables. The statistical time series analyses include retrieval of trends, changes in seasonality, and evaluation of drivers using the recently proposed causal discovery algorithm Peter and Clark Momentary Conditional Independence (PCMCI).
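As one concrete element of such a framework, the sketch below retrieves a robust trend from a biweekly-aggregated series using a Theil-Sen estimator with a Kendall-tau significance test; this is an illustrative stand-in for the trend retrieval step, and the PCMCI causal step (available e.g. in the tigramite package) is not reproduced here.

```python
# Hedged sketch: robust trend retrieval on one biweekly grid-cell series.
import numpy as np
from scipy import stats

def biweekly_trend(values, steps_per_year=26):
    """values: biweekly series for one grid cell (e.g. NDVI, 2000-2020)."""
    t = np.arange(len(values)) / steps_per_year   # time in years
    slope, intercept, lo, up = stats.theilslopes(values, t)
    tau, p = stats.kendalltau(t, values)          # monotonic-trend p-value
    return slope, p  # slope in units per year
```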
The preliminary findings of this study show increasing trends in vegetation greenness, accelerating during the second decade. Particularly over the Indo-Gangetic Plains, irrigated croplands appear to contribute most to this significant positive trend in NDVI. Moreover, our results reveal a mismatch between trends in surface water area and terrestrial water storage. While our analysis implies significant positive trends for surface water area at the river basin scale, trends in the terrestrial water storage anomaly indicate a significant decline. With respect to snow cover area, the resulting trends remain mostly non-significant at the river basin and annual temporal scales. However, more detailed analyses at the grid and seasonal temporal scales indicate significant negative trends in parts of the Upper Ganges and Brahmaputra river basins. Furthermore, the causal analysis revealed previously unexplored direct and indirect interactions among EO-based land surface and climatic variables at the seasonal scale. For example, we found that vegetation greenness is largely controlled by water availability through the soil and atmosphere, with spatial variations over the seasons. Regarding surface water area, we determined a strong positive coupling with river discharge in the downstream basins, whereas temperature and snow cover area are the dominant variables in high-altitude areas. Considering drivers of changes in snow cover area, temperature and precipitation are the most important factors. While precipitation has a positive influence during the winter and pre-monsoon seasons, particularly over the Upper Indus river basin, the negative influence of temperature dominates in the monsoon season.
To summarize, the findings of this study greatly contribute to a better understanding of land surface dynamics and drivers in the investigated river basins in South Asia. Additionally, the developed methodological framework enables the exploration of multivariate time series and provides insights into the evaluation of environmental change and controlling variables over any other large river basin.
Among the different aspects of current and projected climate change, the increased frequency of extreme events such as droughts is considered a major factor potentially affecting temperate forest ecosystems. However, climatic, topographic, stand-specific, and other environmental conditions may, either individually or through interaction, control the impacts of extreme events. The complexity of these interactions requires complementary perspectives to deepen knowledge and better understand trajectories of forest ecosystems in the context of environmental change. In our work we provided a satellite perspective on this problem, using the Normalised Difference Water Index (NDWI) as a proxy for canopy water content to measure the impact of the extreme summer 2018 drought on Swiss forests. We used the relative changes of NDWI in Sentinel-2 satellite imagery from 2017–2019 as measures of resistance, recovery, and resilience of 10 x 10 m forest pixels. Using simple linear mixed models, we found lower resistance to drought, but stronger recovery, along forest edges, on south-oriented slopes, and at low elevations; drought resilience was stronger for broad-leaved than for coniferous trees. Interactions between environmental variables and their effect on forests were weak.
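A minimal sketch of these per-pixel metrics is given below, assuming a Gao-type NDWI from Sentinel-2 NIR (B8A) and SWIR (B11) bands and simple ratio-based definitions of resistance, recovery and resilience over the 2017–2019 window; this is not necessarily the exact formulation used in the study.

```python
# Illustrative sketch of NDWI-based drought metrics for one forest pixel.
import numpy as np

def ndwi(b8a, b11):
    """Gao-type NDWI from NIR (B8A) and SWIR (B11) reflectances (assumed)."""
    return (b8a - b11) / (b8a + b11)

def drought_metrics(ndwi_2017, ndwi_2018, ndwi_2019):
    """Summer NDWI for pre-drought, drought and post-drought years."""
    resistance = ndwi_2018 / ndwi_2017   # drop during the drought
    recovery = ndwi_2019 / ndwi_2018     # rebound after the drought
    resilience = ndwi_2019 / ndwi_2017   # return to the pre-drought state
    return resistance, recovery, resilience
```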
This first inventory of forest responses to extreme climatic events was complemented with a detailed assessment of the spatial consistency of functional relationships between drought responses and environmental factors. We used generalised additive models to evaluate possible non-linear relationships between drought responses and topographic as well as forest-stand characteristics. First preliminary results indicate a strong, non-linear impact of topographic wetness on drought response: positive effects increase with higher topographic wetness before rapidly decreasing again towards the highest potential water availability along rivers and lakes. We also found a positive effect of tree diversity on drought resilience, although its partial impact is smaller than that of topographic wetness, and the optimum in terms of forest composition shifts among regions. These results can help to detect causes of delayed recovery and lack of short-term resilience, resulting in longer-term legacy effects of the 2018 drought.
While mangroves occupy less than 1% of the tropical forest area, carbon sequestration in these coastal forests accounts for about 14% of the global carbon amount captured by oceans (Alongi 2012). It is thus crucial to monitor carbon sequestration in mangroves with a view to anticipating and reducing gas emissions resulting from coastal disturbance induced by both global warming and anthropogenic activities.
Because mangroves are complex assemblages of diversified tropical vegetation structures and species, sometimes with high biomass, whose extent can vary dramatically but naturally from year to year, monitoring carbon stocks in mangroves challenges the development of robust, multiscale remote sensing methods for two reasons. First, even if radar, optical and Lidar remote sensing of mangroves can provide above-ground biomass or carbon maps with a certain accuracy, further research is needed for a better physical interpretation of the scattering of microwave signals (Proisy et al. 2000), sunlight (Viennois et al. 2016) or laser pulses within mangrove environments. Besides, in mangroves, the below-ground carbon stock is likely to be high and cannot be directly considered in remote sensing studies without allometric equations. The second reason is that the rise or fall of mangrove carbon stocks is intimately linked to the hydro-sedimentary processes responsible for erosion, silting and flooding over thousands of square kilometers worldwide. Carbon stocks in mangroves are particularly dependent on sediment supply to the coast. Without sediment replenishment, mangroves can no longer develop or resist erosion by waves or sea level rise. All this constitutes an ambitious research framework towards biomass and carbon monitoring along mangrove coasts, to which space-based missions can significantly contribute provided an integrative strategy for monitoring carbon sequestration along mangrove coasts is designed.
Within the frame of the ROMANCE (Role of mangroves in carbon, water and energy cycles) project granted by the CNES TOSCA program, our focus is on French Guiana mangroves. Above-ground biomass can reach 500 tons of dry matter per hectare in this region, while coastal instability is among the most extreme worldwide: here, mangroves can be eroded at annual rates reaching 500 m in one place while, at the same time, mangroves can become established over square kilometers in a few months elsewhere. These fluctuations in vegetation extent are indicators of coastal processes in action, and French Guiana mangroves can be seen as a natural early warning system of erosion by waves that may help in predicting the southeast-to-northwest shifting of mud accretion or erosion phases (Proisy et al. 2020).
In this work, we will illustrate how time series of Sentinel-1 and Sentinel-2 images can be amalgamated into a modeling strategy to operationally monitor fluctuations of mangrove extent, within which biomass and carbon stocks of mangrove stands can be estimated and monitored using high-resolution imagery (Proisy et al. 2007) and empirical equations relating forest stand age to complete (below- and above-ground) carbon stock estimates (Walcker et al. 2018). We will also demonstrate how the simulation of mangrove shoreline fluctuations performed by the MANG@COAST modeling approach, based on oceanic wave and current data (Proisy et al. 2016), can support testing of a new regional model of carbon fluxes at the mangrove land-sea interface, controlled by alternating mud erosion and accretion phases. We will then discuss how our modeling experience can be coupled with individual-based mangrove models (Berger et al. 2008) and the land surface model ORCHIDEE, with potential applications to different mangrove coastal settings. Overall, this case study illustrates how forest biomass monitoring is linked to issues of climate change mitigation and territory management.
References
Alongi, D.M. (2012). Carbon sequestration in mangrove forests. Carbon Management, 3, 313-322 https://doi.org/10.4155/cmt.12.20
Berger et al. (2008). Advances and limitations of individual-based models to analyze and predict dynamics of mangrove forests: A review. Aquatic Botany, 89, 260-274 https://doi.org/10.1016/j.aquabot.2007.12.015
Proisy et al. (2007). Predicting and mapping mangrove biomass from canopy grain analysis using Fourier-based textural ordination of IKONOS images. Remote Sensing of Environment, 109, 379-392 https://doi.org/10.1016/j.rse.2007.01.009
Proisy et al. (2016). A multiscale simulation approach for linking mangrove dynamics to coastal processes using remote sensing observations. Journal of Coastal Research, Special Issue no. 75, 810-814 https://doi.org/10.2112/SI75-163.1
Proisy et al. (2000). Interpretation of polarimetric radar signatures of mangrove forests. Remote Sensing of Environment, 71, 56-66 https://doi.org/10.1016/S0034-4257(99)00064-4
Proisy et al. (2020). Mangroves: a natural early warning system of erosion on open muddy coasts in French Guiana. In D. Friess & F. Sidik (Eds.), Dynamic Sedimentary Environment of Mangrove Coasts (pp. 47-63): Elsevier https://doi.org/10.1016/B978-0-12-816437-2.00011-2
Viennois et al. (2016). Multitemporal analysis of high spatial resolution satellite imagery for mangrove species mapping, Bali, Indonesia. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 9, 3680-3686 http://dx.doi.org/10.1109/JSTARS.2016.2553170
Walcker et al. (2018). Control of “blue carbon” storage by mangrove ageing: Evidence from a 66-year chronosequence in French Guiana. Global Change Biology, 24, 2325-2338 https://doi.org/10.1111/gcb.14100
In recent years, increasing attention has been paid to the assessment of climate change impacts on water bodies. Previous research has mainly focused on large lakes around the world, for which long-term satellite data, models and in-situ data, where available, have been combined to analyze the evolution of water surface temperature (WST) over time. This work addresses the analysis of climate change effects on surface temperature for two medium-small Italian lakes using satellite data time series. We considered only products with clear-sky conditions over the areas of interest and exploited a simplified algorithm based on Planck's law to derive the lakes' surface temperature trends.
The considered test sites are Lake Bracciano and Lake Martignano, two water bodies close to each other but with different morphometric characteristics, located north of Rome (Italy). The former has a surface of 57.5 km2 (maximum depth 160 m), while the latter's surface is 2.5 km2 (maximum depth 54 m). The selected lakes, like most medium-small water bodies around the world, lack a proper network for ground-based measurement of crucial parameters such as water temperature. Satellite data allow filling this gap; in particular, the products provided by the Landsat program (available for approximately 40 years) have the spatial resolution needed to enable long-term monitoring even for small lakes.
We proposed a linear-regression-based method for extending the WST analysis back in time to 1984, because the reference algorithm [1] uses a NASA tool that does not provide atmospheric correction parameters for Landsat products acquired before 2000. Using a linear regression approach, for each Landsat platform (Landsat-5/7/8) we derived the spectral radiance at the bottom of the atmosphere (BOA) from the corresponding radiance at the top of the atmosphere (TOA). Comparing the WST estimates for both lakes retrieved with the proposed approach against those computed through the reference algorithm, we obtained an encouragingly high correlation (R2 = 0.99) for the period 2000–2019.
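A minimal sketch of this extension follows, assuming a per-sensor linear fit from TOA to BOA thermal radiance and the standard inverse-Planck brightness temperature conversion with the published Landsat per-band thermal constants K1 and K2; the coefficients and array names are illustrative.

```python
# Hedged sketch: extend BOA radiance (and hence WST) back before 2000.
import numpy as np

def fit_toa_to_boa(l_toa, l_boa):
    """Fit L_BOA = a * L_TOA + b on post-2000 scenes where both are available."""
    a, b = np.polyfit(l_toa, l_boa, 1)
    return a, b

def radiance_to_temperature(l_boa, k1, k2):
    """Inverse Planck: T = K2 / ln(K1 / L + 1), in kelvin."""
    return k2 / np.log(k1 / l_boa + 1.0)

# Pre-2000 scenes (illustrative usage):
# l_boa_est = a * l_toa_old + b
# wst_celsius = radiance_to_temperature(l_boa_est, k1, k2) - 273.15
```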
Almost 500 Landsat scenes covering the period 1984-2019 were collected into a dataset. By visual inspection, 363 clear-sky products were selected; finally, we grouped all the satellite products acquired during the same month to compute monthly averaged WST estimates.
The obtained WST time series showed a general rising trend for both lakes, affecting more strongly the smaller and shallower of the two test sites. In particular, from 1984 to 2019 the WST of Lake Bracciano increased by about +0.049 °C/y on average, while for Lake Martignano the WST growth rate was +0.058 °C/y on average. Moreover, since 2000 the surfaces of both lakes have been warming by about 0.106 °C/year on average, roughly double the rate retrieved over the whole period 1984–2019.
The water surface temperature estimates were highly consistent with the in-situ data used for validation. Furthermore, in clear-sky conditions, the comparison of the WST values computed with the proposed approach against Sentinel-3 SLSTR Level 2 products from the same dates showed good agreement.
[1] Virdis S.G.P, Soodcharoen N., Lugliè A., Padedda B.M., Estimation of satellite-derived lake water surface temperatures in the western Mediterranean: Integrating multi-source, multi-resolution imagery and a long-term field dataset using a time series approach, Science of The Total Environment, Volume 707, 2020, 135567, ISSN 0048-9697, https://doi.org/10.1016/j.scitotenv.2019.135567.
In 2020, the Middle East and northern Africa suffered one of the strongest locust outbreaks in several decades. Very locust-favorable weather conditions were caused by a strong rainy season, which resulted in two to three times higher rainfall amounts compared to the long-term seasonal average. Large swarms of locusts were able to breed throughout the Middle East and Eastern Africa and spread throughout major parts of Africa at the beginning of the year. The outbreak was especially severe in Ethiopia, Kenya and Somalia. These eastern African countries are highly dependent on mostly rain-fed agriculture to ensure food security; therefore, widespread locust damage can pose an immediate danger to livelihoods. To better understand the outbreak and its extent, the presented study focused on these three countries to analyze the potential of earth observation data for mapping areas damaged by the locust swarms on a national as well as a sub-national scale. On the sub-national scale, a hotspot area was analyzed using high-resolution Sentinel-2 satellite data extracted via Google Earth Engine, whereas the national-level approach leveraged medium-resolution MODIS data to achieve results over a larger area. Both scales relied on free and open data. Using time series analyses in combination with a breakpoint detection algorithm, drops in the Normalized Difference Vegetation Index (NDVI) time series were located for each pixel in the study area. Based on the timing of the NDVI trend shifts, we extracted environmental parameters known to be relevant for locust habitability. This was done for multiple timesteps: one-, two-, and three-month periods before the observation of the downward shift of the vegetation index. The parameters, selected based on a literature review, include land surface temperature, rainfall and soil moisture. In addition, terrain (slope, elevation) and land cover data were included as stand-alone layers, not based on breakpoint timing. Each of the resulting data layers covering relevant environmental variables was then used as input to a machine-learning model (Random Forest) to determine whether the drop in satellite-observed biomass was caused by locust swarms. In-situ reports of swarm presence provided by the Food and Agriculture Organization of the United Nations (FAO) were used as training and validation data. Our study shows that the DBEST trend shift detection algorithm was well suited to detect intra-annual declines of NDVI within the 2020 time series. Results on the national scale showed an overall accuracy of about 90% for the Random Forest model. Hotspots of locust activity were identified in southern and north-western parts of Ethiopia as well as large parts of Kenya, despite data gaps caused by intermittent cloud cover. A comparison with written FAO reports on the state of swarm activity throughout the outbreak showed high alignment with the locations of the calculated hotspots, their extents, and the breakpoint timings. Slight overestimations of damaged areas were observed in Kenya, possibly caused by a higher amount of missing data. The results of the high-resolution sub-national analysis showed an overestimation of locust activity. On both scales, rainfall data, especially the Standard Precipitation Index (SPI) with a 3-month window, were found to be a determinant factor for locust swarm habitability.
This could be linked to potential breeding grounds or the presence of the thin-leafed vegetation which the insects eat throughout their development. While higher-resolution datasets on rainfall and other environmental factors could possibly improve the model on the regional scale, our study shows the great potential of machine learning in combination with time series analysis for locust damage monitoring. These models can therefore be used both for ongoing and for predictive locust monitoring.
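As an illustration of the national-scale classification step, the sketch below trains a Random Forest on stacked environmental features with FAO swarm reports as labels; the feature composition, train/test split and hyperparameters are assumptions, not the study's exact configuration.

```python
# Hedged sketch: Random Forest classification of breakpoint pixels.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: (n_pixels, n_features) e.g. LST, rainfall/SPI and soil moisture over the
#    1-3 month windows before each NDVI breakpoint, plus slope, elevation and
#    land cover as stand-alone layers
# y: (n_pixels,) 1 = FAO-reported swarm presence, 0 = absence
def train_locust_model(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_tr, y_tr)
    print(f"overall accuracy: {accuracy_score(y_te, model.predict(X_te)):.2f}")
    return model
```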
Climate change is expected to alter the frequency of drought periods and their intensity. The magnitude of the effects on forests is largely related to ecosystem resilience to drought exposure and drought intensity. Lack of available water alters cellular features and processes that are vital for tree functioning. Monitoring the impacts of drought on natural and managed forest ecosystems, and subsequent ecosystem disturbances, is hence of high societal significance. The spatiotemporal heterogeneity of drought trends can impose challenges for current drought monitoring efforts. Satellites open new possibilities to monitor the onset, severity, and termination of drought periods and their impacts.
During the summer of 2018, Northern Europe widely experienced record-breaking heatwaves and large soil moisture deficits. This impacted forest ecosystems strongly, contributing to an increase in forest fires, pest outbreaks and potentially reduced productivity. The drought resilience of a forest ecosystem relates to species composition, community structure, and their capability to adapt to changes in Earth's physical systems and processes. Drought monitoring and impact quantification efforts can cover areas with multiple species and differing functional groups, which may respond to drought in varying ways. Recent advances in satellite data pre-processing, indices and change detection methods show great potential for further developing methods to monitor spatiotemporal variation in biogeophysical parameters of the land surface. At the same time, studies focusing directly on the impacts of drought, especially in the forest areas of Northern Europe, remain scarce.
Time series of Sentinel-2 data covering a study area in Southern Sweden were used to compute multi-temporal moisture and vegetation indices. Data covering the summer months of 2018 are used to describe a drought scenario. Responses and trends of indices sensitive to vegetation water content and indices sensitive to green vegetation are evaluated in terms of drought stress and possible lag-effect detection in varying forest types. The natural environment can be highly dynamic, and not all models are suitable for describing these temporal changes. Thus, sequences of functions and methods will be evaluated regarding their ability to detect gradual changes and abrupt breakpoints.
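As a simple illustration of abrupt-breakpoint detection of the kind to be evaluated, the sketch below selects the split that maximises the mean shift between the two segments of an index time series; this is a generic example, not one of the specific published methods under evaluation.

```python
# Illustrative sketch: single strongest mean-shift breakpoint in a series.
import numpy as np

def largest_mean_shift(x, min_seg=5):
    """Return (index, shift) of the strongest single breakpoint in x."""
    best_i, best_shift = None, 0.0
    for i in range(min_seg, len(x) - min_seg):
        shift = abs(x[i:].mean() - x[:i].mean())
        if shift > best_shift:
            best_i, best_shift = i, shift
    return best_i, best_shift
```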
This study is currently ongoing as part of my PhD research, investigating the performance of available change- and trend-detection methods to assess the response of varying forest types to drought. To enable the quantification of drought impacts with optical satellite data over varying forest types, suitable indices and methods should be identified. Physiological functions of trees can respond to drought rapidly, while changes in greenness and canopy structure likely respond in a matter of days or weeks. Thus, results are expected to show variation in the trends of indices sensitive to vegetation water content and indices sensitive to green vegetation, which could also reveal lag-effects related to tree drought stress. Variation in the trends detected between different forest types could indicate differences in ecosystem response and resilience against drought. In summary, this study aims to improve knowledge on the selection of methods and on how Nordic forest ecosystems respond to drought periods, which could aid the information needs of forest management planning. The results achieved will be described, illustrated, and discussed.
Forests play a critical role in the global carbon cycle by sequestering carbon in the form of biomass. Tree planting and forest restoration have been lauded as solutions to combat climate change and criticized as ways for polluters to offset carbon emissions. Effectively managing forests for climate mitigation requires consistent monitoring of forest dynamics. Especially for areas where various restoration strategies are applied to aid the recovery of degraded forests, it is crucial to have benchmark information on the various stages of forest conditions to compare and evaluate the restoration strategies' suitability for the given geographical region. Monitoring forest regrowth dynamics towards climate change mitigation requires an understanding of change in forest ecosystem structure, especially how forest aboveground biomass density (AGBD) regrows, as it provides a direct indication of carbon stocks in forests. The paucity of consistent historical measurements of forest structure and AGBD at regional scales tends to be a major challenge for establishing such an empirical understanding of forest regrowth to guide practitioners' adoption of the best forest restoration strategies for managing restored forests. The launch of the Global Ecosystem Dynamics Investigation (GEDI) Lidar mission in late 2018 helps fill this critical carbon knowledge gap. During its mission time, GEDI collects forest structural information, such as tree height, canopy cover, plant area index (PAI), and AGBD data, for tropical and temperate forests (between approximately 51°N and 51°S), at a relatively high spatial resolution of 25 m. Moreover, long-term vegetation characteristics observed by multispectral satellite missions such as Landsat (since 1972) and PlanetScope (since 2009) have great potential to be combined with GEDI data to reconstruct management trajectories and reveal forest recovery dynamics. In this study, a remote sensing data fusion method will be developed in which current (2019-2021) AGBD estimates obtained from GEDI's Level 4 product are linked with vegetation spectral signals derived from Landsat (1985-2020) and PlanetScope (2020) data through empirical modeling at a regional scale. The GEDI-Landsat models are then used to predict annual biomass retrospectively. We then derive AGBD recovery trajectories for literature-derived restoration sites and compare the speed of AGBD recovery for three restoration methods (Assisted Natural Regeneration (ANR), Natural Regeneration (NR), and Active Restoration (AR)) during and after the restoration period across three major biomes in East Africa. Literature-derived or other independent AGBD change estimates for each restoration site will be used to validate our estimates. Comparing how fast forests regrow and sequester carbon as aboveground biomass in sites restored with different strategies in different biomes will help infer the most effective solutions for restoring forests in each biome, and comparing the rate of sequestration during project implementation to that after project termination can highlight the influence of formal designation or funding support in forest restoration. Overall, the study demonstrates an effort to advance remote sensing fusion for monitoring ecosystem restoration success.
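A minimal sketch of the fusion step follows, assuming a Random Forest regression linking GEDI Level 4 AGBD footprints to coincident Landsat spectral features, which can then be applied to historical Landsat data to hindcast annual AGBD; all names and hyperparameters are illustrative, not the study's implementation.

```python
# Hedged sketch: empirical GEDI-Landsat model for retrospective AGBD.
from sklearn.ensemble import RandomForestRegressor

# X_current: (n_footprints, n_bands) Landsat features at GEDI footprints,
#            2019-2021 (assumed matched in space and time)
# agbd:      (n_footprints,) GEDI Level 4 AGBD estimates (Mg/ha)
def fit_gedi_landsat(X_current, agbd):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X_current, agbd)
    return model

# Retrospective prediction per historical year (illustrative usage):
# agbd_1995 = model.predict(X_1995)
```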
Dactylorhiza majalis is an indicator species for the habitat quality of nutrient-poor grassland sites. Environmentalists therefore use the species to validate the success of conservation efforts. Conventional monitoring approaches consist of labour-intensive field campaigns in which plants are counted by hand or the population size is estimated by extrapolation from small samples. Furthermore, conventional monitoring approaches insufficiently account for the spatial distribution of the target species within a study site. In this study we propose a novel monitoring approach using multispectral drone-based remote sensing data with a very high spatial resolution (3 cm). The dataset was acquired during the flowering phase of western marsh orchids in June 2021 in the area of the Lehmkuhlen reservoir in Schleswig-Holstein, Germany. The Lehmkuhlen reservoir is the most species-rich fen in the state of Schleswig-Holstein; 60 plant species found there, including the broad-leaved marsh orchid, are on the Red List of species threatened with extinction.
The monitoring workflow consists of three main steps: (i) feature engineering, (ii) a binary random forest classification and (iii) abundance aggregation. As part of the feature engineering (i), we developed the Magenta Vegetation Index (MaVI) to improve the differentiation of Dactylorhiza majalis from other land covers. We tested the importance of the MaVI as a predictor variable for the target species on several random forest models. The models were trained with balanced datasets of different sizes and feature constellations (mostly remote sensing vegetation indices). The MaVI proved to be the most important feature for all models. We chose the best-performing model for the binary classification process (ii). The model successfully classified the drone dataset with high accuracy (Overall Accuracy: 97%). Subsequently we prepared a UTM coordinate vector grid of the study site with a cell size of 1 square meter. We overlaid the classification result with the vector grid and calculated the number of individuals per square meter (iii). We tested the aggregated results against 10 in situ plant counts. Overall, the tests showed only minor differences in plant counts between the remote-sensing-derived and in situ data, with an average difference of 6 individuals and a maximum difference of 14 individuals. The results indicate that the proposed methodology can serve as a reliable alternative to conventional monitoring approaches and could help validate the success of site-specific conservation efforts. Additionally, the monitoring approach is able to account for the spatial distribution of broad-leaved orchids in the study site, enabling environmentalists to optimize site-specific management strategies.
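To illustrate the aggregation step (iii), the sketch below converts a binary classification mask at ~3 cm resolution into plant counts per 1 m2 grid cell; the scale factor and the per-plant pixel footprint are loudly assumed here and are not taken from the study.

```python
# Illustrative sketch: block aggregation of a classification mask to counts.
import numpy as np

def plants_per_cell(mask, pixels_per_metre=33, pixels_per_plant=40):
    """mask: 2-D boolean orchid classification at ~3 cm resolution (assumed)."""
    h = mask.shape[0] // pixels_per_metre * pixels_per_metre
    w = mask.shape[1] // pixels_per_metre * pixels_per_metre
    blocks = mask[:h, :w].reshape(h // pixels_per_metre, pixels_per_metre,
                                  w // pixels_per_metre, pixels_per_metre)
    orchid_pixels = blocks.sum(axis=(1, 3))       # orchid pixels per 1 m2 cell
    # Dividing by an assumed per-plant footprint converts pixels to individuals.
    return np.rint(orchid_pixels / pixels_per_plant).astype(int)
```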
Following a proposal supported by over 70 countries around the world, the United Nations (UN) has proclaimed the Decade on Ecosystem Restoration. With humanity facing biodiversity loss, climate change, and escalating pollution, now is the time to act, as stated by UN Secretary-General António Guterres. Ecosystem restoration has the potential to constrain this "triple environmental emergency". Although restoration activities are increasingly integrated into natural resource and climate mitigation strategies, scientific studies underline that information on their effectiveness and impact is currently difficult to obtain. With the UN Decade on Ecosystem Restoration ending at the same time as the Sustainable Development Goals (SDGs), restoration interventions need to be assessed in a systematic and objective manner to measure and maximize the global community's progress towards the SDGs. However, the long-term, high-quality data records of essential climate variables required for this are often lacking in both space and time. Satellite data products can fill this gap, as they can quantify impact by detecting and attributing changes in environmental conditions consistently over time.
Over the last decades, the scientific community has made significant progress in reconstructing multi-decadal historical datasets of climate variables by merging multiple satellites and correcting for biases. Among such long-term climate data records are the soil moisture (from 1978 onwards), land surface temperature (since 1995), and land cover (since 2008) datasets of the European Space Agency Climate Change Initiative (ESA CCI). These data records, combined with near-real-time observations, offer a great opportunity to assess the effects of restoration interventions on degraded landscapes. By comparing the surface conditions of each restoration project area to those in an unaffected control area, the effects of the restoration intervention can be detected and attributed. If a statistically significant environmental response can be detected at or directly after the time of the intervention, and this change is absent in a nearby control area, then the change can be attributed to the restoration project. The resulting monitoring service enables asset managers and green investment funds to steer decisions and communicate transparently on effectiveness towards their donors.
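A minimal sketch of this detection-and-attribution logic is given below, assuming matched per-pixel change values (post-intervention minus pre-intervention) for the project area and a nearby control area, tested with a simple one-sample test; the operational service's statistics are not reproduced here.

```python
# Hedged sketch: attribute a change only if it appears in the project area
# but not in the control area.
from scipy import stats

def attributable_change(project_post_minus_pre, control_post_minus_pre,
                        alpha=0.05):
    """Inputs: per-pixel (or per-season) change values for matched areas."""
    _, p_project = stats.ttest_1samp(project_post_minus_pre, 0.0)
    _, p_control = stats.ttest_1samp(control_post_minus_pre, 0.0)
    # Significant change at the project site, no change at the control site.
    return (p_project < alpha) and (p_control >= alpha)
```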
To monitor restoration interventions affecting smaller areas than the native resolution of these datasets (up to approximately 25 km), downscaling techniques are applied to increase the spatial level of detail (to approximately the 0.1-1 km range). Considering both the native resolution and downscaled products at increasingly higher spatial resolution (e.g. from 25 km to 100 m) and with decreasing temporal coverage (e.g. from 43 to 5 years), both the spatial and the temporal scale needed for measuring the effects of restoration projects can now be investigated.
Within the Restore-IT project supported by ESA (Contract No. 4000136484/21/I-DT-lr), the aim is to provide an impact monitoring tool based on reliable near-real-time satellite data streams in combination with the existing long-term and consistent ESA CCI datasets. The Restore-IT tool can be used as an independent source of information by asset managers and financial investors who wish to monitor the impact of their activities and investments for their stakeholders and the EU in the context of several SDGs (12: Responsible Consumption and Production, 13: Climate Action, and 15: Life on Land).
Forest ecosystems provide well-known services both to humanity and to the Earth system. The list includes the improvement of air and water quality, the capture, storage and regulation of water flows, the supply of timber and energy, erosion control, the provision of livelihoods for humans and habitat for biodiversity. By serving as a carbon sink, storing vast amounts of carbon and moderating the climate through both biogeochemical and biophysical means, they are also a critical element in the fight against climate change.
Yet forests are under pressure. Forests have been degraded by direct or indirect human activities, including climate change, and large areas across the world have been replaced by crops and pastures. To counter this trend, there is currently strong momentum for action towards the restoration of forest ecosystems, which involves improving the condition of disturbed and degraded forests but also returning trees to former forest land. Whether this consists of planting native tree species or letting the process of rewilding take its natural course, restoring forest landscapes is increasingly considered a valuable nature-based solution that can contribute towards limiting some of the worst impacts of the current climate emergency. However, to plan forest restoration and afforestation programs, it is essential to assess the potential climate impact of a change in tree cover mediated both by the biogeochemical effect (i.e. the change in the ecosystem carbon budget) and by the biophysical effect on the local climate.
In this context, here we present recent research illustrating a potentially understudied benefit of restoring forested ecosystems: increasing cloud cover [1]. The potential influence of forest restoration on cloud dynamics stems from forests having a stronger capacity to generate low-level convective clouds than shorter vegetation, such as most croplands and grasslands. This is due to their structure (which increases the mixing of the air and enhances the turbulent transport of energy), their capacity to transpire more given their deeper rooting system (which injects more moisture into the air), and their lower albedo (which can create more uplift by radiative heating). Generating more low-level clouds potentially translates into several benefits, including shade and rain, but also the possibility of brightening the planet by increasing the albedo at the top of the atmosphere (compensating for the darkening at the surface by the trees).
The work represents the first attempt to map such an effect at the global scale. We show that changing the surface from croplands or grasslands to forests would increase low-level cloud cover for 67% of sampled areas across the world. The work further analyses the seasonality and intensity of this effect, and corroborates the finding using different methods and datasets, including ground observations. This effort was made possible by the availability of two key datasets generated by ESA's Climate Change Initiative (CCI), namely the CCI Cloud and the CCI Land Cover products, which are here combined using an innovative geospatial approach. The resulting assessment can provide guidance to assist efforts in forest ecosystem restoration, by indicating where these could be prioritized based on the potential to maximize cloud cover. This would further be in line with the design of ambitious nature-based mitigation policies such as the European Green Deal and with the need to climate-proof forest ecosystems as foreseen in the new European Forest Strategy.
[1] Duveiller, G., Filipponi, F., Ceglar, A. et al. Revealing the widespread potential of forests to increase low level cloud cover. Nat Commun 12, 4337 (2021). https://doi.org/10.1038/s41467-021-24551-5
Forests in Morocco play a key role at the social, economic, and environmental levels. Forest ecosystem management requires a variety of information related to forest cover and composition (land cover, forest stocking, and growth). Remote sensing and geographic information systems offer a valuable means of monitoring the quality of the required information and controlling its cost. This work will show and discuss the contribution of these technologies to forest monitoring and land inventory. Google Earth Engine offers rich datasets that can help monitor the evolution of forest cover change in the south of Morocco, with a variety of information sources differing in acquisition date and resolution of the satellite imagery. After presenting the data needed and their acquisition methods, we will discuss, through acquisition and interpretation, the evolution of the land cover during 1986-2000 and 2000-2020. A map of the spatial distribution of forest cover during the period 1986-2000 was developed as part of the present study. The analysis shows that the area of forest tree species totals 678,705 ha, or 41% of the territory of the south of Morocco. The holm oak is the most represented species in the studied area: it occupies more than half of the forest area (nearly 58%) and 23% of the study area. Analogously, a new forest map of the southern region was prepared for the period 2000-2020. The cartographic situation in 2020 estimates the forest cover at 50% of the territory, considering that the Fruticeae are not part of the forest cover. As in the previous period, the holm oak accounts for more than 50% of the forest cover, followed by juniper (14.9%), cedar (9%), and alfa (8.6%) formations. The Fruticeae stratum is a new, recently introduced stratum; it occupies 13% of the land without forest vegetation.
Launched in December 2015, the African Forest Landscape Restoration Initiative (AFR100) aimed to build a critical mass movement to bring 100 million hectares of degraded forest and land under restoration by 2030. It aims to accelerate restoration to enhance food security, increase climate change resilience and mitigation, and combat rural poverty. AFR100 Phase I (2016-2020) set out to engage African political leadership to obtain commitments at the ministerial and donor level, assist several countries in developing national strategies, identify and prioritize areas for restoration, and facilitate partnerships and peer-to-peer learning exchanges. Reaching 127 million hectares across 32 countries, commitments from governments exceeded expectations, putting Africa at the forefront of the forest landscape restoration (FLR) movement.
Five years into the AFR100 initiative, it is still unknown how much land is under restoration, and where. The African Union Development Agency (AUDA-NEPAD) is filling this gap by providing a centralized stock-taking platform that will use cutting-edge Earth observation technologies to track change on land as well as perceptions of impact by communities. AUDA-NEPAD has developed a monitoring framework, and with the contribution of tools, resources and data provided by key AFR100 partners, governments and FLR practitioners will soon be able to track the progress of their AFR100 commitments and restoration efforts. Systematic monitoring of restoration is notoriously complex. Unlike forest loss, which is immediate and detectable at lower spatial resolution, restoration is a slow process, and the growth of young vegetation requires higher-resolution imagery products. In addition to its diversity of ecosystems and rainfall regimes, Africa contains substantial dry forests, woodland savannas and mosaiced landscapes interspersed with farms, all complex conditions requiring advanced Earth observation technologies. Types of FLR interventions are broad in scope and may involve monitoring trees in one place and soil erosion control in another, which requires different monitoring data products. Users will be able to digitize or upload their areas of interest (AOIs) onto a web-based GIS and receive information on restoration, climate and biodiversity.
In remote regions of the Amazon Forest, indigenous communities play a critical role in ecosystem restoration, as they are often the only ones who have physical access to those areas. The efforts of local communities, however, are often not acknowledged because the impact of their activities is not measured. The purpose of this research is to quantify the effect of reforestation on the local climate and to determine how much carbon is stored in newly reforested areas, taking into account how resilient the newly planted trees are. These findings are important not only for understanding the local impacts of specific reforestation practices but also for empowering local indigenous communities and highlighting their role in ecosystem restoration. Tracking reforestation processes in the tropics is not a straightforward task, as the area is often covered with clouds or smoke from burning vegetation during land-clearing activities. This is where the combined use of EO images, in-situ data from UAVs and field measurements comes into play. For example, high-resolution land surface temperature can be derived by fusing Sentinel-3 or Landsat thermal data with Sentinel-2 or drone images employing machine learning techniques. Machine learning techniques have been successfully used to bridge the gap in spatial and temporal resolution between different sensors. However, operational fusion products based on multiple sources of data are still rare, making ad-hoc models necessary. The uncertainties introduced in such artificially created products require validation through field measurements, the collection of which strongly relies on local knowledge. This study will showcase a practical approach to map, measure and monitor carbon sequestration, temperature and precipitation on a fine spatial scale in selected plots in the Ecuadorian Amazon. Knowing how much carbon is stored in trees will enable indigenous communities to participate in national REDD+ programs and independently audit the health of their forests.
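As an illustration of the kind of sensor fusion described above, the following minimal sketch shows one common machine-learning sharpening pattern: a regressor trained at coarse resolution linking spectral predictors to land surface temperature (LST), then applied at fine resolution. The arrays are synthetic stand-ins for real Sentinel-3/Landsat thermal and Sentinel-2/drone optical data; this is a generic sketch, not the study's actual fusion model.

```python
# Hypothetical sketch of ML-based thermal sharpening: train a regressor
# between coarse-resolution reflectances and LST, then apply it to
# fine-resolution reflectances. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_coarse, n_fine = 500, 50_000           # pixel counts at the two resolutions

X_coarse = rng.random((n_coarse, 4))     # 4 spectral bands on the coarse grid
lst_coarse = (300 + 10 * X_coarse[:, 0] - 5 * X_coarse[:, 3]
              + rng.normal(0, 0.5, n_coarse))   # synthetic "thermal" signal

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_coarse, lst_coarse)

# In a real workflow, the coarse-scale residuals would be interpolated to the
# fine grid and added back to keep the sharpened product consistent with the
# original thermal observation (omitted here for brevity).
residual_coarse = lst_coarse - model.predict(X_coarse)

X_fine = rng.random((n_fine, 4))         # same bands observed at fine scale
lst_fine = model.predict(X_fine)         # sharpened LST before residual step
```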
Atmospheric water plays a key role in the Earth's energy budget and temperature distribution via radiative effects (clouds and vapour) and latent heat transport. The distribution and transport of water vapour are thus closely linked to atmospheric dynamics on different spatiotemporal scales. In this context, global monitoring of the water vapour distribution is essential for numerical weather prediction, climate modelling, and a better understanding of climate feedbacks.
Total column water vapour (TCWV), or integrated water vapour, can be retrieved from satellite spectra in the visible "blue" spectral range (430-450 nm) using Differential Optical Absorption Spectroscopy (DOAS). The UV-vis spectral range offers several advantages for monitoring the global water vapour distribution: for instance, it allows for accurate, straightforward and consistent retrievals over ocean and land surfaces, even under partly cloudy conditions.
To investigate changes in the TCWV distribution from space, the Ozone Monitoring Instrument (OMI) on board NASA’s Aura satellite is particularly promising as it provides long-term measurements (late 2004-ongoing) with daily global coverage.
Here, we present a global analysis of trends in total column water vapour retrieved from multiple years of OMI observations (2005-2020) and put our results in the context of TCWV trends from other climate data records (e.g. reanalysis models or satellite measurements). Additionally, we investigate whether the assumption of constant relative humidity over climatological time periods holds.
Furthermore, we demonstrate that it is possible to infer changes in the global atmospheric circulation directly from the global TCWV distribution: more precisely, we show that changes in the location of the Hadley cell, as well as its poleward expansion, can be determined from the latitudinal TCWV distribution using a simple, straightforward and robust methodology.
Mapping mountain glacier facies using moderate-resolution satellite data has been demonstrated by a few existing studies, for instance using the Landsat series. However, high-resolution mapping of glacier facies is still a less explored area of research. The present study focuses on high-resolution mapping of glacier facies using object-oriented and pixel-oriented methods applied to very high-resolution WorldView-2 multispectral and panchromatic images. We conducted this experiment at two study locations to prove the robustness of our analyses: (1) glaciers around Ny Ålesund, Svalbard, and (2) glaciers around the Chandra basin, Himalayas. Our experiment was supplemented with testing of various atmospheric correction and pansharpening methods and their effect on the final mapping accuracies. Selected sets of object-based and pixel-based methods were implemented to derive discernible glacier facies, and the final results were compared using error matrices. Next, we evaluated the importance of spectral information and its effect on mapping glacier facies. After that, the effect of spatial information was tested in terms of the different pansharpening methods used in the analysis. We are presently compiling all the results and expect to present them during the symposium, highlighting the outcomes of these experiments and the effectiveness of using high-resolution spectral-spatial data for glacier facies mapping. This experiment essentially tests the impact of pre-processing methods and advanced sharpening methods on mapping glacier facies using pixel-based and object-based methods. The study also highlights the impact of the operator in identifying different facies on the satellite imagery. Overall, this study will guide operational high-resolution facies mapping protocols. Its implications can be re-tested on medium- and coarse-resolution satellite data over other cryospheric regions. Eventually, this experiment will provide a solid foundation for future studies dealing with mapping glacier facies using optical satellite data and operational product generation using the Copernicus series of ongoing and upcoming satellites.
We use 40 years (1980-2019) of intercalibrated brightness temperature data from the High-Resolution Infrared Radiation Sounder (HIRS) onboard the NOAA series of satellites to produce a 40-year data set of Upper Tropospheric Humidity with respect to Ice (UTHi; 2.5° × 2.5° geographical grid). The UTHi values are derived from the measured brightness temperatures in HIRS channels 12 (upper tropospheric water vapor channel) and 6 (upper tropospheric temperature channel, which lies in the CO2 absorption band) using a new retrieval method by Gierens and Eleftheratos (2019), together with the application of intercalibration coefficients from Shi and Bates (2011). As channel 6 is in the CO2 absorption band and CO2 concentrations have increased since 1980, channel 6 brightness temperature records have decreased by approximately 2 K over the 40-year period (Shi et al., 2016). We corrected the T6 data by applying a correction formula to every channel 6 daily brightness temperature value, based on the global CO2-related brightness temperature decrease over the 40-year period and the month-to-month change of CO2, with a reference CO2 value of 370 ppm. We then applied the second-order retrieval formula of Gierens and Eleftheratos (2019), using the intercalibrated T12 and the CO2-corrected T6 data, to calculate UTHi on a 2.5° × 2.5° grid for 70°S-70°N. We present the new dataset of Upper Tropospheric Humidity from the 1980s to the 2010s across the globe.
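For readers unfamiliar with the processing chain, the following schematic sketch illustrates the two steps described above: a CO2 drift correction applied to the channel 6 brightness temperatures, followed by a two-channel second-order retrieval. All coefficients and the sensitivity value below are placeholders for illustration only; they are not the published values of Gierens and Eleftheratos (2019).

```python
# Illustrative sketch: correct channel-6 brightness temperatures for the
# CO2-driven drift, then apply a two-channel retrieval. Coefficients are
# placeholders, NOT the published values.
import numpy as np

T6_REF_CO2 = 370.0                       # reference CO2 mixing ratio (ppm)

def correct_t6(t6, co2_ppm, sensitivity_k_per_ppm=-0.02):
    """Remove the CO2-induced decrease from channel-6 brightness temperature.
    `sensitivity_k_per_ppm` is a hypothetical linear sensitivity."""
    return t6 - sensitivity_k_per_ppm * (co2_ppm - T6_REF_CO2)

def uthi_second_order(t12, t6c, a=28.0, b=-0.12, c=2.0e-4):
    """Schematic second-order retrieval: ln(UTHi) as a polynomial in the
    intercalibrated T12 and the CO2-corrected T6 (placeholder coefficients)."""
    x = t12 - t6c
    return np.exp(a + b * t12 + c * x**2)

t12, t6 = 245.0, 232.0                   # example daily brightness temps (K)
uthi = uthi_second_order(t12, correct_t6(t6, co2_ppm=410.0))
print(f"UTHi (schematic): {uthi:.2f}")
```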
Water vapour is the dominant greenhouse gas in our atmosphere and plays a crucial role in its radiative balance and hydrological cycle. With the rapid warming of the atmosphere, the surface water vapour concentration has increased, giving rise to a positive climate feedback (Dessler et al., 2013; Held and Soden, 2000). Raman lidars have become one of the prime tools for measuring the atmospheric water vapour content and temperature of the troposphere. We will present relative humidity retrievals from the Raman Lidar for Meteorological Observations (RALMO), located at the Swiss Meteorological Service (MeteoSwiss) facility in Payerne, Switzerland. RALMO is a fully automated lidar that has been operating since 2007. Gamage et al. (2020) presented a new method in which RALMO measurements are assimilated with ERA5 data (the fifth-generation European Centre for Medium-Range Weather Forecasts reanalysis), improving the reanalysis. This optimal-estimation retrieval allows relative humidity to be found directly from the lidar measurements, as opposed to determining the temperature and water vapour mixing ratio separately and then combining them into a relative humidity product. The retrieval also allows a full uncertainty budget to be calculated, yielding both the random and systematic uncertainties. One of the first goals of our project is to use this new approach to reprocess 5 years of RALMO data for relative humidity, a period during which RALMO used the same detector system. The retrievals will be validated against radiosonde measurements. We will produce a relative humidity data set with high temporal and spatial resolution, together with a profile-by-profile uncertainty budget. Our future work is to reprocess the entire 12 years of RALMO data, calculate a relative humidity climatology in the free troposphere, and search for trends in relative humidity as a function of altitude, complementing the RALMO water vapour mixing ratio climatology and trends found by Hicks-Jalali et al. (2020).
The ESA Sentinel-1 SAR mission has demonstrated its capability to map atmospheric Precipitable Water Vapor (PWV) across extensive areas with high spatial resolution. Combining several orbits and merging consecutive PWV maps can reduce the temporal sampling of PWV over a specific region from days to hours, depending on the extent of the region. The assimilation of those maps in a numerical weather prediction (NWP) model has shown a significant advantage relative to GNSS products. Furthermore, it has been shown that the assimilation of InSAR-derived PWV can help to better model phenomena such as deep convection, with an impact on the correct forecast of extreme weather events such as intense rainfall. This is also of crucial importance for Civil Protection practices, as it is tightly related to flood and other hydrogeological risks. In this work, we estimated PWV maps from interferograms generated using SLC images acquired by Sentinel-1 A and B to assess the viability of including these maps in the initialization of NWP models. Despite the limits imposed by the two-satellite system's 6-day image return period, it is found that for a sufficiently large domain, configured to contain a set of images every 12 h (at varying locations), the influence on model performance is beneficial or at least neutral for normal weather conditions. The suggested methodology is tested for a domain comprising Iberia in 24 consecutive 12-hour forecasts, covering two Sentinel-1 cycles and 214 SAR images. A statistical study of the forecast PWV versus independent GNSS observations found significant improvements in the various scores, particularly over three days when the standard initial data was less precise. Even though the mean impact of PWV assimilation was not significant, an examination of the rain forecasts against gridded remote sensing observations shows an overall improvement in the grid-point distribution of different precipitation classes throughout the simulation. We conclude that present InSAR data are a viable source for NWP models and will become more relevant when additional systems are deployed, such as the proposed missions to launch radar satellites into geosynchronous orbit with a continuous view of the European and African continents.
Acknowledgments: This study was funded by FCT-Instituto Dom Luiz under Projects UIDB/50019/2020-IDL and EXPL/CTA-MET/0671/2021.
The measurement of Water Vapor (WV) in the lower troposphere is a critical issue, which still leaves important margins for improvement of observational data quality, as needed for instance to enhance the forecast performance of Numerical Weather Prediction systems.
To address this problem, a novel measurement technique for obtaining the Integrated Water Vapor (IWV) along a microwave link, based on a pair of attenuation measurements, was proposed some years ago. This method, called NDSA (Normalized Differential Spectral Attenuation), is based on measuring the "spectral sensitivity", namely the normalized incremental ratio of the spectral attenuation, which was found to be linearly related to the IWV along a radio-link path operating in the Ku and K frequency bands. Some ESA studies have shown the capability of NDSA to effectively estimate the IWV along the path between two counter-rotating Low Earth Orbit (LEO) satellites - one carrying a transmitter, the other a receiver - in a limb measurement geometry. The resulting link passes through the atmospheric limb, crossing the lower troposphere at tangent heights smaller than 10 km, but also involves higher tropospheric layers. Currently, the Italian Space Agency is supporting a project named SATCROSS, whose purpose is to provide a pre-feasibility study for a different measurement concept, in which more than two LEO satellites orbit in the same plane and along the same direction. In fact, the SATCROSS project investigates the possibility of retrieving two-dimensional (2D) water vapor fields using a train of such co-rotating LEO satellites, displaced so that the links connecting the transmitting satellites to the receiving ones scan an annular sector of the troposphere, where appropriate tomographic inversion techniques can be applied to retrieve the 2D fields from the available set of IWV measurements obtained through the NDSA technique.
SATCROSS comprised several activities, ranging from the simulation of signals and of two-dimensional retrieval algorithms to the characterization of a real CubeSat mission payload. This presentation focuses on one particular activity: an instrument prototype operating in a ground-to-ground link configuration and the development of a measurement campaign. The prototype is an upgrade of a previous one and was designed and implemented to demonstrate the effectiveness of NDSA measurements at 19 GHz. We will report and discuss the instrument's architecture, the road-map of this experimental activity, and the insights brought by the analysis of the data obtained from the measurement campaign. In particular, we will focus on the sensitivity of the NDSA measurements to the IWV by comparing it with existing datasets of Relative Humidity (RH) and IWV measurements obtained from other consolidated measurement techniques and instruments.
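To make the NDSA observable concrete, the following minimal sketch computes the spectral sensitivity from a pair of attenuation measurements at two tones in the K band and converts it to IWV via a linear calibration. The tone frequencies, attenuation values and calibration constants are hypothetical illustration values, not the prototype's actual parameters.

```python
# Minimal sketch of the NDSA observable, assuming two tones f1 < f2 around
# 19 GHz: the "spectral sensitivity" is the normalized incremental ratio of
# the spectral attenuation, approximately linear in the IWV along the link.
import numpy as np

def spectral_sensitivity(att_f1_db, att_f2_db, f1_ghz, f2_ghz):
    """Normalized incremental ratio of spectral attenuation (per GHz)."""
    a1 = 10 ** (att_f1_db / 10.0)        # attenuations in linear units
    a2 = 10 ** (att_f2_db / 10.0)
    return (a2 - a1) / (a1 * (f2_ghz - f1_ghz))

def iwv_from_sensitivity(s, slope=120.0, offset=0.0):
    """Linear NDSA calibration IWV = slope * S + offset (placeholder values)."""
    return slope * s + offset

s = spectral_sensitivity(att_f1_db=3.2, att_f2_db=3.9, f1_ghz=18.5, f2_ghz=19.5)
print(f"IWV along the link: {iwv_from_sensitivity(s):.1f} kg/m^2")
```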
We present a synergistic daytime total column water vapour (TCWV) retrieval for the combination of the Ocean and Land Colour Instrument (OLCI) and the Sea and Land Surface Temperature Radiometer (SLSTR) onboard the Copernicus Sentinel-3 platforms. The retrieval is built in a modular approach and consists of two parts: one forward model for the split-window (SW) bands at 11 and 12 µm in the Thermal Infrared (TIR) and one for the Rho-Sigma-Tau absorption peak at 900 nm in the Near Infrared (NIR).
The TIR forward model is based on Radiative Transfer for TOVS (RTTOV). We previously implemented it in a TCWV algorithm developed for the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on Meteosat Second Generation (MSG). The NIR forward model is based on the look-up table from the Copernicus Sentinel-3 OLCI Water Vapour (COWA) algorithm.
The two models are seamlessly joined through an iterative optimal estimation method. The good performance of the NIR TCWV retrieval over bright surfaces (i.e. land and sun glint) is thereby extended with the capability of retrieving TCWV more accurately over dark surfaces (i.e. ocean).
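The iterative joining of two forward models can be sketched with a standard Gauss-Newton optimal-estimation update, in which the TIR and NIR measurements are stacked into a single observation vector with a block error covariance. The toy linear operators below merely stand in for the RTTOV-based TIR and COWA look-up-table NIR models; all numbers are illustrative.

```python
# Hedged sketch of an iterative optimal-estimation retrieval joining a TIR
# (split-window) and a NIR forward model. State x = [TCWV, surface term].
import numpy as np

def forward(x):
    """Stacked toy forward model F(x): two TIR channels plus one NIR band."""
    tir = np.array([280.0 - 0.8 * x[0] + 0.5 * x[1],   # 11 um channel
                    279.0 - 1.1 * x[0] + 0.5 * x[1]])  # 12 um channel
    nir = np.array([0.9 - 0.01 * x[0]])                # 900 nm band ratio
    return np.concatenate([tir, nir])

def jacobian(x, eps=1e-4):
    """Finite-difference Jacobian of the stacked forward model."""
    f0 = forward(x)
    return np.column_stack([(forward(x + eps * np.eye(2)[i]) - f0) / eps
                            for i in range(2)])

xa = np.array([20.0, 0.0])               # prior state (TCWV in kg/m^2)
Sa = np.diag([100.0, 1.0])               # prior covariance
Se = np.diag([0.25, 0.25, 1e-4])         # TIR and NIR measurement noise
y = forward(np.array([35.0, 0.2]))       # synthetic "observation"

x = xa.copy()
for _ in range(10):                      # Gauss-Newton iteration
    K = jacobian(x)
    A = np.linalg.inv(Sa) + K.T @ np.linalg.inv(Se) @ K
    b = K.T @ np.linalg.inv(Se) @ (y - forward(x)) - np.linalg.inv(Sa) @ (x - xa)
    x = x + np.linalg.solve(A, b)
print(x)                                 # converges near the true state
```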
With slight modifications to the framework, this algorithm can be used for other polar-orbiting instruments such as NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) or the future METimage on the EUMETSAT Polar System - Second Generation (EPS-SG). An adapted version of the processor is also being developed for the future Flexible Combined Imager (FCI) onboard EUMETSAT's geostationary Meteosat Third Generation (MTG). This will make high-precision TCWV available at a temporal resolution of 10 minutes.
We will show first validation results against ground-based measurements, i.e. Microwave Radiometer (MWR), Global Positioning System (GPS) and Aerosol Robotic Network (AERONET) observations of TCWV. Furthermore, we will discuss future applications of the datasets generated with the algorithm. These include the creation of climatologies to support the successor of ESA's Water Vapour CCI and GCOS' GEWEX Water Vapor Assessment (G-VAP). Further research on the detection of convective initiation in TCWV fields before the onset of cloud formation and precipitation, to improve nowcasting, will benefit from OLCI-SLSTR's high spatial resolution as well as MTG-FCI's high temporal resolution.
This paper analyzes the variation of sea level trends along the Australian coastal zone (0-100 km) using 16 years (2002-2018) of tide gauge records, reprocessed Jason data and the ESA CCI sea level product. The EOF analysis demonstrates that local sea levels are not only significantly affected by ENSO-related signals, but also moderately correlated with the Indian Ocean Dipole. A multivariate linear regression model with an optimized noise model is thus built to reduce the impact of natural variability on the trend estimation. The results show that the geographical patterns of sea level rise from the reprocessed Jason data and the ESA CCI sea level product are highly consistent, with rates decreasing from the northeast of the Australian coast (6-8 mm/yr) anticlockwise to the southeast (2-4 mm/yr). Moreover, the sea level trends from the reprocessed Jason data are in good agreement with those from tide gauge records, but are ~1.5 mm/yr higher than those from the ESA CCI sea level product. This discrepancy between the two datasets is acceptable considering that the trend uncertainty (i.e. one standard deviation) from 20-Hz along-track points is at the level of 1-2 mm/yr. It is also found that the variation of sea level trends is significant in the southeast of Australia, with the value decreasing from 5.8±1.5 mm/yr offshore to 3.4±1.2 mm/yr towards the coast. This may be because the continental slope precludes the propagation of sea level signals from the ocean dominated by the East Australian Current to the coast.
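The trend-estimation approach can be illustrated with a simplified regression in which sea level anomalies are modelled as a linear trend plus climate-index terms, so that ENSO- and IOD-related variability does not alias into the trend. The series below are synthetic placeholders for real tide-gauge/altimetry data and indices, and the optimized noise model used in the paper is omitted.

```python
# Schematic multivariate regression for sea level trend estimation:
# SLA(t) = c0 + trend * t + c_enso * ENSO(t) + c_iod * IOD(t) + noise.
import numpy as np

rng = np.random.default_rng(1)
n = 16 * 12                              # 16 years of monthly samples
t = np.arange(n) / 12.0                  # time in years
enso = rng.normal(size=n)                # placeholder ENSO index
iod = rng.normal(size=n)                 # placeholder IOD index
sla = 4.0 * t + 15.0 * enso + 5.0 * iod + rng.normal(0, 10, n)  # mm

G = np.column_stack([np.ones(n), t, enso, iod])   # design matrix
coef, *_ = np.linalg.lstsq(G, sla, rcond=None)
print(f"trend: {coef[1]:.2f} mm/yr after removing ENSO/IOD signals")
```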
Recent advances in satellite altimetry processing have improved a wide variety of oceanographic applications. These advances have been significant for the estimation of ocean tides in the coastal region, a process that is itself important for studies of surface processes from satellite altimetry. Continued sampling on the same altimeter tracks, such as that provided by the Jason series of altimeters, has allowed for more reliable estimation of ocean tides; coupling this with recent advances in the processing of altimetry data has resulted in significant gains in the accuracy of ocean tide estimates. A global empirical ocean tide model (EOT20) was developed at DGFI-TUM based on residual analysis of multi-mission satellite altimetry data. EOT20 benefits from the use of the updated FES2014b ocean tide model as the reference tide model as well as the use of the ALES retracker. These changes, amongst several others such as improved coastline representation and updated gridding techniques, have resulted in EOT20 showing a significantly improved coastal representation of ocean tides compared to several global tide models, including its predecessor EOT11a. Validation of the model was performed using in-situ tide gauge observations and gridded sea-level variance analysis. Compared to tide gauges, EOT20 showed an improvement in root-mean-square (RMS) error of ~0.2 cm with respect to the next best tide model, FES2014. Compared to its predecessor EOT11a, EOT20 showed an overall improvement, with a large gain of ~1.1 cm at coastal tide gauges. The gridded sea level variance analysis demonstrated a reduction in variance when using the EOT20 ocean tide model compared to the FES2014 and EOT11a models in the coastal region. Overall, the improvements made to the EOT model, in terms of coastal altimetry, the improved reference tide model and the model formulation, provide encouragement for the use of EOT20 as the ocean tidal correction for satellite altimetry in studies of coastal sea level processes.
There is heavy dependence on in-situ measuring techniques for gauging water quality. In-situ measurements are costly and can be dangerous. This poses the question: how do we assess water quality safely and with greater cost and energy efficiency? Remote sensing is an answer to this question. I am developing an algorithm to overlay onto Sentinel-2 images using ENVI, an advanced software application (www.l3harrisgeospatial.com/Software-Technology/ENVI). I am validating my satellite pixel-based algorithm using my in-situ water quality measurements around four major Georgia tidal watersheds on the South Atlantic Bight. By collecting satellite imagery from Sentinel-2, I hypothesize that the set of algorithms I am developing can provide a rapid assessment of water quality based on total suspended solids (TSS), chlorophyll-a (Chla) and colored dissolved organic matter (CDOM) in the inshore and offshore regions of the South Atlantic Bight (SAB).
There are apparently no singular, unified, rapid and comprehensive remote sensing algorithms in the literature for simultaneous estimates of TSS, Chla and CDOM. My goal is to address this gap in the literature and attempt to put forth a comprehensive algorithm that addresses all these factors.
During a summer residence with the Schalles Lab research crew at the University of Georgia Marine Institute on Sapelo Island, under NSF Georgia Coastal Ecosystems LTER support, we collected 54 in-situ measurements; 45 were taken within two hours of a Sentinel-2 pass and all 54 within 24 hours of the satellite pass. Our measurements included values for TSS, Chla, CDOM, temperature, salinity, downwelling solar irradiance and water-leaving radiance using Ocean Optics spectroradiometers. The Ocean Optics measurements gave us very accurate, high-spectral-resolution data (~1,200 wavelengths) for matchups with the satellite pixels' water reflectance spectra. These data are used to validate and ensure the accuracy of my satellite pixel-based algorithm retrievals of algal Chla, TSS and CDOM. The measurements were made in transects from inshore to offshore waters. I focused on four tidal watersheds and the dynamics and fates of their offshore plumes: the Altamaha River, Doboy Sound, St. Simon's Sound and Sapelo Sound. Each of these estuaries has unique properties, sources, discharges and outputs into the Atlantic Ocean. Additionally, in this particular area of the SAB, large volumes of water move in and out with the dramatic high and low tides and semi-diurnal tide cycles, and the area receives different mixtures of algae and TSS, making it an ideal study location.
For this study I have acquired six images from the Sentinel-2A and 2B pair (ground resolution of 20 m). I first used my in-situ-satellite matchup reflectance values to assess four different atmospheric correction methods, and moved forward with the method that performed best. I am now attempting to integrate the best-performing published algorithms for TSS, Chla and CDOM retrieval into one comprehensive algorithm. I am using the versatile and sophisticated software platform ENVI (version 5.6.3, L3 Harris Geospatial, Inc.) to process and overlay my TSS, Chla, and CDOM algorithms onto the images. Additionally, I have access to the Schalles Lab's comprehensive spectral library (nearly 800 inland, coastal, and offshore sample stations), which has greatly assisted in the interpretation of the satellite data. This Creighton-based data set has been shared with other scientists and has become widely used in the testing of potential algorithms for Chla, TSS, and CDOM. I will then hopefully be able to classify individual water pixels quantitatively and produce colorized map products of water quality to assist other researchers and management agencies, resulting in a comprehensive and rapid assessment of water quality and ecological conditions along the Georgia Coast and the adjacent South Atlantic Bight of the western Atlantic Ocean.
I am deriving a refined set of algorithms through careful analyses of coastal satellite imagery, which will hopefully provide a cost- and labor-efficient assessment of water quality based on TSS, Chla, and CDOM. This will be achieved by comparing the accuracies of several published prediction algorithms and, as necessary, using in-situ matchup measurements to iteratively improve the best model(s) for conditions within the SAB. Rapid remote assessment of water quality ultimately reduces the number of in-situ measurements that are needed, which in turn reduces energy, cost and the possibility of injury during in-situ work. It also allows water quality to be assessed and traced on a grand scale, including at locations worldwide without the resources for in-situ evaluations.
Sea level variations in coastal areas can differ significantly from those in the nearby open ocean. Monitoring coastal sea level variations is therefore crucial to understand how climate variability can affect the densely populated coastal regions of the globe. In this paper, we study the sea level variability along the coast of Norway by means of twenty-three tide gauges, satellite altimetry data, and a network of eight hydrographic stations over a period spanning 16 years, from January 2003 to December 2018. First, we evaluate the performance of the ALES-reprocessed coastal altimetry dataset, at a 1 Hz posting rate, by comparing it with the sea level anomaly from tide gauges and from conventional altimetry over a range of timescales, which include the long-term trend, the annual cycle, and the detrended and deseasoned sea level anomaly. We find that the coastal and the conventional altimetry products perform similarly along the coast of Norway. However, the agreement with the tide gauges in terms of linear trends is on average 10% better when we use the ALES coastal altimetry data. We then assess the steric contribution to the sea level variability along the Norwegian coast. While longer time series are necessary to evaluate the steric contribution to the sea level trends, we find that the sea level annual cycle is more affected by variations in temperature than in salinity, and that temperature and salinity give comparable contributions to the detrended and deseasoned sea level change along the entire coast of Norway. The results of this work are a step towards an accurate characterization of coastal sea level at high latitudes based on coastal altimetry records, which can represent a valuable source of information to reconstruct coastal sea level signals in areas where in-situ data are missing or inaccurate.
The DUACS system (Data Unification and Altimeter Combination System) produces high-quality multi-mission altimetry sea level products for oceanographic applications, including climate signal detection, forecasting and mapping of the physical state, and connections with biology and biogeochemistry. These products consist of directly usable and easy-to-manipulate Level 3 (L3; along-track cross-calibrated SLA) and Level 4 (L4; multiple sensors merged as maps) products. The DUACS production is carried out as part of different projects/services, and products are available in global and regional versions, for near-real-time (NRT) applications and/or offline (DT) studies.
The regional production over the Black Sea was initiated in 2010. Since then, different product evolutions have been implemented with the objective of improving the product content and quality. In recent years, joint contributions from ESA (EO4SIBS project), CNES (DUACS-RD project) and Copernicus/CMEMS have contributed largely to improving the Black Sea L3 and L4 products. We present these evolutions here.
First, an L3 product with nearly 1 km (5 Hz) sampling has been developed. Derived from the full-rate altimeter SAR measurements and including new processing for noise reduction, this L3 product allows observing wavelengths 15 to 20 km shorter than the conventional 1 Hz (7 km) sampled product. It also allows observing the signal closer to the coast (up to ~5 km).
Then, the L4 product quality was significantly improved, with eddies and currents better resolved, in particular near the coast. This is due to the combined use of the new L3 5 Hz product and a new covariance function constrained by bathymetric gradients (Davis et al., 1998). The obtained SLA is more consistent with independent altimeter measurements (error reduction of 10 to 20%).
Finally, a new Mean Dynamic Topography (MDT) solution was computed by merging information from altimeter data, GRACE and GOCE gravity data, and oceanographic in-situ measurements from drifting buoy velocities and hydrological profiles. This new solution has been compared to the previous MDT of Kubryakov et al. (2011), computed with a similar method but without gravity data. The new solution shows better consistency with drifter data kept for validation. Being consistent with the [1993, 2012] reference period used in the DUACS SLA production, this MDT gives access to the Absolute Dynamic Topography (ADT) and geostrophic currents over the basin.
The new DUACS/EO4SIBS sea level products are currently available through the EO4SIBS portal (http://www.eo4sibs.uliege.be/). All the joint ESA, CNES and CMEMS developments will eventually be integrated in CMEMS as part of the European regional product: a full (30-year) reprocessing of the Black Sea time series (as part of a larger European area) will be made available at the end of 2021 at 1 Hz (L3) and on a 1/8° regular grid (L4). The corresponding NRT product line will also evolve in the same way. Additionally, the L3 5 Hz NRT production will start in CMEMS in 2022.
The knowledge of bathymetry and ocean tides is at the crossroads of many scientific fields, especially in the Polar regions, as it has a significant impact on ocean circulation modelling and the understanding of the coupled dynamical response of the ocean, sea ice and ice shelf system; on the quality and accuracy of sea surface height and sea ice parameter estimates from satellite altimetry; and on the understanding of ice-shelf dynamics, among others. In isolated regions such as the Southern Ocean, where very few in-situ campaigns are possible, satellite observations bring invaluable information, either directly, through the physical parameters that are measured, or indirectly, considering the strong links between particular characteristics of those parameters and the ocean processes.
The ALBATROSS project (ALtimetry for BAthymetry and Tide Retrievals for the Southern Ocean, Sea ice and ice Shelves), led by NOVELTIS in collaboration with DTU Space, NPI and UCL, is one of the activities funded by the European Space Agency in the frame of the Polar Science Cluster, with the objective of fostering collaborative research and interdisciplinary networking actions.
ALBATROSS is a 2-year project that started in mid-2021 with two main objectives. The first is to improve the knowledge of bathymetry around Antarctica, considering the most recently reprocessed decade-long CryoSat-2 datasets, innovative information on the location of bathymetry gradients obtained through the analysis of sea-ice surface roughness characteristics, and the compilation of the best available datasets in ice-shelf regions. The second is to improve the knowledge of ocean tides in the Southern Ocean through the implementation of a high-resolution hydrodynamic model based on the most advanced developments in ocean tide modelling, with assimilation of observations, including satellite-altimetry-derived tidal retrievals from the most recent and relevant altimetry products, to fill the gap between the 66°S-limited coverage of the Topex-Jason suite of missions and the Antarctic coast.
This paper presents the most recent results obtained within the ALBATROSS project.
Radial orbit error, defined as the uncertainty in the geocentric altitude of a satellite, is the dominant error in the measurement of sea surface height (SSH) by a satellite altimeter. Despite precise orbit tracking systems (laser, GPS and DORIS), residual radial orbit error of satellite altimeters can still be identified, due to residual gravity field uncertainties. Therefore, reducing the radial orbit error can significantly improve the quality of SSH measurements.
The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2), operated by NASA, was launched in September 2018. The Advanced Topographic Laser Altimeter System (ATLAS) on the mission carries a six-beam laser transmitter with photon-counting detectors, emitting 532 nm laser pulses at a 10 kHz repetition rate. ATLAS detects individual photons, with shots separated by 70 cm along-track on the Earth's surface and a footprint of ~17 m diameter.
Due to the properties of its laser, ICESat-2 performs better in sea-ice regions than radar altimetry satellites. Apart from altimetric ranging error, radial orbit error is the dominant error in the measurement of SSH. This error is recoverable by analyzing the differences of SSH at ground track intersections (crossover differences). An effective approach to the problem is to model the orbit error by minimizing the residual crossover differences in a least-squares sense. Crossover differences are mainly caused by the difference of radial orbit errors between ascending and descending arcs, sea surface variation, and measurement error. Since sea surface variation over a short time interval and measurement error can be considered random variables, these residuals can be reduced by crossover adjustment.
For the least-squares adjustment, the parameterization involves a balance between efficiency and simplicity. Since the orbits of altimeter satellites are almost circular, at least regionally the satellite orbit can be described as a Keplerian orbit. For this purpose, we describe the local adjustment of crossover differences in the Arctic region. In this study, we model the ICESat-2 orbit as a Keplerian orbit, and a mathematical model for the description of the radial orbit error becomes available after linearization. A problem in solving the crossover adjustment is the existence of singular solutions, which belong to the so-called null space of the normal equations formed by crossover minimization. This is analogous to the datum problem of a levelling network, where heights can be determined only after fixing the height of one arbitrary point. The singular components (coefficients of the error function model lying in the null space) have the property of not affecting the crossover differences. The same holds true here: we fix the parameters falling inside the null space of the crossover minimization problem. We analyzed the rank defect of the coefficient matrix using discontinuous and continuous segmentation models with 1 cycle/revolution and 2 cycles/revolution, respectively, and compared the performance of the different models.
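A toy version of the crossover adjustment described above, with a 1 cycle/revolution radial error model e(t) = a cos(wt) + b sin(wt) per arc and a minimum-norm (pseudoinverse) solution to handle the rank-deficient normal equations, might look as follows. All numbers are synthetic, not ICESat-2 data.

```python
# Toy crossover adjustment: each crossover difference constrains
# e_asc(t) - e_desc(t); the normal equations can be rank deficient (the
# datum defect discussed above), so a minimum-norm solution is used.
import numpy as np

rng = np.random.default_rng(2)
w = 2 * np.pi / 5677.0                   # ~94.5 min orbital period (s)
n_arcs, n_xovers = 6, 120
true_ab = rng.normal(0, 0.05, (n_arcs, 2))   # per-arc (a, b) in metres

rows, d = [], []
for _ in range(n_xovers):
    i, j = rng.choice(n_arcs, 2, replace=False)  # ascending / descending arc
    ti, tj = rng.uniform(0, 5677.0, 2)           # times at the crossover
    r = np.zeros(2 * n_arcs)
    r[2*i:2*i+2] = [np.cos(w*ti), np.sin(w*ti)]
    r[2*j:2*j+2] = [-np.cos(w*tj), -np.sin(w*tj)]
    rows.append(r)
    d.append(r @ true_ab.ravel() + rng.normal(0, 0.01))  # + noise

A, d = np.array(rows), np.array(d)
x = np.linalg.pinv(A) @ d                # minimum-norm least-squares solution
print("residual RMS (m):", np.sqrt(np.mean((A @ x - d) ** 2)))
```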
Thanks to its current accuracy and maturity, altimetry is considered a fully operational observing system dedicated to various applications such as climate studies. Altimeter measurements are corrected for several geophysical effects in order to isolate the oceanic variability, and the tide correction is one of the most critical. The accuracy of tidal models has improved greatly over the last 25 years, reaching centimetric accuracy in the open ocean. The latest release of the global tidal model, referenced as FES2014b, has been distributed since mid-2016.
The underlying unstructured mesh resolution of FES2014b was increased in areas of interest such as shallow waters and the slopes of the continental shelves, and the error of the pure hydrodynamic ocean solution was divided by a factor of 2 compared to a previous version (FES2004). Still, significant errors remain in some regions, due to the omission of compound tides and to bathymetric errors (in shelf/coastal seas), seasonal sea ice effects and the lack of available data for assimilation (in the high latitudes).
To address the reduction of these errors and to face the new challenges of the tide correction for high-resolution altimetry, in particular the forthcoming SWOT mission, a new global tide model, FES2022, has been developed, focusing particularly on shallow waters and high latitudes.
This new tidal solution uses higher spatial resolution in coastal areas, systematically extending the model mesh to the narrowest coastal systems (fjords, estuaries, …), and the model bathymetry has been upgraded in many places thanks to an international collaboration effort. The hydrodynamic modelling also benefits from further improvements, which allow producing very accurate hydrodynamic simulations. The use of the most recent altimeter standards and of high-inclination altimeters such as CryoSat-2, SARAL/AltiKa and even Sentinel-3 also allowed retrieving tide observations at the highest latitudes to help improve polar tide modelling. Preliminary results show a great improvement of the FES2022 hydrodynamic solution compared to that of FES2014. The assimilation procedure is ongoing, and a specific loading tide solution will also be produced in the coming months. Some validation results of the new FES2022 global tidal solution are presented here.
Surface wind is an essential variable for the study of ocean dynamics. It is one of the main agents driving ocean surface currents, since it is directly related to the generation of mechanisms such as Ekman currents. The role of surface winds is especially relevant in complex coastal areas such as the Gulf of Cadiz (southwestern Iberian Peninsula), where the wind field is highly variable in the spatio-temporal domain. Therefore, improving weather models to accurately simulate the wind speed in complex areas is needed for the study of both the ocean and the associated climate variability. Unlike in-situ data from weather stations and buoys, which allow the evaluation of the temporal evolution of the model simulations, wind speed from satellite altimetry enables the analysis of the spatial variability of the wind speed along the satellite track over the ocean.
This work presents the capabilities of wind speed (WS) retrievals from the altimeters on board the Copernicus satellites Sentinel-3A/B (S3A/B) for the spatial validation of WS outputs from the Weather Research and Forecasting (WRF) model over the complex coastal area of the Gulf of Cádiz (GoC) in the southwestern Iberian Peninsula. In order to assess the applicability of the altimetry data for this purpose, three different WS data sources over the GoC area were compared: in-situ measurements (coastal weather stations and an offshore moored multi-instrument buoy), S3A/B altimetry data at a 20 Hz posting rate, and the WRF model output. Outputs from the WRF model over the area were evaluated against in-situ data, with satisfactory results (WS bias < 0.75 m/s). The spatial variability of the WS derived from the WRF model was then compared with the along-track altimetry-derived WS. The analysis was carried out under different synoptic wind conditions. Qualitative and quantitative results (average RMSE < 1.0 m/s) showed the agreement between both data sets under low/high wind regimes, demonstrating that the spatial coverage of satellite altimetry enables the spatial validation of high-resolution numerical weather prediction models over water-covered surfaces, including coastal areas (up to 3 km from land).
This work aims to foster the use of altimetry data for improving the knowledge of wind speed and sea surface circulation over complex areas where the availability of in-situ measurements is limited or nonexistent. The spatial coverage of satellite altimetry enables the spatial validation of high-resolution NWP models over any water-covered surface of the world, including coastal areas.
The East Coast of the U.S. has been identified as one of the current sea level rise hotspots. Despite highly developed in-situ infrastructure, sea level changes on the shelf, which could elucidate wider links between open-ocean sea level and tide gauge records, remain poorly resolved. To bridge this observation gap, the synthetic aperture radar altimeter (SRAL) can on some occasions circumvent the resolution limitations of conventional altimetry, especially in the coastal zone. In this work, we use Sentinel-3A SRAL, which provides global coverage over a sufficiently long period (>5 years) with state-of-the-art coastal retrackers. The aim of this work is to partition the variability in the sea level signal observed by Sentinel-3A SRAL into components by physical source, quantifying the explained variability and its spatial structure.
The primary data source we use is 20 Hz along-track sea surface height anomaly (SSHA) at comparison points set along nominal Sentinel-3A ground tracks within 250 km of selected tide gauges along the U.S. East Coast. To create SSHA time series at the comparison points, we use the SARvatore service on G-POD, which provides a 20 Hz L2 product with the SAMOSA++ retracker. To separate the components of observed variability, we use reanalysis and ocean model data to analyse the altimeter signal and the sterodynamic component and its constituents, including wind, freshwater discharge, wave setup, ocean currents, and coastally trapped waves, which all contribute significantly to ocean dynamics on the shelf. Using a statistical framework, we infer covariant structures and quantify the contribution of each source to the total observed variability.
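The attribution step can be illustrated with a simple least-squares sketch in which the SSHA series at a comparison point is regressed on candidate driver series and the variance explained by each driver is reported. The drivers here are random placeholders for the reanalysis and ocean-model fields actually used, and the study's full statistical framework is more elaborate than this.

```python
# Sketch of regression-based variance partitioning of an SSHA time series.
import numpy as np

rng = np.random.default_rng(3)
n = 300                                  # number of altimeter cycles/passes
drivers = {name: rng.normal(size=n) for name in
           ["wind", "discharge", "wave_setup", "currents"]}
ssha = (0.8 * drivers["wind"] + 0.5 * drivers["discharge"]
        + 0.2 * drivers["wave_setup"] + rng.normal(0, 0.5, n))

X = np.column_stack(list(drivers.values()))
beta, *_ = np.linalg.lstsq(X, ssha - ssha.mean(), rcond=None)
total_var = ssha.var()
for name, b, col in zip(drivers, beta, X.T):
    # For near-orthogonal drivers, var(b * driver) approximates the share
    # of SSHA variance attributable to that driver.
    print(f"{name:10s} explains ~{100 * (b * col).var() / total_var:.0f}% of variance")
```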
Attribution to sources of variability increases the confidence in linking observed sea level on the coastal shelf and at the coast, particularly at tide gauges, which often provide multidecadal or even century-long records of sea level. This enables not only the comparison of oceanographic applications of SAR altimetry to other observations and ocean models, but also further applications of these spatial variability structures to sea level reconstructions and projections at longer timescales, including the provision of a baseline level for extreme sea level.
Through different projects, the Center for Topographic studies of the Ocean and Hydrosphere (CTOH) contributes largely to sea level studies in coastal areas. It has developed the X-TRACK software dedicated to the reprocessing of coastal altimetry data. X-TRACK is now a mature sea level product distributed worldwide by AVISO+ (https://www.aviso.altimetry.fr) and cited in many scientific publications. It consists of long time series of SLA from most altimetry missions, processed homogeneously, as well as along-track empirical tidal constants. The latter provide independent synoptic data all along the coastal ocean, in addition to tide gauge data, for tidal studies, validation of tidal models or assimilation into tidal models. In order to continue providing the most complete sea level datasets to coastal users, the CNES L2P product (https://www.aviso.altimetry.fr/en/data/products/sea-surface-height-products/global/along-track-sea-level-anomalies-l2p.html) will now be integrated into the X-TRACK processing chain: 12 altimetry missions (from ERS-1 to Sentinel-3B), covering the 1992-2021 time period.
The CTOH also contributes to the development of new coastal sea level products specifically designed for climate studies, in close collaboration with ESA, TUM, CLS, NOC and SKYMAT. As part of the ESA Climate Change Initiative project, the Adaptive Leading Edge Subwaveform (ALES) retracker (Passaro et al., 2014) and the X-TRACK software (Birol et al., 2017), previously validated and successfully applied to coastal sea level research, have been combined for the first time in order to reprocess 18 years of sea level anomaly data (Jan. 2002 to Jan. 2020) at high rate (20 Hz) from the Jason missions. This new coastal sea level product, called X-TRACK/ALES (https://climate.esa.int/en/projects/sea-level/data/), significantly extends the spatial coverage of sea level altimetry data in the coastal direction, now reaching a distance of 1.2-4 km from the coast on average (Birol et al., 2021). A new network of altimetry-based virtual stations in the world's coastal zones has also been derived from this dataset (see the work by Cazenave et al.).
In the framework of the ESA Fundamental Data Records for Altimetry (FDR4ALT) project, which aims to reprocess the ERS-1/2 and Envisat missions with the most recent standards, the CTOH is also participating in the definition of the best state of the art in terms of processing, algorithms and corrections, in order to deliver to the coastal community a high-frequency, long-term thematic data product dedicated to the coastal zone.
Because of the repeat periods of the satellite altimetry missions (from 10 days for the Topex/Jason suite to more than one year for CryoSat-2), the high-frequency ocean tidal signals are aliased in the altimeter sea surface height measurements to periods that correspond to other ocean dynamics processes. To access the ocean circulation dynamics with the centimetric accuracy expected by users, it is thus necessary to accurately remove the ocean tide signals from the altimeter measurements.
With amplitudes ranging from several centimetres to several metres, the ocean tide correction is one of the largest corrections applied to altimetry sea surface heights on the shelves and in coastal regions. To remove this signal, global tidal models are used, such as FES2004, GOT4.10 and FES2014. However, these models still show large errors on the continental shelves. In some regions, the errors can reach tens of centimetres, as the amplitude of the tidal signals is large and more complex to model due to non-linear interactions between the tidal waves and the shallow bathymetry. With new and future satellite altimetry techniques (SAR, wide-swath) that make it possible to reach ever more coastal areas and to resolve the ocean dynamics at ever finer scales, the need for accurate coastal tidal model solutions is salient.
Today, specific efforts are made to improve the tidal models in the coastal regions, thanks to high-resolution modelling and to the use of coastal observations (from altimetry and tide gauges) to constrain the models. Various models are thus available, at global and regional scales. These models are not always provided in the altimetry products, but they could be of high interest to locally improve the coastal altimetry sea surface height retrievals.
In the frame of the HYDROCOASTAL project funded by the European Space Agency, NOVELTIS performed an inventory of the available and most recent global and regional tidal models that could potentially be used as corrections for coastal altimetry data. The performance of these models was compared with a specific focus on coastal and continental-shelf regions where the tidal corrections are particularly critical for coastal altimetry observations. Finally, some recommendations were made about the models that perform best depending on the regions.
Sentinel-3 is part of the series of Sentinel satellites responsible for a continuous 'health check' of planet Earth under the umbrella of the Copernicus program. The Copernicus program will launch four Sentinel-3 satellites (A to D) to achieve this goal from 2016 into the 2030s. EUMETSAT's ground segment is responsible for the processing of the Sentinel-3 altimetry data in the marine environment: open ocean, coastal zones and sea level within sea-ice leads.
Since 2016 Sentinel-3A’s SRAL, Synthetic Aperture Radar Altimeter, has been successfully contributing to the continuity of the sea level climate data record. Sentinel-3B launched in 2018 completes the currently operational mission.
Besides sea level, wave height and wind speed are also retrieved.
To further improve the quality of the datasets, the processing baseline has evolved considerably over the years of the mission.
The marine datasets, besides being made available to the general public, are operationally used by the Copernicus services CMEMS (Copernicus Marine Environment Monitoring Service) and C3S (Copernicus Climate Change Service).
This presentation will provide an overview of the latest evolutions of the Sentinel-3 SRAL/MWR processing, the relations between the Sentinel-3A and -3B processing, and the strategy that EUMETSAT has adopted to provide a consistent long-term data set while continuing to evolve and improve the processing algorithms and standards. The latest reprocessing, "BC 005", is the baseline for a quality analysis of the Sentinel-3A Marine Centre data in a multi-mission setting. The latter allows for revisiting the status of Sentinel-6 and Jason-3 in comparison with Sentinel-3A and Sentinel-3B. To this end, this presentation aims at providing multi-mission time series of the main climate records (sea level, significant wave height, wind speed and wet troposphere path delays) in both the open ocean and coastal zones; quantifying crossovers (mono- and multi-mission); and providing a 5-year global assessment of SAR mode versus Pseudo-LRM.
Satellite radar altimetry is designed to measure heights at sea over a mesh of ground tracks. It provides the true water level measured by an observer at the coast. It is also capable of directly sensing sea state, thus providing along-track measurements of wave height and wind speed.
In recent years, there has been great interest in improving the quality of altimeter data in the coastal zone, a region where data were so far systematically flagged and rejected due to uncertainties in the corrections and the complexity of the radar returns. Considerable research has been carried out into overcoming these problems and extending the capabilities of radar altimeters as close as possible to the coast.
Improvements in coastal altimetry are now bringing new possibilities to extend scientific studies at the interface between ocean and land, allowing the exploitation of synergies with the available in-situ measurements collected at that interface. Those improvements also come from technological advances in altimeters (e.g. the SAR-mode altimeters on board CryoSat-2 and Sentinel-3).
A prominent role is played by the HYDROCOASTAL project, funded by the European Space Agency (ESA) and started in February 2020, which aims at generating a global data set for exploitation in rivers, estuaries, deltas and coastal seas. It will permit better assessment of the complex spatial and temporal variability in these regions, which would be difficult to detect with in-situ observations alone.
The northern Adriatic Sea is an interesting laboratory in which to study the continuum from land to sea. Several important inland water basins are found near the coastal zone: the lagoons of Venice and Grado-Marano, and the Po River with its delta-lagoon system. These areas rest in a fragile balance, subjected to physical, geological and biological processes: sea level rise, storm surges, shoreline erosion, subsidence, eustatism, habitat variability, and ecosystem dynamics.
In this work, we will start by examining the Sentinel-3 and CryoSat-2 altimetric tracks using the state-of-the-art products from the ESA G-POD processor. The aim is to look for possible signatures of coastal processes in the along-track altimetric profiles and time series, which will then be validated by comparison with local in-situ and model data and also by looking at other satellite observations. Some of the possible scientific applications are the unveiling of recent trends in sea level rise, the sampling of wind speed and wave height at the sea surface up to the coast at an unprecedented level of detail, and the analysis of the transition zone at the interface between fresh and salt water, to name a few.
Recent studies (e.g., Dinardo et al., 2015; Egido et al., 2021) have demonstrated that for unfocused SAR the current posting rate of 20 Hz is too low, because the speckle noise fluctuates over shorter along-track distances than the 20 Hz resolution (~160 m instead of ~320 m). This statistically independent information is simply disregarded at a 20 Hz posting rate and, hence, the RMSE of the 20 Hz parameter estimates can be improved by 10-30% with increased posting rates.
This insight renders previously made comparisons between unfocused SAR processing and FF-SAR, as in Egido and Smith (2017), unfair. Our main objective is therefore to perform a more representative comparison of unfocused SAR and Fully-Focused SAR waveform properties and parameters from an identical processor. For our analysis, we consider FF-SAR and unfocused SAR waveforms produced by our recently implemented multi-mission FF-SAR backprojection algorithm based on Kleinherenbrink et al. (2020). Here, the unfocused SAR waveforms are an “emulated” by-product of the FF-SAR processing, as in Egido et al. (2021). This allows for a perfectly controlled and fair comparison of different coherent integration times at very high posting rates; indeed, all processing settings and input data are identical. The emulation of unfocused SAR waveforms is performed by splitting the fully-focused echogram into bursts again, followed by coherent integration over each burst. This provides essentially the Doppler beam stack, as it contains the contribution of each single burst at an integration time corresponding to ~320 m ground resolution. Additionally, we will compare our emulated unfocused SAR waveforms to the available L1b product.
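For illustration, the emulation step can be sketched as follows. The snippet is a minimal sketch under stated assumptions: it takes a complex FF-SAR echogram already focused to fixed ground locations (shape: along-track looks × range gates), so that a plain coherent sum per burst stands in for the beam formation; windowing, alignment and all other processor details of the actual algorithm are omitted, and the burst length is illustrative.

```python
import numpy as np

def emulate_unfocused_sar(ffsar_echogram, pulses_per_burst=64):
    """Emulate unfocused SAR burst waveforms from a complex FF-SAR
    echogram of shape (along_track_looks, range_gates).

    The echogram is split back into bursts and integrated coherently over
    each burst (a plain complex sum here, since the data are assumed to be
    already focused to fixed ground locations); taking the power yields one
    waveform per burst, i.e. essentially the Doppler beam stack.
    """
    n_looks, n_gates = ffsar_echogram.shape
    n_bursts = n_looks // pulses_per_burst
    # Drop trailing looks that do not fill a complete burst.
    trimmed = ffsar_echogram[:n_bursts * pulses_per_burst].reshape(
        n_bursts, pulses_per_burst, n_gates)
    burst_waveforms = np.abs(trimmed.sum(axis=1)) ** 2  # coherent sum, power
    return burst_waveforms  # multilook with burst_waveforms.mean(axis=0)
```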
Our preliminary results for Sentinel-3 indicate that the speckle noise patterns of multilooked FF-SAR waveforms are almost identical to those of unfocused SAR at high posting rates. Consequently, similar estimated numbers of looks (ENLs) were obtained. Furthermore, the averaged waveform shapes show almost no differences. These results can intuitively be motivated by Plancherel’s theorem, which guarantees energy conservation of the Fourier transform. This suggests that some earlier disagreements between averaged FF-SAR and L1b unfocused SAR waveforms might be attributed to misalignments in the processing. Causes might be, e.g., windowing settings, FF-SAR single-look waveform alignment and the “range walk” (Scagliola et al., 2021); the latter is known to increase the estimated wave height. If the shape of the averaged waveforms can be confirmed to be similar, then the confidence in SAMOSA-based retracking of multilooked S3 FF-SAR waveforms can be increased.
Dinardo, S., Scharroo, R., Benveniste, J., 2015. SAR Altimetry at 80 Hz: Open Sea, Coastal Zone, Inland Water. Ocean Surface Topography Science Team Meeting.
Egido, A., Dinardo, S., Ray, C., 2021. The case for increasing the posting rate in delay/Doppler altimeters. Advances in Space Research 2, 930–936.
Egido, A., Smith, W., 2017. Fully Focused SAR Altimetry: Theory and Applications. IEEE Transactions on Geoscience and Remote Sensing 1, 392–406.
Kleinherenbrink, M., Naeije, M., Slobbe, C., Egido, A., Smith, W., 2020. The performance of CryoSat-2 fully-focussed SAR for inland water-level estimation. Remote Sensing of Environment, 237, 111589.
Scagliola, M., Recchia, L., Maestri, L., Giudici, D., 2021. Evaluating the impact of range walk compensation in delay/Doppler processing over open ocean. Advances in Space Research 68, 937–946.
When in need of an observation-based sea level data set, there are several options. Tide gauge stations along the coast have been measuring relative sea level changes for many decades at high temporal resolution (often several times an hour). Tide gauges are by nature point measurements; hence, tide gauge observations alone are not sufficient to capture large-scale ocean dynamics. This can be achieved by assimilating the observations into ocean models, resulting in the best estimate of the full ocean state back in time (hindcast) or as a basis for future projections (forecast). Despite being the best estimate of the ocean state, such a product is no longer solely observation based and will therefore contain error sources from both the tide gauge observations and the ocean model, including all data used by the ocean model.
Another option is to use satellite altimetry. Satellites measure the absolute sea level along individual tracks. As these tracks are repeated days apart, one is given a choice: either to use instantaneous observations from a single track, or to collect several tracks over a time period, resulting in better spatial coverage but filtering out the high temporal variability of the sea level changes. This is sufficient for some purposes, but for others the high temporal variability is essential.
To bridge this gap we propose a third option. Shortening the time window over which altimetry tracks are collected to 3 days preserves most of the sea level variability while providing sufficient spatial coverage. Combining these 3-day mean altimetry data sets with 3-day mean tide gauge observations and error statistics from a storm surge model using the DMI Optimal Interpolation (DMI-OI) scheme resulted in an experimental 3-day mean sea level data set covering the Baltic Sea for the year 2017. This work was begun in the ESA project Baltic+SEAL and continued at the National Center for Climate Research in Denmark. The presentation will focus on the development of the methodology, the processing of the 3-day data set and the validation against independent observations. The experimental data set captures the overall structure and variability of the Baltic Sea sea level well and shows potential as an alternative to traditional altimetry products with lower temporal resolution.
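For readers unfamiliar with optimal interpolation, the analysis step at the core of such a scheme can be sketched as below. This is a generic Kalman-type OI update under assumed error covariances, not the actual DMI-OI implementation; all variable shapes are illustrative.

```python
import numpy as np

def oi_analysis(background, obs, H, B, R):
    """One optimal-interpolation analysis step (Kalman-type update).

    background : first-guess gridded 3-day mean sea level, shape (n,)
    obs        : 3-day mean observations (altimetry, tide gauges), shape (m,)
    H          : observation operator mapping grid to observations, (m, n)
    B, R       : background and observation error covariance matrices
    """
    innovation = obs - H @ background
    gain = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # OI (Kalman) gain
    return background + gain @ innovation
```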
Ocean observation is at the heart of many research projects. Various publications have shown that accurate monitoring of sea level can contribute to the understanding of climate change. In the coastal environment, precision sensors include coastal tide gauges, laser altimeters and Global Navigation Satellite System (GNSS) buoys. The latter serve various objectives, such as the calibration of satellite altimeters and coastal tide gauges, the elaboration of marine maps, and the precise localization of measurements made by sensors attached to the buoy. We can mention in particular the GNSS buoys developed by the French institutes SHOM, INSU and IPGP.
Current GNSS buoy systems use classical differential positioning techniques such as Real Time Kinematic (RTK) or Precise Point Positioning (PPP) for height and sea state measurements. These techniques are limited to temporal resolutions of the order of one hertz to provide a positioning of the buoy with an accuracy of the order of ten centimetres. Such a resolution does not allow the rapid variations of the sea state to be fully observed. To overcome this limitation, we propose a new approach allowing centimetric positioning accuracy with an improved temporal resolution of 50 Hz.
The proposed approach is based on the observation of the phase difference perceived by an antenna located on a buoy at sea and a fixed antenna located on the ground in its close vicinity. The GNSS phase measurement allows centimetric positioning accuracy with integration times down to 1 ms in the case of GPS L1 signals. In this work, the position variations of the antenna at sea are obtained from the phase difference observations using multiple linear-circular regression. Indeed, the phase difference is an angle that follows a linear model depending on the three parameters that define the relative position of the antenna at sea with respect to the ground antenna. With the regression approach, each satellite signal provides an estimate of these parameters. It is then possible to improve the accuracy and temporal resolution of the position estimates by fusing the information obtained from the available satellite signals. A theoretical study shows that, for a classical GPS satellite constellation, it is possible to reach millimetric accuracy at a measurement rate of 50 Hz.
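Ignoring the circular (wrapped) nature of the observations and the integer ambiguities, the linear model underlying this regression can be sketched as follows; the actual estimator described above is a multiple linear-circular regression operating on the wrapped angles, so the plain least-squares solve below is only an illustrative simplification.

```python
import numpy as np

GPS_L1_WAVELENGTH = 0.1903  # metres

def estimate_baseline(phase_diffs, los_unit_vectors,
                      wavelength=GPS_L1_WAVELENGTH):
    """Estimate the sea antenna position relative to the ground antenna.

    Each (unwrapped, ambiguity-free) phase difference is modelled as
    (2*pi/wavelength) * e_i . b, with e_i the unit line-of-sight vector to
    satellite i and b the 3-component baseline; one least-squares solve
    fuses the information from all visible satellites.
    """
    A = (2 * np.pi / wavelength) * np.asarray(los_unit_vectors)  # (m, 3)
    b, *_ = np.linalg.lstsq(A, np.asarray(phase_diffs), rcond=None)
    return b  # baseline vector in metres
```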
An experiment in a controlled environment has been performed to assess the proposed GNSS buoy approach. A repeated and known trajectory simulating the movement of an antenna at sea is executed in order to determine the accuracy of the obtained results. It is shown that, on real signals, centimetric accuracy is achieved at a measurement rate of 50 Hz.
Coastlines are not fixed in position but move continuously. One group of factors influencing coastline position are morphological processes such as sediment transport, accretion or erosion by waves, storm surges or currents, as well as morphological changes associated with vegetation (through sediment trapping) or human interventions. Furthermore, vertical land motion leads to a change in relative sea-level height, which in turn leads to a larger or smaller inundated area; a change in absolute sea-level height has the same effect. Additionally, all changes in sea-level height can lead to changes in coastal morphology. Coastal zones are therefore threatened by several factors, including climate-change-induced sea-level change, morphodynamic responses and local issues such as uplift or subsidence, e.g. due to the extraction of water or oil.
We are investigating ways to separate these three groups of influences on changes in coastal geometry from each other. For this we will analyse coastal sea-level heights from multi-mission radar altimetry in combination with other height datasets such as tide gauge measurements, GNSS observations, land elevation data from LiDAR, and bathymetry. The combined assessment of these datasets also allows vertical land motion to be resolved. Series of snapshots of coastal land-water interfaces will be extracted from optical remote sensing images using Sentinel-2 and Landsat data.
In this contribution, we will show a case study for the barrier island Terschelling (Netherlands). First, we will show an accuracy assessment of the retracked altimetry products. The GNSS observations will make it possible to compare the sea-level heights from altimetry to the tide gauge measurements, which will help us explore whether, and by how much, sea level is changing in the vicinity of the coast. Additionally, this will give us information about the amount of vertical land motion in this area. Furthermore, we will investigate how well the optical remote sensing images used for coastline extraction can be matched in time and space with the passes of the radar altimeters.
Coastal zones, estuaries and inland waters are among the environments most affected by anthropogenic impact and climate change, and they face multiple risks from coastline retreat, flooding and pollution. Accurate knowledge of water height is of major importance to analyze and understand the causes and drivers of changes and to plan protection measures. Satellite delay-Doppler altimetry (DDA), also called SAR altimetry, provides improved results compared to conventional altimetry (CA).
The goal of this study is to evaluate state-of-the-art and enhanced dedicated coastal and inland retrackers in order to understand their limitations and to plan improved processing and new missions.
The HYDROCOASTAL project brings together coastal and inland water zone measurements by SAR altimeter. Several retrackers, dedicated to one or both zones, are applied in the processing of CryoSat-2 and Sentinel-3A/B altimeter data.
The University of Bonn contributes to the HYDROCOASTAL project with enhanced retracking and validation efforts. The Spatio-Temporal Altimeter Retracker for SAR altimetry (STARS) is an enhancement of the STAR retracker for low resolution mode (LRM) and uses the functional waveform model Signal model Involving Numerical Convolution for SAR (SINCS) to retrack the delay-Doppler waveforms.
The geophysical parameters estimated by retracking are evaluated in each region for all available retrackers. At the University of Bonn the validation activities focus on the German Bight and the Baltic Sea coastal region, including the Elbe estuary. The goal is to characterize the product performance with an estimation of the data accuracy. A cross-validation analysis of the new SAR products is performed against other altimeter products, model data and in-situ data. The study area has been used for the validation of radar altimeter data in the open ocean and near the shore in previous work.
The study presents and discusses the validation results based on the validation matrix agreed with the project partners. The resulting statistics are compared, in a few cases, with the output of the in-house validation strategy matrix.
For almost thirty years, satellite radar altimeter missions led by the National Aeronautics and Space Administration (NASA) and partners at the French space agency, the Centre National d’Études Spatiales (CNES), have measured ocean surface topography—the hills and valleys of the ocean surface—to produce a continuous data record from the altimetry reference mission series (TOPEX/Poseidon; Jason-1, -2, and -3; Sentinel-6 Michael Freilich). Other national and international partners have joined the effort, including the National Oceanic and Atmospheric Administration (NOAA), the European Space Agency (ESA), the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), the Canadian Space Agency (CSA), and the United Kingdom Space Agency (UKSA). This historical partnership continues to provide critical scientific understanding for key climate and research topics related to the ocean, coastal zones and land surface water, but it also contributes to applied uses of the data record by operational agencies and to many practical and private-sector applications. Significant investments made in these satellite systems are yielding benefits to society in a broad spectrum of operational and practical capacities—which can be expected to grow as the satellite assets expand.
The applications of data and information products derived from these missions include weather prediction, assessments of coastal impacts (storm surge, coastal currents), tracking of river discharge to the ocean and its interaction with the coastal regime, fisheries management, marine transport, disaster risk management related to sea level change and flooding (both coastal and inland), water resources management, weather and climate forecasting and assessments, and biodiversity impacts, among others. We will highlight some work of the SWOT Early Adopters (EAs) – individuals and organizations who constitute part of the future user community of SWOT data. The EAs are developing data intake and management systems in order to streamline the use of SWOT soon after launch, thereby shortening the time required to incorporate the data into their operational activities. We seek to communicate and illustrate the many applications of these satellite missions, highlighting the value of the resource investments to decision makers and to scientific and operational organizations.
Precise bathymetry estimates for shallow water areas are important for industry and for modelling phenomena such as tides, currents and water temperature. Obtaining direct measurements of bathymetry using airborne lidar or echo soundings from ships can be a tedious and expensive task. The data from the Advanced Topographic Laser Altimeter System (ATLAS) carried by ICESat-2 offer a fast and inexpensive way to obtain accurate coastal bathymetry, which can be used alone or together with global satellite imagery, such as that from Sentinel-2, to create bathymetric maps in areas where no in situ data from ships or airborne lidars are available.
The ICESat-2 altimeter uses a green laser with a wavelength of 532 nm, which can penetrate the water surface and provide information about the distance to the ocean bottom if the conditions allow it, i.e., preferably over clear and calm waters for depths up to 40 m.
Here we present the validation of a simple empirical method (Ranndal et al., 2021) to obtain bathymetry profiles using the geolocated photon data (ATL03) from ICESat-2. The bathymetry profiles obtained with this statistical method are compared to other bathymetry data sets, such as echo soundings and satellite-derived bathymetry from WorldView-2 imagery, in the Great Barrier Reef, Australia, and in the area around Sisimiut, Greenland. Comparisons with machine-learning-derived bathymetry profiles reveal that the statistical model provides similar results.
Some of the challenges concerning the extraction of bathymetry profiles from ICESat-2 photon data, such as apparent multiple sea surfaces and the difficulty of distinguishing between bathymetry and sea surface returns in very shallow waters, are also discussed.
Finally, the potential of using a similar method for extracting inland water bathymetry is addressed.
Ranndal, H., Sigaard Christiansen, P., Kliving, P., Baltazar Andersen, O., & Nielsen, K. (2021). Evaluation of a Statistical Approach for Extracting Shallow Water Bathymetry Signals from ICESat-2 ATL03 Photon Data. Remote Sensing, 13(17), [3548]. https://doi.org/10.3390/rs13173548
Various sources of sea level data (in-situ, remote sensing, hydrodynamic models) refer to different vertical datums, which tends to limit the full potential of understanding sea level dynamics. The fundamental concept of this study is that, by using a network of tide gauges referred to the geoid (an equipotential surface of the Earth, thus representing an idealized sea level), a coherent comparison can be made with all other sea level sources. The methodology is tested in the dynamic and complex coastal zone of the Baltic Sea, where satellite altimetry has an essential role in validating the results. In this study, the satellite altimetry data of the ESA-funded Baltic+ SEAL project (specifically adapted for the coastal and sea-ice conditions of the Baltic Sea) play an essential role in verifying the methodology for both coastal and offshore areas.
The methodology uses a dense network of 73 geoid-referred tide gauges spanning the entire Baltic Sea coastline. A comparison between tide gauge data and regional hydrodynamic models (whose vertical datum is unspecified) reveals a temporal and spatial bias. After correcting the hydrodynamic model by spatial interpolation methods (e.g., linear, inverse distance weighted, least-squares collocation), the corrected model data can be validated with other independent sources (e.g., satellite altimetry, shipborne GNSS profiles, airborne laser scanning). The Sentinel-3A and Jason-3 along-track 20 Hz data confirm the corrected model to be more accurate, with a root mean square error of 3‒5 cm. Similar validation was performed using airborne laser scanning and shipborne GNSS profiles, with standard deviation estimates generally varying around 2–4 cm. The results show that satellite altimetry is vital for validating the offshore areas where tide gauge data are unavailable and model data may be questionable. The applied methodology allows not only obtaining more realistic sea level data in the coastal and offshore areas but also identifying: (i) areas of discrepancy between hydrodynamic and geoid models; (ii) problematic areas and seasons for satellite altimetry; (iii) uncertain tide gauges. The intention is that the proposed methodology can be applied in other sea areas to enhance our understanding of realistic sea level data from coast to offshore, which is vital for marine engineering, navigation, and climate studies.
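As an illustration of the bias-correction step, the sketch below spreads the model-minus-gauge bias over the model grid with inverse distance weighting, one of the interpolation options mentioned above; coordinates are assumed planar for simplicity, and the linear and least-squares collocation variants are not shown.

```python
import numpy as np

def idw_bias_correction(grid_xy, gauge_xy, gauge_bias, power=2.0):
    """Spread tide-gauge-derived model bias over the model grid by inverse
    distance weighting. `gauge_bias` is the model-minus-gauge difference at
    each geoid-referenced tide gauge; coordinates are assumed planar."""
    grid_xy, gauge_xy = np.asarray(grid_xy), np.asarray(gauge_xy)
    # Pairwise distances between grid nodes and gauges, shape (G, K).
    dist = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=-1)
    weights = 1.0 / np.maximum(dist, 1e-6) ** power
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ np.asarray(gauge_bias)  # bias field on the model grid
```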
It is well-known that climate modes such as the El Niño Southern Oscillation (ENSO) or the Indian Ocean Dipole (IOD) lead to extreme sea levels in specific ocean regions. However, at the regional scale, sea level responds to changes in the climate system along different pathways, e.g. via the ocean precipitation-evaporation flux, ocean warming, or freshwater flux following melt events. Therefore, in this study we assess for the first time extremes in the different contributors to sea level change in a consistent way, from radar altimetry and space gravimetry data.
In general, sea level change is driven by mass influx into the ocean and volumetric (steric) expansion of the sea water. Mass changes include melting of the ice sheets in Greenland and Antarctica as well as of land glaciers, but also variations in terrestrial hydrology and, on regional scales, internal mass transports within the ocean. Steric sea level change results from variations in temperature and salinity, which affect the density of the ocean water and lead to expansion or contraction of the water column.
Significant events, such as ENSO, severely impact individual mass and steric contributions, leading to subsequent consequences in other regions of the Earth. Furthermore, the steric and mass-driven contributions are linked together through the energy cycle, as a warmer ocean region can induce mass transports through more precipitation over land. Therefore, we suggest that separating the total sea level change observed by satellite altimetry into individual ocean mass changes, observed by the Gravity Recovery and Climate Experiment (GRACE) satellite mission, and steric sea level contributions, observed by in-situ profiling floats from the Argo program, provides a new way of understanding the processes underlying strong climate modes. In contrast to individually processing each data set, the global fingerprint inversion (Rietbroek et al., 2016) enables a consistent combination of satellite altimetry and GRACE gravity observations, resulting in a detailed global and regional sea level budget. The mass contributions are separated on basin scales, and steric sea level change is estimated for two ocean layers (upper 700 m and the deep ocean) with a spatial resolution of about 0.25 degrees.
In this work, we analyze the results from the global fingerprint inversion for individual mass and steric components in order to study the response to selected extreme ENSO and IOD events. The analysis is conducted on global and regional scales in order to improve the understanding of the interconnected processes. This will help to better understand past extreme events and aid the prediction of such events in the future. Preliminary results from comparing ocean heat transports derived from the steric sea level change reveal significant differences between the observation-driven inversion solution and a state-of-the-art ocean reanalysis product, especially before 2005.
Tropical cyclones (TCs) are characterized by strong rotational winds with high spatial and temporal variability (in speed and direction). As this kind of event generates extreme waves that impact maritime navigation and coastal areas and heightens ocean-atmosphere interactions, the detailed description of directional wave spectra under TC conditions is a topic of great relevance to oceanographic research and engineering applications. However, as mentioned by Kudryavtsev et al. (2015), classical wave modelling often fails to correctly represent the wave field under TC conditions, because the classical wave growth laws established for quasi-stationary or homogeneous situations are no longer valid under these fast-moving systems. Several studies have shown that the wave fields generated by TCs are asymmetric, with higher significant wave height in the right front quadrant (with respect to the TC track) (Young, 2006; Kudryavtsev et al., 2015, 2021; Tamizi & Young, 2020; Shi et al., 2021). Simulation studies have shown that this asymmetry increases with the TC displacement speed up to a certain limit, because waves are trapped and remain under the action of the high winds in the frame moving with the cyclone (Kudryavtsev et al., 2015, 2021). Correlatively, in the right forward quadrant of the storm the wave spectra look like typical fetch-limited spectra with a unimodal shape (Young, 2006; Hu and Chen, 2011). However, in the other quadrants, and in regions distant from the TC center, the wave spectra are much more complex. For example, the spectral shape tends to become bi- and tri-modal in direction in the rear and front left quadrants, although it may remain unimodal in frequency (Young, 2006; Hu and Chen, 2011; Esquivel-Trava et al., 2014; Hwang & Walsh, 2018; Tamizi & Young, 2020). It is known that the spectral shape is partially controlled by the nonlinear interactions between waves, which are only approximately represented in wave models (Hasselmann et al., 1985). Young (2006) suggested that in these extreme events the nonlinear interactions are the dominant process controlling the shape of the omni-directional spectra, even if the wave energy is contained in several wave components with different directions. However, the modelling work of Hu & Chen (2011) suggests that all the source terms could have an impact on the distribution of the wave energy.
TCs are relatively localized events in time and space, and their observations from satellites or in-situ devices are rather scarce. Recently (October 2018), the Surface Waves Investigation and Monitoring (SWIM) instrument, on board the China-France Oceanography Satellite (CFOSAT) mission, was launched, allowing us to better observe these extreme events. Indeed, SWIM provides directional ocean wave spectra every 90 km, with detailed information about waves with dominant wavelengths between 70 and 500 m (Hauser et al., 2021). The high quality of SWIM data has been demonstrated by a study on the assimilation of SWIM spectra in a wave model, showing significant improvements of the model results in the Southern Ocean, which is characterized by waves highly forced by strong winds (Aouf et al., 2021). Therefore, SWIM data are useful to analyze not only the directional properties of waves but also the probability of extreme waves at the global scale (Le Merle et al., 2021).
One objective of our study is to characterize the shape of the wave spectra in TCs by analyzing several spectral shape parameters, such as the directional spreading of the directional spectra and the peakedness parameter of the omni-directional spectra introduced by Goda (1976) (noted Qp hereafter). To do so, we have collocated SWIM observations with 59 different events at the global scale between May 2019 and August 2021, and analyzed the wave parameters in the different quadrants, for different distances to the TC center and for different translation speeds.
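For reference, Goda's peakedness parameter is Qp = (2 / m0^2) ∫ f E(f)^2 df, with m0 the zeroth spectral moment, and can be computed directly from a sampled omni-directional spectrum as in the sketch below; here E(f) is assumed to have already been obtained by integrating the SWIM directional spectrum over direction.

```python
import numpy as np

def goda_peakedness(freq, spectrum):
    """Goda (1976) peakedness parameter of an omni-directional spectrum:
    Qp = (2 / m0**2) * integral(f * E(f)**2 df), m0 = integral(E(f) df)."""
    m0 = np.trapz(spectrum, freq)
    return 2.0 / m0**2 * np.trapz(freq * spectrum**2, freq)
```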
Moreover, as the extreme sea state conditions generated by TCs can influence TC intensification and have an impact on air-sea interactions and on ocean circulation (Moon et al., 2008; Bruciaferri et al., 2021; Hughes et al., 2021), the second objective of our study is to analyze the Stokes drift estimated from the SWIM spectra.
Preliminary results show that the asymmetry of the wave field is present not only in the significant wave height but also in the dominant wavelength and in spectral shape parameters such as the directional spread. We find that the shape evolution of the ocean wave spectra depends on TC characteristics such as the ratio between the wind speed and the translation speed of the TC. Furthermore, we have analyzed the relationships between the ocean wave parameters and the wave development stage determined with wind data provided by the scatterometer (SCAT), also on board the CFOSAT mission.
References:
Aouf, L., Hauser, D., Chapron, B., et al. (2021). New directional wave satellite observations: Toward improved wave forecasts and climate description in Southern Ocean. Geophysical Research Letters, 48, doi:10.1029/2020GL091187
Esquivel-Trava, B., Ocampo-Torres, F. J., and Osuna, P., (2014). Spatial structure of directional wave spectra in hurricanes. Ocean Dynamics, doi:10.1007/s10236-014-0791-9
Goda, Y. (1976). On wave groups. Proceedings of the First Behaviour of Offshore Structures Conference (BOSS '76)
Hasselmann, S., Hasselmann, K., Allender, J.H., and Barnett, T.P. (1985). Computations and parameterizations of the nonlinear energy transfer in a gravity-wave spectrum, Part II: Parameterizations of the nonlinear energy transfer for application in wave models. J. Phys. Oc., 15, pp. 1378-1391
Hauser, D., Tourain, C., Hermozo, L., et al. (2021). New observations from the SWIM radar on board CFOSAT: instrument validation and ocean wave measurement assessment. IEEE Transactions on Geoscience and Remote Sensing, 59(1), pp. 5-26, doi:10.1109/TGRS.2020.2994372
Hu, K., and Chen, Q. (2011). Directional spectra of hurricane‐generated waves in the Gulf of Mexico, Geophys. Res. Lett., 38, L19608, doi:10.1029/2011GL049145
Hughes, C. J., Liu, G., Perrie, W., & Sheng, J. (2021). Impact of Langmuir turbulence, wave breaking, and Stokes drift on upper ocean dynamics under hurricane conditions. J. Geophys. Res. Oceans, 126, doi:10.1029/2021JC017388
Hwang, P. A., and Walsh, E. J. (2018). Propagation directions of ocean surface waves inside tropical cyclones. Journal of Physical Oceanography, pp. 1495-1511, doi:10.1175/JPO-D-18-0015.1
Kudryavtsev, V., Golubkin, P., and Chapron, B., (2015), A simplified wave enhancement criterion for moving extreme events, J. Geophys. Res. Oceans, 120, 7538–7558, doi:10.1002/2015JC011284.
Le Merle, E., Hauser, D., Peureux, C., et al. (2021). Directional and frequency spread of surface ocean waves from SWIM measurements, J. of Geoph. Research: Oceans, 126, doi:10.1029/2021JC017220
Moon, I.-J., Ginis, I., and Hara, Y., (2008). Impact of the reduced drag coefficient on ocean wave modeling under hurricane conditions, Mon. Weather Rev., 136, 1217–1223, doi:10.1175/2007MWR2131.1
Shi, Y., Du, Y., Chu, X., Tang, S., Shi, P., and Jiang, X. (2021). Asymmetric wave distributions of tropical cyclones based on CFOSAT observations. Journal of Geophysical Research: Oceans, 126, doi:10.1029/2020JC016829
Tamizi, A., and Young, I. R. (2020). The spatial distribution of ocean waves in tropical cyclones. Journal of Physical Oceanography, doi:10.1175/JPO-D-20-0020.1
Young I., (2006), Directional spectra of hurricane wind waves, J. of Geophys. Res, vol 111, C08020, doi:10.1029/2006JC003540
Marine Heat Waves (MHWs) are anomalously warm events that can significantly impact marine ecosystems and related services, and their mean intensity and duration have increased globally in the last century. Since the systematic study of MHWs is rather recent, our aim is to extend and improve the widely adopted definition and characterization of MHWs based on climatological thresholds. We focus on the detection of MHWs in the Mediterranean Sea, whose role as a global warming hotspot is well recognised, with the aim of evaluating to what extent the positive Mediterranean trend of the last decades affects MHW detection. We analyse two daily gap-free SST products, namely the ESA-CCI dataset on a ~5 km regular grid and the CMEMS Mediterranean SST dataset at ~4 km, both covering the period from 1982 to present. In addition to running the MHW detection on the original SST datasets, we also perform the analysis on detrended SST data, thus removing the effect of the trend from the detection and representing more coherently the variability of the anomalously warm events themselves. This preprocessing step is achieved with two different decomposition techniques, namely X11 seasonal decomposition and Singular Spectrum Analysis. We also introduce the definition of an "effective MHW" with the aim of recovering the effective SST value while keeping the detection unbiased by long-term variations. This approach can be useful to estimate potential biological impacts, for example to evaluate whether overall conditions become inhospitable for specific marine species.
A catalogue of the main MHWs occurring in the Mediterranean Sea over the period under analysis is presented, including only events impacting at least 15% of the basin and lasting a minimum of 30 days. This procedure highlights the large-scale, long-lasting MHWs which reasonably have the largest impacts. With this methodology, we identify about 20 main events and give an overview of their main characteristics.
Future work will include the analysis of subsurface temperature and other essential ocean variables (such as ocean heat content) in order to examine in more detail the processes which lead to the generation, sustenance, and decay of MHWs, together with the evaluation of compound biological events for the understanding of the overall impacts on the marine ecosystem.
The full abstract is provided in the attached file. Here is a brief summary.
Spaceborne observations of extreme atmospheric events at global scale, such as tropical cyclones (TCs), extratropical cyclones (ETCs), polar lows (PLs) and medicanes, are a key component of extreme event monitoring and of anticipating appropriate risk mitigation and emergency response at landfall. In particular, the Tropical Cyclone Programme (TCP) of the World Meteorological Organization (WMO) allows tropical cyclone forecasters to access various sources that provide conventional and specialized data/products, including those from Numerical Weather Prediction (NWP) and remote sensing observations, as well as forecasting tools on the development, motion, intensification and wind distribution of tropical cyclones.
Recent progress in SAR processing has shown the potential of C-band SAR data acquired in dual-polarization for estimating the ocean surface wind field at high resolution (1 km) [1, 2], including in extreme events such as major hurricanes (category 3 to 5) [3]. Comparisons with SFMR yield high correlation (R > 0.90), small bias (< 0.5 m/s) and low RMSE (< 5 m/s) [4]. However, to date there is no operational strategy to ensure SAR acquisitions over tropical cyclones with the existing C-band SARs (Sentinel-1, Radarsat-2 or Gaofen-3).
CYMS (Cyclone Monitoring Service based on Sentinel-1) is a 24-month ESA-funded project started in 2020. The main objective of CYMS is to scale up an operational service for extreme event monitoring, in view of its potential integration as part of a Copernicus Service. Since 2016, experts from CLS and IFREMER have developed a new strategy, in collaboration with the European Space Agency (ESA), to observe these extreme situations with the Sentinel-1 Copernicus mission and the Radarsat-2 satellite. In collaboration with space agencies and meteorological institutes, CLS and IFREMER use their forecast tracks to maximize the observations over such events and characterize the associated ocean surface wind field from the C-band Synthetic Aperture Radar. A demonstration of such a service was operated in 2020 to provide wind measurements in near-real-time and to guarantee users access to the full archive of SAR wind products. In parallel, a user survey has been undertaken to collect users' requirements. This paper provides the main outcomes of the CYMS project.
Polar lows are small but intense maritime cyclones that form poleward of the main baroclinic zone. They pose a real threat to maritime activity and coastal communities in the Arctic, North Atlantic and North Pacific. Since they develop very rapidly in remote polar regions, they are difficult to predict and monitor; early detection is therefore highly desirable.
In this work, we demonstrate that image examples found in the Sentinel-1 archive are sufficient for training a deep learning model capable of detecting polar-low-like cyclones with very high accuracy.
Firstly, we present the labeled dataset used for training the deep learning model. The dataset consists of 318 positive image examples of intense but small-scale maritime low-pressure systems and 1686 negative image examples representing a normal sea state. The positive examples are constructed by filtering meteorological reanalysis data, specifically the ERA-5 sea surface pressure field. Criteria on the strength and size of the sea surface pressure minima are used to form candidate regions, which are then used to search for coincident Sentinel-1 data.
Secondly, we demonstrate that a supervised deep neural network (an Xception-style convolutional neural network) trained on this dataset is capable of discriminating the images containing low-pressure systems from images of the normal sea state. The accuracy of the network is very high, with an F1 score of 0.94. Special consideration is given to the input image resolution, class imbalance and the relatively small size of the training dataset. In addition, we analyse the trained neural network with so-called explainable AI techniques, specifically class activation maps and integrated gradients.
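To make the architecture choice concrete, the sketch below shows a minimal Xception-style binary classifier (separable convolutions with residual shortcuts) in TensorFlow/Keras. The input size, number of blocks and training settings are illustrative assumptions; the authors' exact network is not specified here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_polar_low_classifier(input_shape=(512, 512, 1)):
    """Minimal Xception-style binary classifier: separable convolutions
    with residual shortcuts. Input: one Sentinel-1 image channel;
    output: probability that the scene contains a polar low."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    for filters in (64, 128, 256):
        shortcut = layers.Conv2D(filters, 1, strides=2, padding="same")(x)
        x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.SeparableConv2D(filters, 3, padding="same")(x)
        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
        x = layers.add([x, shortcut])  # Xception-style residual connection
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```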
The results show promise for using deep learning models on SAR data for operational detection of polar lows.
Alterations to phytoplankton biomass can influence the survival and fitness of organisms at higher trophic levels that provide economic support and services for maritime nations. The Red Sea, a tropical marine ecosystem, holds an extensive coral reef framework, supporting a highly biodiverse environment that needs to be monitored and sustained. The investigation of ecological indicators (such as phytoplankton biomass) enables a quantitative assessment of the status of marine ecosystems. However, the long-term response of phytoplankton to rising temperatures, and their response to extreme events (such as marine heatwaves (MHWs) and marine cold spells (MCSs)), needs to be further investigated. In this direction, an innovative, interdisciplinary approach was used, combining contemporary oceanographic datasets acquired via satellite remote sensing, in situ data (research cruises and Argo floats) and model outputs. We conducted a long-term analysis of satellite-derived Sea Surface Temperature (OSTIA) time series (1982-2018) to identify extreme warm and cold events and assess their spatiotemporal distribution. This global SST analysis provides daily averaged fields of SST at a 1/20° spatial resolution (~5 km), one of the highest spatial resolutions currently available. After identifying areas and periods within the Red Sea where such extreme events occurred, we investigated phytoplankton dynamics using satellite-derived chlorophyll-a concentration (Chl-a, a proxy of phytoplankton biomass). We used a regionally tuned reprocessing of the ESA OC-CCI product, at temporal resolutions ranging from daily to annual and a spatial resolution of 1 km (years 1997-2018). We observed an increase in MHW events and their duration over the study period, along with a decrease in MCS events. During most of the MHWs (MCSs), the spatial coverage and the magnitude of the Chl-a concentration were substantially decreased (increased). In situ datasets (research cruises and Argo floats) describing the concurrent biophysical changes, along with model outputs, further supported our results. This research is a step towards understanding the potential ecological impacts of MHWs and MCSs in a typical tropical marine ecosystem.
Marine Heatwaves (MHWs) have increased in frequency and intensity over the past years due to climate change and ocean warming. This phenomenon is characterized by anomalous sea surface temperature (SST) conditions, in which the SST stays above a climatological temperature threshold for at least five consecutive days. This anomaly pattern is associated with various impacts on marine ecosystems, such as an increase in coral bleaching events, mass mortality of organisms, and loss of benthic habitat. This study aims to assess the occurrence of MHWs on Brazilian coral reefs, tracking their spatiotemporal distribution and intensity and their potential relationship with historical coral bleaching events in the region. The NOAA Coral Reef Watch (CRW) daily global 5 km SST product, also known as CoralTemp, was extracted for 180 coral reef sites along the Brazilian continental margin for the period from 1985 to 2020. CoralTemp SST data were statistically compared against in situ buoy data available from the Brazilian National Buoy Program - PNBOIA (https://www.marinha.mil.br/chm/dados-do-goos-brasil/pnboia-mapa) in terms of the correlation coefficient (R = 0.99), root mean square error (RMSE = 0.55 °C) and bias (-0.06 °C). The CoralTemp SST time series were decomposed to obtain their trend, seasonal and residual components. We then performed a normalization procedure by subtracting the seasonal component from the original time series, removing the influence of periodic variability in order to detect only anomalous events. Daily positive SST anomalies were calculated as the deviation from the climatological mean at each coral reef site. MHWs were then identified on the basis of the 90th-percentile threshold, requiring a minimum duration of five consecutive days; intervals of two days or less between continuous events above the 90th percentile were considered part of the same MHW. After the MHW identification, we calculated the intensity, duration and cumulative intensity (°C days) of each event. These events were also classified according to their severity, considering the maximum intensity and the 90th-percentile deviation from the climatological mean. To verify the potential relationship with coral bleaching events, the identified MHWs were grouped into five marine ecoregions according to the location of the coral reef sites: Eastern Brazil (EST), Trindade and Martim Vaz islands (TMV), Northeastern Brazil (NST), Fernando de Noronha island and Atol das Rocas (FNA) and the Amazon region (AMZ). The identified MHWs were then matched with the bleaching events reported in the literature and/or registered in public databases. Preliminary results indicated that the average occurrence and intensity of MHWs at Brazilian coral reef sites varied greatly with marine ecoregion. On average, 78, 58, 70, 48 and 63 MHWs were spotted at the EST, TMV, NST, FNA and AMZ ecoregions, respectively. The intensity of MHWs was higher at coral reef sites farther from the equatorial region, reaching a maximum anomaly peak of 2.70 °C at TMV and 2.62 °C at EST. The cumulative intensities observed for TMV and EST were also > 10^4 °C days. Although northern coral reef sites also presented severe MHWs, their maximum anomaly peaks and cumulative intensities were < 2 °C and < 10^4 °C days. Also, 50% of the MHWs identified here were detected in the last 10 years, indicating a strong increase in the probability of occurrence of these events.
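The detection procedure described above can be condensed into a short sketch. The Python fragment below is illustrative only: it assumes a daily SST series with a pandas DatetimeIndex, uses a raw day-of-year 90th percentile as threshold, and omits the seasonal-component removal and severity classification performed in this study; it implements the core rules of a minimum five-day duration and the merging of events separated by two days or less.

```python
import numpy as np
import pandas as pd

def detect_mhws(sst: pd.Series, min_duration: int = 5, max_gap: int = 2):
    """Detect MHWs in a daily SST series (DatetimeIndex assumed).

    Days above the day-of-year 90th-percentile threshold are grouped into
    spells; spells separated by `max_gap` days or fewer are merged, and
    only events lasting at least `min_duration` days are kept.
    """
    doy = sst.index.dayofyear
    threshold = sst.groupby(doy).transform(lambda x: np.percentile(x, 90))
    hot = sst > threshold
    # Label contiguous runs of hot days.
    run_id = (hot != hot.shift()).cumsum()[hot]
    events = [(grp.index[0], grp.index[-1])
              for _, grp in sst[hot].groupby(run_id)]
    # Merge events separated by short gaps, then apply the duration rule.
    merged = []
    for start, end in events:
        if merged and (start - merged[-1][1]).days - 1 <= max_gap:
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return [(s, e) for s, e in merged if (e - s).days + 1 >= min_duration]
```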
Regarding coral bleaching, all the most intense and persistent MHW occurrences seem to be associated with at least one reported bleaching event per marine ecoregion, indicating that, although South Atlantic coral reefs may be more resistant to positive temperature anomalies, periods of extreme and persistent warming are threatening their health and conservation. Ultimately, our results show that the occurrence of MHWs on the Brazilian coral reefs is intensifying over the years, mainly in the southern communities. Further analyses are being conducted to better understand the impacts of less severe MHWs on these ecosystems.
Sea surface winds in hurricanes can be mapped from spaceborne synthetic aperture radar (SAR) images and/or the Soil Moisture Active Passive (SMAP) radiometer. The spatial resolution of SAR is high (~1 km), while that of SMAP is coarse. Therefore, methodologies are proposed to estimate hurricane structure from SAR and SMAP winds respectively. Due to the coarse resolution of the SMAP data, a symmetrical structure model is proposed to estimate the hurricane center location, the radius of maximum wind (RMW) and the intensity purely from ocean winds observed by the SMAP radiometer. Taking advantage of the high spatial resolution, the inflow angle asymmetry can be estimated from SAR in addition to the hurricane parameters above. The results are validated by comparing 28 SMAP hurricane wind fields to the airborne Stepped Frequency Microwave Radiometer (SFMR), aircraft measurements (f-deck) and best track (BT) data, and by a systematic analysis of 130 SAR images collected by RADARSAT-2 and Sentinel-1.
Spaceborne synthetic aperture radar (SAR) wide-swath quad-polarization (HH+HV+VH+VV) observations of Hurricane Epsilon are first presented and analyzed, using quasi-synchronous SAR imagery acquired by the C-band RADARSAT Constellation Mission (RCM) and RADARSAT-2. These measurements clearly show that the denoised HV- and VH-polarized normalized radar cross sections (NRCS) are highly consistent. The results also show that the NRCS at HV- and VH-polarization are less sensitive to incidence angle and wind direction than those at HH and VV for hurricane-force winds. For large incidence angles and high wind speeds, the sensitivity of the HH-polarized NRCS to wind speed is higher than that of VV. Moreover, the HH- and VV-polarized NRCS gradually lose their wind direction dependence at very high winds. The 3-minute time interval between the two SAR acquisitions also allows a direct comparison of the HV- and VH-polarized images to investigate the variations of high-resolution backscattering within the vortex and, thus, to possibly reveal the most dynamical areas. In this study, an asymmetrical dynamic is observed in the eye of Hurricane Epsilon. The rain impacts on the quad-polarized NRCS are also examined using collocated rain rates from the Global Precipitation Measurement (GPM) mission and wind speeds from the Soil Moisture Active Passive (SMAP) mission. Rain-induced NRCS attenuations are about 1.7 dB for HH and VV and 2.2 dB for HV and VH at a rain rate of 20 mm/h. These attenuations are possibly associated with rain-induced turbulence and atmospheric absorption. In view of preparing the next generation of dual-polarization scatterometers (SCA) on board MetOp-SG, this work shows that the analysis of collocated RCM and RADARSAT-2 hurricane observations is a unique opportunity to have synoptic and joint C-band measurements of the ocean surface in quad-polarization.
The US National Oceanic and Atmospheric Administration (NOAA) routinely operates “hurricane hunter” flights in the North Atlantic and North-East Pacific. Each flight is equipped with a Stepped Frequency Microwave Radiometer (SFMR) and GPS dropsondes for measuring surface winds, rain rate, sea surface temperature and vertical profiles of wind, pressure and temperature. Data from these in-situ measurements are freely available and are used for a satellite multi-sensor wind data inter-calibration procedure. This study is carried out in the framework of the ESA OCEAN+EXTREMES MAXSS project, and its goal is to obtain a consistent extreme wind data record for scatterometers and radiometers over the period 2010-2020.
Over the mentioned period, there is a varying constellation of satellite scatterometers and radiometers. In particular, the following systems have been inter-calibrated: the scatterometers ASCAT-A, ASCAT-B, ASCAT-C, OSCAT (on OceanSat-2), OSCAT-2 (on ScatSat-1), RapidScat (RSCAT), HY-2A and HY-2B, and the radiometers SMOS, SMAP, WindSat and AMSR-2. Collocated SFMR winds (calibrated with dropsondes) are used as the extreme-wind reference for inter-calibration purposes. Furthermore, this study also requires the use of storm best track data to optimize the satellite-SFMR collocations.
Storm Best Track (BT) data are used to collocate SFMR wind data with satellite data in storm-motion-centric coordinates and to generate a paired satellite-SFMR database. During the collocation process some thresholds are set, such as the maximum time difference (Δt) and spatial difference (Δx) between measurements. After collocation, a comprehensive analysis of the different flags to apply is undertaken. Rigorous flagging can substantially reduce or even remove extreme wind measurements, while permissive flagging can introduce noise and outlier measurements into the filtered dataset. Therefore, the maximum SFMR rain rate threshold, several quality flags from different sensors, and the maximum Δt and Δx are analyzed for SFMR, C-band scatterometers and Ku-band scatterometers. The results show that an SFMR rain threshold of 10 mm/h filters out most of the measurements around the tropical cyclone eye, and thus the most extreme winds. A rain rate of 20 mm/h is found to be the optimal threshold, since higher rain rate values only add a few extra measurements at the cost of higher noise. A Δt of 1 hour is found to be too strict, resulting in a much reduced number of total collocations. Using storm-motion-centric coordinates, one assumes that the structure of the hurricane with respect to its direction of motion does not change significantly within a certain period of time. It is found that, in general, Δt < 3 h allows three times as many collocations as Δt < 1 h, while the collocation error (noise) does not increase substantially. A Δx threshold within the grid resolution of the satellite data is set so that only the closest collocations in space are used. The SFMR quality flag is found to be strongly correlated with the rain rate and is therefore not used. A similar strategy is used to select those scatterometer and radiometer quality flags that optimize the trade-off between preserving extreme wind observations and filtering poor-quality winds. In order to fairly correlate high-resolution SFMR measurements with lower-resolution satellite measurements, SFMR acquisitions are up-scaled to the satellite spatial resolution.
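The thresholds discussed above translate into a simple filtering step; the sketch below applies them to a paired SFMR-satellite table, with column names and the default spatial cutoff chosen here purely for illustration (in practice the spatial cutoff should match the satellite grid resolution).

```python
import pandas as pd

def filter_collocations(pairs: pd.DataFrame,
                        max_dt_hours=3.0, max_dx_km=25.0, max_rain=20.0):
    """Keep only SFMR-satellite pairs within the time, space and rain-rate
    thresholds discussed above. Columns `dt_hours`, `dx_km` and
    `rain_mm_hr` are illustrative names for this sketch."""
    keep = (pairs["dt_hours"].abs() < max_dt_hours) \
         & (pairs["dx_km"] < max_dx_km) \
         & (pairs["rain_mm_hr"] < max_rain)
    return pairs[keep]
```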
The filtered paired dataset for each remote sensor is used to derive the re-calibration (regression) algorithm for winds above approximately 10 m/s. For stable, long-term sensors like ASCAT-A, SFMR (dropsonde-based) calibration differences are shown for different periods of time. Such differences, though, are mainly present at relatively low wind speed regimes, which are not used for recalibration purposes. The recalibrated datasets from the remote sensors are then compared with SFMR. The results show a low wind speed bias (< 1 m/s) and a high (Pearson) correlation (> 0.87). A consistent satellite-derived extreme wind dataset is therefore produced for the period 2010-2020. Note that for those satellite systems with a relatively small number of SFMR collocations, e.g., RSCAT, HY-2A and HY-2B, the recalibration curves present a larger uncertainty and should be revisited in the future provided that the number of collocations increases.
Finally, in the framework of the MAXSS project, triple collocation analyses (SFMR-satellite-NWP) will be performed to characterize the errors of the different satellite-derived extreme wind sources. For this purpose, spatial variance analyses will be carried out to estimate the representativeness errors, i.e., the common true variance resolved by SFMR and satellite winds but not by NWP model winds. The preliminary results of this error analysis will be presented at the conference.
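The announced triple collocation analysis rests on the classical error model with a common truth and mutually uncorrelated errors; under those assumptions (and after inter-calibration of the three systems), the error variances follow from the pairwise covariances, as sketched below. The representativeness correction mentioned above would be applied to the NWP term separately.

```python
import numpy as np

def triple_collocation_errors(a, b, c):
    """Classical triple-collocation error variances for three collocated,
    inter-calibrated wind data sets (e.g. SFMR, satellite, NWP), assuming
    a common truth and mutually uncorrelated errors."""
    cov = np.cov(np.vstack([a, b, c]))
    var_a = cov[0, 0] - cov[0, 1] * cov[0, 2] / cov[1, 2]
    var_b = cov[1, 1] - cov[0, 1] * cov[1, 2] / cov[0, 2]
    var_c = cov[2, 2] - cov[0, 2] * cov[1, 2] / cov[0, 1]
    return var_a, var_b, var_c
```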
Marine heatwaves (MHWs) are prolonged, discrete, anomalously warm water events that last for more than five successive days and can be described by their duration, intensity, rate of evolution, and spatial coverage. These episodes of large-scale anomalously high ocean temperatures can have many impacts on marine ecosystems and major implications for fisheries as well. As a result of anthropogenic climate change, MHWs have been observed in many parts of the world's oceans, and their intensity and frequency are expected to increase in the future. This work investigates the 2019 MHWs that occurred in the Western Mediterranean Sea (WMED) between June and December, the net air-sea heat exchange anomaly associated with these MHW events, their possible atmospheric drivers, and their effect on the chlorophyll-a concentration. The results revealed that six MHW events took place in the WMED during the study period; their maximum intensity ranged from 2.25 °C to 6.42 °C, and their duration fluctuated between 5 and 20 days. In terms of severity, three of the events were categorized as strong events, in which the SST exceeded twice the threshold, and the other events were classified as moderate. Moreover, the fluctuations of the net heat flux anomalies during the study period were linked to the occurrence of the MHWs: the high SST anomalies of the MHW events were generally accompanied by a positive (gain) heat flux anomaly but, during one of the events, the high SST anomaly induced a negative (loss) heat flux anomaly. In addition, a combination of atmospheric conditions such as high air temperature (> 25 °C), high MSLP (> 1014 hPa), and low to no wind shear led to the formation of the MHWs in the WMED basin. The relation between the high SST anomalies associated with the MHW events and the chlorophyll-a concentration has also been tested. The study found that the fluctuations of the chlorophyll-a concentration were related to the magnitude of the MHW intensity, and a moderate but significant inverse correlation was found between them.
The High altitude Aerosols, Water vapour and Clouds (HAWC) suite has been proposed as a Canadian contribution to NASA’s Atmosphere Observing System (AOS). Upper tropospheric and stratospheric aerosols and clouds are a critical component of Earth’s radiative budget, yet their interactions and feedbacks remain an important driver of uncertainty in climate projections. One component of HAWC, the Aerosol Limb Imager (ALI), will provide two-dimensional hyperspectral images of the Earth's limb from 600 to 1500 nm, measuring vertical profiles of aerosol and thin clouds from the mid-troposphere through the stratosphere. Along with two other instruments, the Spatial Heterodyne Observations of Water (SHOW) and Thin Ice Cloud and Far InfraRed Emissions (TICFIRE), ALI will provide a full suite of complementary measurements of clouds, aerosols, water vapour and their interactions in the upper troposphere and stratosphere.
This work examines the ALI level 2 retrieval products using a full end-to-end simulator. A software model is developed with realistic optics, imaging sensors and electronic properties to accurately estimate noise and imaging characteristics. Simulator input is generated using two atmospheric scenes: first, a two-dimensional orbital curtain produced from CALIPSO and OMPS-LP measurements, used to investigate aerosols, high clouds and their interactions under realistic conditions; and second, a high-resolution, model-generated, three-dimensional scene used to investigate imaging capabilities and cross-track information. With these scenes, aerosol retrievals in the presence of cirrus clouds are explored, as is the ability to discriminate aerosol types. Additionally, the particle size information provided by the spectral and polarization measurements is investigated.
Climate change is a global condition with local impacts that are already driving collapse trajectories in terrestrial and marine ecosystems worldwide. Coastal and nearshore areas are amongst the most productive and diverse ecosystems on Earth, supporting important societal services while being under growing anthropogenic pressure. A further understanding of climate change impacts in the highly dynamic 3D coastal ocean is of utmost importance and can be achieved only if synoptic Earth Observations are available at high resolution and over multiple spatial and temporal scales.
Due to its geography and semi-enclosed character, the Mediterranean Sea is a climate change hot spot. Satellite SST observations have revealed the rapid warming of its surface water, associated with a dramatic increase in the frequency and magnitude of large-scale marine heatwaves (MHWs, defined as periods of extremely warm temperatures at both the surface and subsurface), with negative impacts on physical habitats, biogeochemical fluxes and marine life.
The northwestern coastlines, which are much affected by MHWs despite being one of the coldest Mediterranean eco-regions, exhibit episodic cooling and warming events of short duration but large amplitude. In fact, coastal dynamics is extremely responsive to intermittent wind events, which trigger numerous sporadic upwelling and downwelling source points along the coast that are major drivers of the fine-scale thermal variability.
Here we analyze the physical drivers governing the intensity, longevity and vertical manifestation of summertime MHWs in the nearshore marine habitats of the NW Mediterranean Sea over the last decades. We conduct a multi-sensor analysis based on multi-decadal high-resolution satellite Sea Surface Temperature, decadal to multi-decadal high-frequency in-situ temperature time series from the T-MEDNet network, and reanalysis atmospheric products (sea surface winds, air-sea fluxes, etc.). By analyzing air-sea heat and momentum fluxes, stratification and coastal wind-driven upwelling/downwelling indices in comparison with MHW statistics, we disentangle the major drivers of the most conspicuous regional MHWs, some of which have already been associated with increased mortality of benthic organisms. We document multiple imbricated scales of influence and further explore the interactions between MHWs and the most active upwelling and downwelling cells. We show that the contrasting vertical dynamics of MHWs, which exhibit different local signatures at the surface and the subsurface, are due to the prominence of wind-induced coastal processes. Our results shed light on the physical processes that modulate the local expression of MHWs and their deleterious ecological impacts. Our approach, integrating several Earth Observation datasets, contributes to an improved evaluation of coastal vulnerability and provides scientific knowledge to best design adaptation measures (e.g. marine protected areas) for marine conservation at local to regional scales.
Atmospheric corrections introduce uncertainties in bottom-of-atmosphere Ocean Colour (OC) products derived from satellite observations. In this study, we analyse the uncertainty budget of the SeaDAS atmospheric correction algorithm. Atmospheric correction algorithms depend on ancillary variables (such as meteorological properties and column densities of gases), yet the uncertainties in these variables have not previously been studied in detail. To analyse them for the first time, we quantify the uncertainties using the variance in ERA5 reanalysis data and propagate them to remote sensing reflectances with a Monte Carlo method. On an example data set, wind speed and relative humidity are found to be the main contributors (among the ancillary parameters) to the remote sensing reflectance uncertainties.
Just like any other physical measurement, Ocean Colour (OC) products require estimates of their uncertainties in order to be meaningful. Without this estimate, there is no way to assess their quality or understand how far the true value may be from the measured value. Radiometric uncertainty lower than 5% (k=2) in the blue and green spectral regions along with 0.5% (k=2) decadal stability has been listed by the Global Climate Observing System (GCOS) as requirements for water-leaving radiance as an Essential Climate Variable.
There are various sources of uncertainty in OC products. Two of the main contributions are the radiometric properties and stability of the sensor and the uncertainties in the atmospheric correction, the process that determines the water-leaving radiance (or reflectance) from the top-of-atmosphere (TOA) radiances. The uncertainty of the atmospheric correction depends on the algorithm used as well as on the modelled distribution of gases and aerosols in the atmosphere.
There are many software packages which successfully implement atmospheric correction algorithms. To perform a metrological analysis of the algorithm, we opted for an open-source implementation: the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Data Analysis System, SeaDAS, developed by NASA. SeaDAS uses reanalysis datasets to obtain the required ancillary data (mostly information about meteorological conditions and ozone concentration). Reanalysis datasets, such as those produced by the National Centers for Environmental Prediction (NCEP) or by the European Centre for Medium-Range Weather Forecasts (ECMWF), are created using the data assimilation schemes of numerical weather prediction (NWP) models, which assimilate observations over the entire reanalysis period.
In this study, we analyse the uncertainty budget of the SeaDAS “l2gen” atmospheric correction algorithm. We follow a metrological approach, established in the Fidelity and Uncertainty in Climate Data Records from Earth Observations (FIDUCEO) project, in which the error sources are identified in an uncertainty tree diagram and briefly discussed. The diagram shows that the atmospheric correction algorithm depends on multiple ancillary variables: surface wind speed (WS), sea level pressure (SLP), precipitable water vapour (PW), relative humidity (RH), nitrogen dioxide concentration (NO2) and total ozone concentration (O3). Uncertainties in ancillary data will thus impact the atmospheric correction, yet this topic has been little addressed so far in the OC community.
In order to analyse these uncertainties, we use the spread in the ensemble of the ECMWF Reanalysis 5th Generation (ERA5) data as an estimate of the uncertainty in the ancillary data, which is then propagated to uncertainties in remote sensing reflectances using a Monte Carlo (MC) approach and the SeaDAS atmospheric correction algorithm. A number of example SeaWiFS scenes were selected for the application of this approach. In addition to varying all the ancillary parameters at once (in the MC approach), we also investigate varying only one of the ancillary parameters at a time (e.g. wind speed is varied according to the ERA5 ensemble, while the remaining parameters are kept at the reanalysis mean).
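As a schematic of this propagation step, the following Python sketch draws perturbed ancillary values from ERA5 ensemble statistics and collects the spread of the resulting reflectances; the atmospheric-correction call is a toy stand-in (the study runs the actual SeaDAS l2gen processor), and all names here are hypothetical.

```python
import numpy as np

def toy_atmospheric_correction(rho_toa, ancillary):
    """Toy stand-in for the SeaDAS l2gen call: a made-up linear response of
    Rrs to wind speed and relative humidity, for illustration only."""
    return rho_toa - 1e-4 * ancillary["wind_speed"] - 5e-5 * ancillary["rel_humidity"]

def propagate_ancillary_uncertainty(rho_toa, era5_ensemble, n_draws=500, seed=0):
    """Monte Carlo propagation of ERA5 ensemble spread to Rrs.

    era5_ensemble: dict mapping an ancillary name (e.g. 'wind_speed') to an
    array of ensemble member values for the scene.
    """
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_draws):
        # Draw each ancillary variable from a Gaussian with the ensemble
        # mean and standard deviation (a simplifying assumption).
        anc = {name: rng.normal(np.mean(m), np.std(m))
               for name, m in era5_ensemble.items()}
        draws.append(toy_atmospheric_correction(rho_toa, anc))
    draws = np.asarray(draws)
    # The standard deviation over draws estimates the Rrs uncertainty.
    return draws.mean(axis=0), draws.std(axis=0)

# One-at-a-time variant: perturb a single parameter and keep the other
# entries of the dict fixed at their ensemble means, then repeat the call.
```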
Using the example scenes, we show that WS and RH (through its interaction with aerosol model selection) are the largest contributors to the uncertainty in the atmospheric correction, together adding typically about 1% uncertainty at 412 nm, increasing to 2% at 555 nm and above. The uncertainties in the remote sensing reflectance due to WS and SLP typically show a strong spatial correlation with the uncertainties in WS and SLP, respectively; the other ancillary variables show no such strong correlation. The uncertainty in remote sensing reflectances introduced by varying RH is complex. In conclusion, we recommend that uncertainties in ancillary variables (especially those in WS and RH) be included when determining uncertainties in ocean colour remote sensing reflectances.
Remote retrieval of near-surface chlorophyll-a (Chla) concentration in small inland waters is challenging due to substantial optical interference from various water constituents and uncertainties in the atmospheric correction process. Although various algorithms have been developed or adapted to estimate Chla from moderate-resolution terrestrial missions (~ 10 – 60 m), there remains a need for robust algorithms to retrieve Chla in small inland waters. Here, we train and test a support vector regression (SVR) model, which takes satellite-derived remote-sensing reflectance spectra (R_rs^δ) as input for Chla retrieval in a small eutrophic lake, Buffalo Pound Lake (BPL), SK, Canada. The proposed model leverages the visible and near-infrared bands of Sentinel-2 and Landsat-8 images (400 – 800 nm) and relies on a multi-year dataset of in situ Chla (N < 200) for training. Following validation against in situ Chla measurements over seven ice-free seasons (2014-2020), the SVR model retrieved Chla with a 35% error, outperforming both locally tuned, R_rs^δ-fed empirical models (Normalized Difference Chlorophyll Index, 2- and 3-band models, and OC3) and a recently developed Mixture Density Network (MDN) by 15% – 65%, while exhibiting performance comparable to a locally trained MDN (LMDN). Moreover, a stratified analysis revealed the superiority of SVR in two distinct optical water types (OWTs) across BPL. SVR also showed robust performance relative to different atmospheric correction (AC) processors (iCOR and ACOLITE) and radiometric products (Rayleigh-corrected reflectance and top-of-atmosphere reflectance). Chla maps for BPL using different combinations of Chla retrieval models and AC processors showed minimal noise and the best reconstruction of Chla profiles for the coupled SVR-iCOR processing. In addition, this coupled processing satisfactorily retrieved time series of Chla measurements, particularly for Chla values > 100 mg m-3, unlike the other approaches. The superior transferability of the SVR model between the two OWTs in BPL suggests that such models may perform well in other prairie eutrophic lakes. In the absence of accurate atmospheric corrections, locally trained machine-learning models (SVR, LMDN) may provide more reliable Chla estimates in small waterbodies, particularly when used to monitor harmful algal bloom events.
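A minimal sketch of an SVR pipeline of the kind described above, using scikit-learn; the grid values, log-transform of Chla and train/test split are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def train_chla_svr(X, y):
    """X: satellite-derived Rrs spectra (n_samples x n_bands, 400-800 nm);
    y: matched in situ Chla (mg m-3). Returns the fitted model and the
    median absolute percentage error on a held-out set."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, np.log10(y), test_size=0.3, random_state=0)  # log-space target
    pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    grid = {"svr__C": [1, 10, 100],
            "svr__gamma": ["scale", 0.1, 1.0],
            "svr__epsilon": [0.01, 0.1]}
    search = GridSearchCV(pipe, grid, cv=5)
    search.fit(X_tr, y_tr)
    chla_pred = 10 ** search.predict(X_te)              # back to mg m-3
    chla_true = 10 ** y_te
    mape = np.median(np.abs(chla_pred - chla_true) / chla_true) * 100
    return search.best_estimator_, mape
```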
TechWorks Marine has recently completed the European Space Agency funded project ‘Coast Ocean Assessment using Earth Observation (CoastEO)’, focused on the development of a low cost buoy platform (MiniBuoy) and customised Earth Observation (EO) processing chains to provide in-situ validated EO products for environmental water quality monitoring to a broad range of coastal users. Funded as part of ESA's InCubed programme, CoastEO provides a seamless coastal water data collection and information platform which facilitates easy access to in-situ water measurements and associated validated EO products.
A small and lightweight buoy capable of supporting up to three reliable low-cost scientific instruments was developed (the MiniBuoy). The ergonomic design enables the buoy to be deployed by one or two people from a small work boat or RIB, resulting in significant cost savings. The company's custom, web-based data analytics portal (CoastEye), which provides access to temporal and spatial data from a variety of sources, was optimised to ingest and display MiniBuoy data. A communications protocol between the MiniBuoy and the CoastEye platform was developed, and customised EO processing chains were developed to derive turbidity from water-leaving reflectances for Sentinel-2 and Sentinel-3 imagery.
In-situ turbidity measurements were taken by deployment of the MiniBuoy platform in the waters of Dublin Bay between May and July 2021. Custom scripts were developed to track satellite overpasses and aid in deployment decision-making. Turbidity measurements from four larger buoys deployed in Dublin Bay from 2017 to 2021 were used to augment the MiniBuoy data and act as a baseline for comparison between buoy types. In-situ turbidity measurements were compared with satellite-derived turbidity measurements from Sentinel-2 and Sentinel-3 under clear sky conditions.
Satellite-derived turbidity data show good correlation (R = 0.9401) with coincident in-situ turbidity measurements from all buoys (four large buoys and the MiniBuoy). Omitting the MiniBuoy, the four large buoys show a slightly weaker correlation coefficient of R = 0.9379. Omitting the four large buoys, the MiniBuoy has the highest correlation coefficient at R = 0.9529. These results suggest that a smaller buoy can potentially produce more accurate turbidity measurements than larger buoys. This may be because larger buoys disturb the surrounding waters to a greater degree, which may affect measurements. Additionally, the sensor on the MiniBuoy sits closer to the surface, giving a reading closer to what the satellite measures. It was noted that satellite-derived turbidity is consistently underestimated compared with in-situ measurements, indicating that more work could be done to fine-tune the atmospheric correction algorithm.
This work was carried out under ESA contract 4000129071/19/I-NS
A relatively easy-to-measure optical property of aquatic environments is the light absorption coefficient of all material in the water, while information about the contribution of specific components (e.g., phytoplankton and inorganic suspended matter) is critical for understanding biogeochemical and ecological aspects of the water body.
Decomposing the measured non-water absorption spectrum, a_{nw}(\lambda), into phytoplankton, a_{ph}(\lambda), non-algal particle, a_d(\lambda), and colored dissolved organic matter, a_g(\lambda), components is important for better estimates of phytoplankton biomass in terms of Chl concentration, relevant in the context of dissolved and particulate organic carbon, and crucial for the possible differentiation of diverse phytoplankton groups.
The stacked-constraints approach, proposed by Zheng et al. (2013, 2015), relies on weakly restrictive assumptions that absorption spectrum parameters, for example band ratios of a_{ph}(\lambda), must be contained within predefined inequality constraints, which relaxes the common assumptions of previous studies that used a single exponential fit or fixed a_{ph}(\lambda) shapes. However, its primary limitation is that the magnitude and spectral shape of the components are highly variable across the world's oceans, which can make the model constraints inapplicable outside the region for which they were defined. In this presentation, we introduce optical water type classification to improve the partitioning model (referred to as GSCMf) based on Zheng et al. (2015). Optical classification is introduced in GSCMf to 1) parameterize the inequality constraints for each a_{nw}(\lambda) type and 2) improve data availability by reconstructing suspicious a_{nw}(\lambda) using memberships and centroids.
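The stacked-constraints idea can be sketched as follows, under simplifying assumptions: a single exponential candidate shape for the combined non-phytoplankton absorption, and hypothetical band-ratio bounds; in GSCMf the constraints would instead be parameterized per optical water type.

```python
import numpy as np

# Hypothetical inequality constraints on a_ph band ratios; the real bounds
# are derived from large in situ data sets and differ per water type.
CONSTRAINTS = {(412, 443): (0.7, 1.0),
               (510, 443): (0.3, 0.9)}

def partition_anw(wl, a_nw,
                  slopes=np.linspace(0.008, 0.018, 21),
                  fractions=np.linspace(0.0, 0.8, 41)):
    """Stacked-constraints-style partitioning of a_nw(wl) into a_ph and a_dg."""
    i440 = np.argmin(np.abs(wl - 440.0))
    idx = {b: np.argmin(np.abs(wl - b)) for pair in CONSTRAINTS for b in pair}
    feasible = []
    for s in slopes:
        for f in fractions:
            # Candidate exponential for combined detrital + CDOM absorption.
            a_dg = f * a_nw[i440] * np.exp(-s * (wl - 440.0))
            a_ph = a_nw - a_dg
            if np.any(a_ph < 0):
                continue
            ok = all(lo <= a_ph[idx[b1]] / a_ph[idx[b2]] <= hi
                     for (b1, b2), (lo, hi) in CONSTRAINTS.items())
            if ok:
                feasible.append(a_ph)
    # The average over all feasible candidates is the partitioned estimate.
    return np.mean(feasible, axis=0) if feasible else None
```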
Based on the global in situ ocean color data set NOMAD, GSCMf shows encouraging estimates of a_{ph}(\lambda) over the full wavelength region when compared to previous models. For example, the median relative percentage difference for a_{ph}(443) derived by GSCMf (0.03%) is better than those by QAA (-2.86%) and Zheng et al. (2015) (5.67%). Based on the a_{nw}(\lambda) product in OC-CCI data, the performance over satellite images is evaluated over several regions, like German Bight, Chesapeake Bay, and the Sargasso Sea.
Several studies have demonstrated that it is possible to retrieve phytoplankton biogeochemical and size classes from satellite imagery. However, these results were obtained in clear oceanic waters. Many coastal and inland waters are optically complex, and retrieving phytoplankton functional types in such waters is extremely difficult or even impossible. For example, the whole Baltic Sea belongs to such complex waters. It has been demonstrated in the case of the Baltic Sea that satellite data allow at least cyanobacteria-dominated waters to be distinguished from other situations, although the cyanobacterial biomass has to be relatively high and it is preferable that the sensors used are hyperspectral (e.g. Hyperion). There has been a nearly two-decade pause in the availability of hyperspectral satellite data since the launch of Hyperion. However, the launch of PRISMA and planned missions like PACE, GLIMR, SBG and CHIME underline the need for spectroscopy of coastal and inland waters. Moreover, the spectral configuration of OLCI bands and the high spatial resolution of Sentinel-2 may also provide some chance to recognise certain phytoplankton groups based on their optical signatures.
We have created a Calibration and Algorithm Development Database (CADD) for the Baltic Sea using ESA and other funding. The CADD contains data from more than 200 sampling stations. For each station we have remote sensing reflectance, water-leaving reflectance (i.e. the glint-free spectrum), IOPs (hyperspectral absorption and attenuation, light backscattering at six wavelengths, and the volume scattering function at three angles and three wavelengths), vertical profiles of CDOM, Chl-a and phycocyanin measured with fluorometers, as well as CTD data. Water samples from each station were analysed for Chl-a, CDOM, TSM, SPIM and SPOM. In many cases we also measured DOC as well as phytoplankton and non-algal-particle absorption spectra. Most of the samples also have information about phytoplankton species composition collected with microscopy, and additional information about the number, size, fluorescence and imagery of particles measured with a flow cytometer.
This database allows us to study whether the phytoplankton community structure, or at least the dominant algal group, can be estimated from remote sensing data. However, the more data we collect, the more complicated the picture gets. About 15 years ago we were convinced that the spectral signatures of cyanobacteria and other phytoplankton are sufficiently different to allow separation of these two groups, at least in the case of high biomass. Now we have seen that the reflectance of diatoms may be identical to that of cyanobacteria. This should not be a big problem for remote sensing, as diatoms dominate the spring bloom and cyanobacteria the summer bloom. However, there are species of cyanobacteria that can bloom in cold water just after the ice melts, i.e. in the spring bloom period. In summer there are also many cases where there is a dense cyanobacterial bloom in the water, but no spectral features typical of cyanobacteria are visible in the reflectance spectra. Moreover, we have observed that an increase in cyanobacterial biomass may actually increase water transparency. The reason is the formation of colonies. In the early stages of a bloom, single cells dominate in the water, making it murky. In the later stages biomass increases, but most of the cyanobacteria are in colonies (filaments or balls). The water becomes clear between the colonies, and water transparency may increase from 2-3 m to 6-8 m. Thus, besides the fact that high concentrations of CDOM and TSM may hide the spectral features that potentially allow certain phytoplankton groups to be recognised, there is also a nonlinear relationship between the biomass and its optical signal. Consequently, we are still at a relatively early stage in determining which phytoplankton groups can be recognised based on their optical signatures, what the extra conditions for doing so are (e.g. season, minimum needed biomass, etc.), and what kind of sensors satisfy the minimum need to detect the required spectral features.
Using a global in-situ database, we retune a three-component model that estimates the chlorophyll concentration of three phytoplankton groups, partitioned according to size (pico-, nano- and micro-phytoplankton). We observe striking relationships between the parameters of the model and sea surface temperature, in support of earlier regional studies. The patterns were found to be consistent between two independent in-situ techniques for estimating phytoplankton size structure. We use the method to investigate changes in model parameters over the satellite thermal record, in view of how the marine ecosystem is responding to climate change. We also review the approach in the context of developing robust algorithms for the detection of climate trends in phytoplankton biomass over the satellite ocean colour record.
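For reference, the three-component model (after Brewin et al., 2010) can be written compactly as below; the parameter values are illustrative published defaults, not the retuned, temperature-dependent values that are the subject of this study.

```python
import numpy as np

def size_class_chl(chl, c_pn_max=1.057, s_pn=0.851, c_p_max=0.107, s_p=6.801):
    """Partition total chlorophyll (mg m-3) into pico-, nano- and
    micro-phytoplankton following the three-component formulation:
    the combined pico+nano contribution saturates with increasing total
    chlorophyll, and micro-phytoplankton take up the remainder."""
    chl = np.asarray(chl, dtype=float)
    c_pn = c_pn_max * (1.0 - np.exp(-s_pn * chl))  # pico + nano
    c_p = c_p_max * (1.0 - np.exp(-s_p * chl))     # pico
    return c_p, c_pn - c_p, chl - c_pn             # pico, nano, micro
```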
Remote sensing of ocean color has changed our vision of the distribution of phytoplankton and ocean carbon over the past forty years. These space-borne observations provide a synoptic view of radiometric, bio-optical and biogeochemical parameters, continuously for the past twenty-plus years, at high spatial (hundreds to thousands of meters) and temporal (~2 days) resolutions. However, these observations are limited to clear-sky, daylight conditions with high Sun elevation angles, and they are exponentially weighted toward the ocean surface. Furthermore, they require a processing step to remove the contribution of the atmosphere and the air-sea interface.
Active remote sensing can overcome these limitations of passive space-borne ocean color observations. One such technique is lidar (Light Detection and Ranging). Despite several demonstrations of oceanic applications of ship-, air- and space-borne lidars, this tool has not received significant attention from the ocean color remote sensing community. Three space-borne lidars (CALIOP, ATLAS and ALADIN) are currently in space, and several studies have shown their potential for retrieving key oceanic optical parameters, i.e. the particulate backscattering coefficient bbp and the diffuse attenuation coefficient Kd. CALIOP and ATLAS are green lidars (532 nm), while ALADIN is a UV lidar (355 nm) using the high-spectral-resolution technique. While ocean processing algorithms were developed for CALIOP and ATLAS, there are no ocean products for ALADIN, as it aims at estimating atmospheric winds. Few validations have been made of the oceanic products from CALIOP and ATLAS.
During the CADDIWA campaign in Cabo Verde in September 2022, in-situ optical (remote sensing reflectance, profiles of the diffuse attenuation coefficient), bio-optical (absorption and backscattering coefficients) and biogeochemical (chlorophyll-a concentration and particulate organic carbon at different depths) parameters were collected during six one-day cruises. These in-situ measurements were coordinated with overpasses of CALIOP (one day), ATLAS (one day), ALADIN (four days) and the French airborne lidar LNG (five days). Validation of the space-borne lidar oceanic products will be shown. Moreover, the capabilities of LNG to study the ocean will also be investigated and presented; this is the first study aiming to evaluate the capability of LNG for studying ocean color parameters, as it was developed specifically for aerosol studies.
In addition to their ecological and environmental value, coastal areas are of major economic and social importance. As a result, they are among the marine environments most impacted by anthropogenic pressures, such as high population density and intensive human activity. These pressures produce an excess of nutrients delivered to coastal waters, triggering phytoplankton growth and eutrophication and reducing water quality. Several policies have been enacted in Europe with the aim of restoring and protecting waters, such as the Water Framework Directive (WFD; 2000/60/EC) and the Marine Strategy Framework Directive (MSFD; 2008/56/EC). Traditionally, water quality assessments were based on in situ chlorophyll-a (Chl-a) concentration data. Ocean colour satellites (such as Sentinel-3) produce water quality maps with higher spatial and temporal resolution (300 m) than those derived from in situ data, improving the assessments required by the above Directives. However, this information is only suitable for open waters and not for coastal ones, which require even higher spatial resolution due to their greater variability. Nowadays, the new generation of satellites, such as Sentinel-2 of the Copernicus Programme with its MSI sensor, produce ocean colour maps with very high spatial resolution (10-60 m), allowing for accurate assessment of the water quality of coastal waters (Caballero, 2020; Poddar, 2019). This is especially relevant in the Mediterranean Sea, where the low tidal range (20-40 cm) makes it difficult to dilute continental inflows; as a result, eutrophication is one of the greatest threats to Mediterranean coastal waters.
The goal of the MARS project is to create Chl-a concentration maps for coastal waters, particularly the Mediterranean Sea, using satellite data.
To achieve this goal, the Catalan coast was chosen as a case study for two reasons: i) this coast is representative of the NW Mediterranean, and ii) the National Catalan Coastal Water Monitoring Program is open to the public and includes in situ Chl-a concentration data dating back to 1990 (N > 250,000). This time series was collected from 268 stations, sampled quarterly or monthly, located along 400 km of coast at the surface and at different distances from the shore. Previous studies based on this database defined three coastal areas (Flo, 2011): Coastal Inshore Waters (CIW; 0–200 m from the shore), Coastal Nearshore Waters (CNW; 200-1500 m from the shore) and Coastal Offshore Waters (COW; > 1500 m from the shore). These areas differ from one another in the mean values of the measured oceanographic parameters and their variability, as well as in seasonal dynamics. CIW show higher values and variability than the outermost coastal waters and do not follow the natural pattern described for surface open Mediterranean waters. The underlying reason for CIW's singularity is their proximity to the continent: these waters directly receive nutrient-rich freshwater inflows from land, which vary in quantity and nature along the coast and throughout the year. These continental influences trigger primary production; as a result, CIW have a higher mean Chl-a concentration (2.42 μg/L) than CNW (0.77 μg/L) or COW (0.37 μg/L) and, unlike the outermost waters, do not experience nutrient scarcity during summer. Therefore, in the Mediterranean Sea the coastal waters at greatest risk of eutrophication are the CIW.
The main issues that need to be addressed within this project are related to the inversion algorithm to derive Chl-a, which needs to be adjusted for the region of the case study. It must account for a transition from clear to turbid waters, where phytoplanktonic cells may mix with CDOM or other detritus in the water column. In addition, the algorithm should also be corrected for sea bottom interferences, particularly seagrasses and macroalgae. Because all these parameters produce reflectances that overlap with those of Chl-a concentration, erroneous retrievals are possible.
Accordingly, the project was divided into four work packages (WP). WP1) Establishment of a standard procedure for the acquisition and processing of satellite data (Sentinel-2 from 2015 to 2018), including the testing of some already existing algorithms for atmospheric (ACOLITE, Sen2Cor) and sunglint (ACOLITE, Hedley) corrections as well as for Chl-a concentrations (OC2, OC3, S2-MCI, NDCI, Gons, C2RCC, MOSES, MISHRA, POLYMER). WP2) Comparison of in situ and satellite Chl-a data to identify spatial areas of discrepancy caused by satellite concentration retrieval errors. Then, identification of which parameters cause the errors. WP3) Acquisition of collocated and simultaneous in situ and satellite data of Chl-a from previously identified areas. Next, training and modification of the standard processing algorithm with the new in situ data. Finally, developing a new tailored algorithm suitable for coastal waters. WP4) Automation of the tailored procedure and creation of a map viewer to freely distribute the information and the verified Chl-a concentration maps to the general public.
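As an illustration of the simpler band-ratio candidates listed under WP1, the NDCI of Mishra and Mishra (2012) for Sentinel-2 MSI takes only a few lines; the quadratic coefficients shown are the published global ones and would be re-tuned with local in situ data in WP3.

```python
def ndci(rrs_665, rrs_705):
    """Normalized Difference Chlorophyll Index from Sentinel-2 MSI
    bands B4 (665 nm) and B5 (705 nm)."""
    return (rrs_705 - rrs_665) / (rrs_705 + rrs_665)

def chla_from_ndci(rrs_665, rrs_705, a0=14.039, a1=86.115, a2=194.325):
    """Chl-a (mg m-3) from a quadratic NDCI relation; the coefficients are
    the published global values of Mishra and Mishra (2012)."""
    x = ndci(rrs_665, rrs_705)
    return a0 + a1 * x + a2 * x ** 2
```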
The initial results are promising, considering that only WP1 and WP2 have started. First, the very high spatial resolution of Sentinel-2 is well suited to the retrieval of Chl-a concentration in CIW. Second, existing algorithms for atmospheric and sunglint corrections are a suitable option for the establishment of the standard procedure. Third, the comparison of in situ Chl-a data with the outputs of the existing retrieval algorithms is proving very useful for determining the best algorithm and identifying some of the parameters that hinder retrieval.
Once all WPs have been completed, the project will produce: i) the operational generation of publicly available Chl-a concentration maps from satellite data suitable for the Catalan coast; ii) the list of parameters that cause errors in satellite-retrieved Chl-a concentrations; iii) an answer as to whether satellite data can distinguish CIW from the outermost coastal waters. These findings will be used to i) extend the tailored algorithm to provide full coverage of the entire Mediterranean Sea; ii) increase the scientific knowledge of the structure and functioning of the Catalan coast; and iii) improve eutrophication assessment.
The MARS project will provide benefits for all its stakeholders, including scientific groups, governmental administrations and private enterprises related to the maritime sector. For example, its findings will be useful for fulfilling the requirements of the WFD, in relation to the Biological Quality Element Phytoplankton, and of the MSFD, in relation to Descriptor 5 - Eutrophication. These results will therefore be very valuable for the administrations involved in the implementation of these Directives, especially the Catalan Water Agency, the Spanish Ministry for Ecological Transition and the European Community. Moreover, such findings will allow for the formulation of recommendations for the sustainable management of marine waters, in accordance with the Ecosystem Approach, as requested by the same Directives.
The MARS project has been funded by the ‘Severo Ochoa Centre of Excellence’ accreditation (CEX2019-000928-S).
We introduce A4O, a novel method for atmospheric correction (AC) of Sentinel-3 OLCI ocean colour imagery suitable for diverse optical water types (OWTs) representing inland, coastal and ocean waters. The method is an extensive revision of the C2RCC processor (the Case-2 Regional algorithm from the CoastColour project by Doerffer et al.), which is implemented in the Sentinel Application Platform (SNAP) and has been selected by ESA as the atmospheric correction branch for optically complex waters in the Sentinel-3 OLCI ground segment processor. The core of A4O is an ensemble of several neural networks (NNs) that approximate the fully normalized remote-sensing reflectance Rrs (θs = 0°, θv = 0°) at 16 OLCI bands from the top-of-standard-atmosphere reflectance spectrum Rtosa. The fundamental dataset for NN training has been expanded and is now more focused on the optical diversity of natural waters. Special emphasis is placed on high classifiability of Rrs with the OLCI Neural Network Swarm (ONNS) water algorithm, which utilizes an OWT framework for selecting and blending retrievals from diverse NNs, i.e. the spectral shape of Rrs is essential. Further advancements over C2RCC include, for instance, the consideration of wind-dependent whitecaps at the sea surface and the inclusion of climatological data. Previous atmospheric correction methods do not fulfil all requirements for unrestricted usability of ONNS or other OWT-based ocean colour algorithms: on the one hand, not all generic Rrs shapes representative of an OWT are provided by the AC; on the other, the classification can be inconclusive, i.e. the total memberships are too low. In comparison with other ACs – like IPF, C2RCC, POLYMER and ACOLITE – the novel method shows significant advantages:
1) the scope of applicability of A4O includes all OWTs that are considered in ONNS,
2) A4O delivers optically plausible Rrs shapes even in cases with intense algae blooms, extremely scattering (bright) waters, and cases with significant absorption by coloured dissolved organic matter (CDOM) (like in the Baltic Sea or inland waters),
3) in view of ONNS applicability, A4O delivers reflectances that are always well OWT-classifiable,
4) an improved cloud flagging is provided, and
5) spatial noise is reduced (in particular in the region of the South Atlantic Anomaly).
However, known weaknesses (of other AC methods too) persist in the context of intense sun glint and undetected clouds and their shadows, as well as in cases with extremely low marine signal (very high CDOM content) and relatively large optical thickness of the atmosphere. The general performance of A4O is illustrated by means of optically diverse satellite images. Validation is performed by comparison with spectral in-situ data from the AERONET-OC network and other publicly available quality-controlled in-situ reflectance measurements, as well as by contrasting with measurements from our own campaigns and the OC-CCI in-situ database.
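To illustrate the OWT-based blending that A4O reflectances must support, the following is a schematic, hypothetical membership-weighting scheme; the operational ONNS classification and blending differ in detail.

```python
import numpy as np

def blend_owt_retrievals(rrs, owt_centroids, retrievals, kappa=10.0):
    """Schematic fuzzy-OWT blending: memberships are derived from the
    spectral-shape distance to OWT centroid spectra and used to weight
    type-specific retrievals.

    rrs           : observed Rrs spectrum, shape (n_bands,)
    owt_centroids : mean Rrs spectrum per OWT, shape (n_types, n_bands)
    retrievals    : output of each type-specific algorithm, shape (n_types,)
    """
    shape = rrs / np.linalg.norm(rrs)  # compare normalized spectral shapes
    cshape = owt_centroids / np.linalg.norm(owt_centroids, axis=1, keepdims=True)
    dist = np.linalg.norm(cshape - shape, axis=1)
    w = np.exp(-kappa * dist)          # soft, distance-based memberships
    total = w.sum()                    # a low total flags inconclusive classification
    return np.dot(w / total, retrievals), total
```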
Information about benthic communities, alongside bathymetry, is essential since many benthic communities and ecosystems of coastal zones, estuaries and inland water bodies have both commercial and ecological value, which makes these regions valuable in terms of biodiversity and marine resources. It is therefore necessary to carefully plan activities that could affect the state of coastal waters and to continuously monitor their condition. However, as in situ sampling is costly and time-consuming, areal estimates of macroalgal species cover are often based on only a limited number of samples. This low sampling effort likely yields very biased estimates, as macroalgal communities are often characterized by large spatial variability at multiple spatial scales. Remote sensing methods significantly complement contact measurements and give additional information about hard-to-reach areas. Optical satellite data can be an efficient alternative for bathymetric derivation in shallow coastal waters, providing temporal and spatial continuity. Bathymetry and benthic habitat mapping are relatively well advanced in clear ocean environments (e.g. coral reefs) and with high spatial and spectral resolution sensors. However, such high-resolution mapping efforts are infrequent and cover small areas. Moreover, their application in optically complex coastal and inland waters needs further testing in order to understand the limits of remote sensing. The launch of Sentinel-2 opened new possibilities in bathymetry and habitat mapping, as free medium-resolution imagery is now available globally at a few days' interval. Our aim was to test the suitability of Sentinel-2 data for mapping bathymetry and benthic habitats in coastal and inland waters of different optical complexity.
Two study sites were selected to cover a range of optical variability. The first test site was Lake Garda in Italy and the second the Viimsi peninsula on the Estonian side of the Gulf of Finland, the Baltic Sea. Lake Garda is a subalpine lake located in Northern Italy. Its surface area is 368 km2, its volume 49 km3 and its mean depth 133.3 m (max 350 m). Lake Garda is an important resource for recreation and tourism and an essential water supply for drinking, agriculture, industry and fishing in the region. The Estonian test site was in the Gulf of Finland close to Tallinn. The Viimsi peninsula and the surroundings of Aegna Island are under great anthropogenic stress: on one side of the peninsula is the Port of Tallinn and on the other the Port of Muuga. There is also very high-frequency ship traffic in the area, as ferries travelling between Tallinn and Helsinki pass tens of times per day. The peninsula itself is under heavy construction, as it is a fast-growing area for housing and industry.
At the Lake Garda test site, a joint CNR-IREA and EMI field campaign was carried out on June 6-8, 2017, using the facilities of the CNR-IREA Experimental Station “Eugenio Zilioli” in Sirmione. Both water column parameters and benthic habitats were characterized in optically shallow waters, while only water column properties were measured at the deep stations. Reflectance of the water was measured with RAMSES (TriOS) spectrometers, and optical water properties were measured with a WetLabs instrument set consisting of a hyperspectral absorption and attenuation meter (AC-S), a backscattering sensor (ECO-BB3) measuring the backscattering coefficient at three wavelengths, and a volume scattering sensor (ECO-VSF3) measuring scattering at three wavelengths and three angles. The WetLabs instrument package also included a CTD for temperature, salinity and depth measurements. The frame was slowly lowered through the water column while the instruments measured continuously. Water samples were collected from the surface layer (between the surface and 0.5 m depth) for determining concentrations of chlorophyll-a, CDOM and suspended matter (total, SPIM, SPOM). Total and CDOM absorption coefficients were measured in the laboratory using an a-sphere (HobiLabs) integrating cavity absorption meter. The total number of bio-optical sampling stations was 16, while the bottom was mapped with drop video at 22 stations. The videos were later analyzed in the laboratory to estimate species composition and percentage cover. Specimens of macrophytes and macroalgae were taken to the boat, where reflectance spectra were measured with a RAMSES spectrometer and photos were taken to help video interpreters who were not familiar with the Lake Garda flora. Fieldwork was planned during a Sentinel-2A overpass; however, there were thunderstorms in the Lake Garda area during the Sentinel-2 data acquisition, so images from June 26 and July 8, 2017 were used.
Fieldwork in the Viimsi test area was carried out in several stages owing to difficult weather conditions: “deep” water sampling with the WetLabs instrumentation package and water sampling was carried out on September 2, 2017 (four stations), benthic habitat mapping with drop video on September 15 (35 samples), and very shallow water depth and benthic habitat registration on September 13 by walking in the water (26 sampling points). The best Sentinel-2 images closest to the in situ sampling dates were Sentinel-2A images from June 4 and July 7, 2017. There was also a plan to carry out an airborne campaign in which a hyperspectral image of the study area would have been collected with the EMI hyperspectral airborne imaging spectrometer HySpex. Unfortunately, there was no suitable flight weather in August-September, so the airborne campaign had to be cancelled.
The Image Data Analysis (IDA, by Numerical Optics, https://www.numopt.com/) software package was used for image processing and visualization. IDA performs several image pre-processing steps (atmospheric correction, glint removal) and retrieves water depth, benthic habitats and optical water properties using the adaptive lookup table (ALUT) approach.
Altogether, 53 in situ depth points in the depth range 0 to 7 m were collected from the two study areas – 12 from Lake Garda and 41 from the Viimsi study site. The chosen methodology enabled us to produce reliable bathymetry maps with IDA in the optically complex Baltic Sea and in a subalpine lake. In both test sites the coefficient of determination (R2) is over 0.95, showing excellent agreement between image-derived and measured water depth.
Altogether, 42 in situ points with bottom habitat data were collected from the two study areas – 10 from Lake Garda and 32 from the Viimsi study site. The in situ data were classified into 3 main classes – sand, areas dominated by green algae, and areas covered by brown algae. It has to be kept in mind that the “bare” substrate, in this case sand, is not actually clean substrate. First of all, every object in seawater is always overgrown with some marine organisms; even a single sand particle is covered with microscopic algae. Moreover, the in situ data were classified as sand if vegetation cover was less than 40%, while on the other hand 30% vegetation cover on a substrate may spectrally look like vegetation rather than bare substrate. Overall accuracy in the Viimsi test site was 90% for the 23.06.2017 image and 80% for the 08.07.2017 image. In Lake Garda, overall accuracy was 76.92% for the 04.06.2017 image and 73.08% for the 07.07.2017 image.
Our previous studies show that it is very difficult or nearly impossible to separate green macroalgae, seagrasses and other higher-order plants from each other based on their optical signatures, especially when multispectral sensors are used. In the Garda test site there were 2 points and in the Viimsi test site 3 points where higher-order vegetation (HOV) was found, but user accuracy was 0% for both Lake Garda images and for the 04.06.2017 Sentinel-2 image, and 25% for the 07.07.2017 Viimsi image. In every case the missed HOV was classified as green algae, so it was decided to combine the HOV and green algae classes. In Lake Garda, no brown algae were detected during the in situ campaign, nor were they present in the classification result, so this class was removed from the classification accuracy assessment.
Sentinel-2 data quality and availability have increased the opportunities to monitor hard-to-reach coastal areas that have both ecological and commercial value. Bathymetry mapping in waters shallower than 4 m in the Baltic Sea and 3 m in Lake Garda gave accurate results, with R2 above 0.95 in all four Sentinel-2 images from which water depth was estimated. Bottom-type mapping accuracies were in all cases over 73%, which is considered good, but both test sites warrant further study given the limited number of sampling points.
Satellite-Derived Bathymetry (SDB) has significant potential to enhance our knowledge of Earth's coastal regions. However, SDB still has limitations when applied to the turbid, but optically shallow, nearshore regions that encompass large areas of the world's coastal zone. Environments with transient turbidity are more problematic because turbidity severely biases depths, when it does not completely obscure the bottom, thus constraining SDB for routine application. A multi-temporal method using the Sentinel-2 satellites has been developed that eliminates manual screening and reduces turbidity and noise effects (whitecaps, ships, cloud shadows, etc.), while applying a commonly used algorithm. The methodology incorporates a robust atmospheric correction, a multi-scene compositing method, a switching model to improve mapping in shallow water, and masking of optically deep waters. No manual intervention or scene selection (beyond defining a time range for imagery) is required to go from the individual input scenes to the composited scene. Currently, calibration requires selection of only 10-15 reference points obtained from available nautical charts. Several study sites along the coasts of the USA are explored, covering varying water transparency conditions. The approach allows for the semi-automated creation of bathymetric maps at 10 m spatial resolution, yielding accurate and consistent SDB with median errors < 1 m for depths of 0-30 m when validated against lidar surveys; these errors compare favorably with uses of SDB in clear water. This framework substantially decreases time, labor, and the influence of transient turbidity, particularly when using data from a platform like the Sentinel-2 twin mission, which provides routine and repetitive image acquisition. In addition, we use SDB, in comparison with high-resolution airborne lidar bathymetry (ALB), to quantify bathymetric changes at two inlets in North Carolina following the impacts of the devastating Hurricane Florence in September 2018. The multi-temporal SDB products and ALB show similar erosion/accretion patterns, with a median absolute error of ~0.5 m and a bias of ±0.2 m, errors that are equivalent to those associated with the SDB estimated absolute depths. The Sentinel-2 constellation provides a five-day revisit at the equator (more frequent at higher latitudes), allowing rapid construction of a bathymetric map as well as the development of an archive for retrospective analysis. By implementing this computationally efficient technique, SDB based on Sentinel-2 may substantially expand and enhance conventional survey methods for change detection and support operational and recursive coastal monitoring at local to regional scales.
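A sketch of the compositing-plus-ratio workflow, assuming the widely used band log-ratio algorithm of Stumpf et al. (2003) as the "commonly used algorithm"; the switching threshold and band pairing are illustrative, not the exact published configuration.

```python
import numpy as np

def median_composite(scenes):
    """Per-pixel median over a stack of scenes (time, y, x); NaNs mark
    clouds, ships, whitecaps, etc., so transient turbidity is suppressed."""
    return np.nanmedian(scenes, axis=0)

def depth_log_ratio(rrs_num, rrs_den, m1, m0, n=1000.0):
    """Band log-ratio depth (after Stumpf et al., 2003); m1 and m0 come
    from the 10-15 chart-derived calibration points."""
    return m1 * np.log(n * rrs_num) / np.log(n * rrs_den) - m0

def depth_switching(rrs_blue, rrs_green, rrs_red, cal_bg, cal_rg, z_switch=2.0):
    """Switching model: a red/green estimate (better conditioned in very
    shallow water) replaces the blue/green estimate below z_switch (m)."""
    z_bg = depth_log_ratio(rrs_blue, rrs_green, *cal_bg)
    z_rg = depth_log_ratio(rrs_red, rrs_green, *cal_rg)
    return np.where(z_rg < z_switch, z_rg, z_bg)
```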
True-color images obtained from optical remote sensing have missing data because of weather conditions, sun glint, and so on. Missing data make it difficult to recognize various phenomena in the ocean. Therefore, a composite image created by combining satellite images is used: a mean image over several days, weeks, months, or years. The mean image may not show temporal variations, so interpolation is used to fill the missing data that cannot be filled in the composite image. There are several interpolation methods, based on time series data, spatial data, or spatiotemporal data. Reconstruction based on time series data uses the values observed at the same point at different times, for example nearest-neighbor, linear, and cubic-spline interpolation. Reconstruction based on spatial data interpolates the observed data around the missing data, e.g. IDW (inverse distance weighting), spline, and kriging. Reconstruction based on spatiotemporal data includes DINEOF (Data Interpolating Empirical Orthogonal Functions), which is based on EOF (Empirical Orthogonal Function) analysis.
Phytoplankton are the basis of the food chain in the ocean and an important factor in the growth of zooplankton and fish. Ocean color is used to observe phytoplankton through remote sensing, since chlorophyll-a is the main pigment contributing to photosynthesis in phytoplankton. Estimating the patterns of chlorophyll-a makes it possible to identify the distribution of phytoplankton and ocean primary productivity. GOCI (Geostationary Ocean Color Imager) images can be obtained 8 times a day because the sensor is carried on a geostationary satellite, which makes GOCI images well suited for observing the temporal variation of chlorophyll-a. In this study, the missing chlorophyll-a data in GOCI images were reconstructed using DINEOF.
Chlorophyll-a data processed from GOCI images by KIOST (Korea Institute of Ocean Science and Technology) were used. The study period is from August 1 to August 31, 2013. In-situ data from August 6 to 13 were also used for comparison with the reconstructed data obtained by DINEOF; they were collected in a high-chlorophyll-a area, a mid-concentration area and a clear-water area. The study area is the East Sea of Korea, a deep, semi-enclosed sea. It is a nutrient-rich area where the Taiwan Warm Current and the North Korea Cold Current meet.
DINEOF fills the missing data by iterating an EOF decomposition. To decide the optimal number of EOF modes, cross-validation quantifies the difference between the original chlorophyll-a data and the reconstructed data: 5 percent of the valid chlorophyll-a data are randomly extracted and withheld, and MAE (mean absolute error), RMSE (root mean square error) and CC (correlation coefficient) are calculated for the cross-validation.
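A minimal numpy sketch of the iterative EOF reconstruction at the core of DINEOF; the operational implementation additionally selects the number of modes with the 5% cross-validation described above, which this sketch omits.

```python
import numpy as np

def dineof_fill(data, n_modes, n_iter=100, tol=1e-5):
    """Fill NaN gaps in a (time x space) matrix by iterating a truncated
    SVD (EOF) reconstruction until the filled values converge."""
    mask = np.isnan(data)
    mean = np.nanmean(data)
    filled = np.where(mask, 0.0, data - mean)  # remove mean, zero-fill gaps
    prev = filled.copy()
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
        filled[mask] = recon[mask]  # update only the missing entries
        if np.max(np.abs(filled - prev)) < tol:
            break
        prev = filled.copy()
    return filled + mean
```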
The in-situ data from 6 to 13 August were compared with the original chlorophyll-a data and the reconstructed data. The temporally and spatially reconstructed data showed the vertical distribution of chlorophyll-a, and the reconstruction smoothly filtered the noise of outliers.
In this study, missing data in GOCI images were filled using DINEOF. The accuracy of the algorithm was assessed by cross-validation and by comparison with the in-situ data and the chlorophyll-a images. The spatiotemporally reconstructed data captured variations in chlorophyll-a concentration. The method makes optical satellite images more useful and is expected to provide baseline data for monitoring the ocean environment.
Geostationary Ocean Color Imager (GOCI), in service from April 2011 to March 2021, and GOCI-II, in service since April 2021, have produced major outcomes in Korea in terms of data support for practical applications and scientific research. We are developing eight application modules for practical use on maritime issues using various ocean monitoring satellite data, including GOCI, to provide products applicable in maritime information service organizations. The practical techniques we have chosen are the detection of floating algae, marine fog, harmful algal blooms, fine aerosol particles and low-salinity surface water; the detection and forecasting of abnormal sea surface temperature; and the derivation of ocean water quality parameters and primary productivity. Several further candidate techniques are also included. The development process for each application includes three phases: performance targeting, development and verification. Target performance for each technique is decided based on a user requirement survey and expert advice. We consider accuracy, target region and temporal-spatial resolution to cover specific issues from the user's point of view, and aim to improve the detection (prediction) accuracy over the performance of the original GOCI output. Each practical application technique is established as an independent processing module. We prepare an algorithm verification procedure document to test each processing module and verify the result against verification data to meet the performance goal; a separate verification is performed to increase the reliability of the module. We develop a prototype module and compare the results with high-resolution satellite data or dedicated in-situ data. In the case of floating algae, GOCI-based results and OLI results are compared: while the targets for the hit rate and the false alarm rate are 80% and 20%, verification shows a hit rate of 84.82% and a false alarm rate of 0.1% (see the sketch below for how such scores are computed). Comparisons will also be made for the more advanced techniques such as marine fog and fine aerosol particle detection. We are also developing an integrated satellite information service system that combines the eight processing modules and includes functions to display and analyze the satellite-derived products. We expect technology transfer to users for reliable and mature applications. These will contribute to societal benefits by adding multi-satellite-based information to the decision-making process.
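For reference, the hit rate and false alarm rate used in such verification can be computed from binary detection masks as below; the false-alarm definition here (false detections over all detections) is an assumption, since conventions vary.

```python
import numpy as np

def detection_scores(detected, reference):
    """Hit rate and false alarm rate (%) for a binary detection product
    (e.g. satellite floating-algae flags) against a reference mask."""
    detected = np.asarray(detected, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    tp = np.sum(detected & reference)   # correctly detected events
    fp = np.sum(detected & ~reference)  # false detections
    fn = np.sum(~detected & reference)  # missed events
    hit_rate = 100.0 * tp / (tp + fn) if (tp + fn) else np.nan
    false_alarm = 100.0 * fp / (tp + fp) if (tp + fp) else np.nan
    return hit_rate, false_alarm
```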
In general, the aim is to keep the uncertainties of Chl-a retrieval from satellite within 10-70%, depending on water type. For routine in situ monitoring one generally aims at 20% error, while for state-of-the-art laboratories using High Performance Liquid Chromatography (HPLC) the errors are expected to be much lower (about 2%).
Due to high CDOM absorption and relatively low particle scatter, remote sensing of the Baltic Sea is considered rather challenging. The accuracy of chlorophyll-a (Chl-a) retrievals from satellite data over the optically complex waters of the coastal zone was substantially improved by the ESA MERIS Envisat mission (2002-2012) as well as the ESA OLCI Sentinel-3 mission (since 2016). Our research demonstrated that Chl-a products derived from MERIS agree with in situ data within +/-30% root mean square difference, depending on the processor used (Kratzer and Vinterhav, 2010; Beltrán-Abaunza et al., 2014; Kyryliuk and Kratzer, 2019; Kratzer and Plowey, 2021).
In Sweden, monitoring data are increasingly used for the validation of Sentinel-3 data (e.g. SMHI's demonstration project, the Swedish Space Data Lab). With the satellite data becoming increasingly accurate, it is also important to evaluate the uncertainties in the in situ validation data.
Historical data from several Chl-a intercomparisons showed that, overall, there can be substantial differences in the Chl-a measurements performed by the different Swedish monitoring groups (about 25-60%). There are several possible causes for these differences. For example, the laboratories use various analytical methods (i.e. spectrophotometric vs. fluorometric) as well as extraction and storage procedures. The latter have a key impact on the measurement uncertainties (Sørensen et al., 2007). Furthermore, the results also depend on the range of chlorophyll concentrations.
We arranged a dedicated Chl-a intercomparison during 1-2 July 2021 within the Swedish monitoring program, including the Marine Remote Sensing Group (MRSG) from Stockholm University, the monitoring groups at Stockholm, Gothenburg and Umeå Universities, SMHI, as well as NIVA (Norway), IOW (Germany) and JRC (Italy). We performed a dedicated transect through Bråviken bay and sampled 8 stations along a gradient from the outer to the inner bay (station GB16 was sampled twice). At each station we sampled and filtered 3 surface samples (0.5 m depth) per group, using Niskin bottles placed on a rosette sampler. The filters were flash frozen in liquid nitrogen directly after filtration and distributed to all groups in dry ice at the beginning of September 2021. At the same time we prepared and distributed chlorophyll-a standards (Sigma Aldrich) in dry ice to be measured by all groups.
In my presentation I will show results from these intercomparisons and discuss the main factors that may lead to differences between groups, and how we can correct for them so that both current and historical monitoring data are consistent and help us to understand real environmental trends in the Baltic Sea, e.g. as a response to climate change. I will also compare the uncertainties within the in situ measurements with the uncertainties of Chl-a derived from both MERIS and OLCI data (Beltrán-Abaunza et al., 2014; Kyryliuk and Kratzer, 2019).
References
Beltrán-Abaunza, J.M., Kratzer, S. and Brockmann, C., 2014. Evaluation of MERIS products from Baltic Sea coastal waters rich in CDOM. Ocean Science, 10, 377-396.
Kratzer, S. and Vinterhav, C., 2010. Improvement of MERIS data in Baltic Sea coastal areas by applying the Improved Contrast between Ocean and Land processor (ICOL). Oceanologia, 52(2), 211-236.
Kratzer, S. and Plowey, M., 2021. Integrating mooring and ship-based data for improved validation of OLCI chlorophyll-a products in the Baltic Sea. International Journal of Applied Earth Observation and Geoinformation, 94, 102212.
Kyryliuk, D. and Kratzer, S., 2019. Evaluation of Sentinel-3A OLCI Products Derived Using the Case-2 Regional CoastColour Processor over the Baltic Sea. Sensors, 19(16), 3609.
Sørensen, K., Grung, M. and Röttgers, R., 2007. An intercomparison of in vitro chlorophyll a determinations for MERIS level 2 data validation. International Journal of Remote Sensing, 28(3-4), 537-554.
Jeffrey, S.W., Mantoura, R.F.C. and Wright, S.W. (Eds), 2005. Phytoplankton Pigments in Oceanography (first published 1997). Monographs on Oceanographic Methodology, UNESCO Publishing, 667p. ISBN 92-3-103275-5.
Phytoplankton play an important role in aquatic biogeochemical cycling, for example through the formation of organic matter by photosynthetic processes via the fixation of carbon dioxide, and through the assimilation of macro- and micronutrients depending on their metabolic needs. These processes are common to all phytoplankton; however, some phytoplankton groups have specific needs and thus play different functional roles in the biogeochemical cycle. Information on phytoplankton functional types (PFTs) can be obtained from satellite observations such as the Ocean and Land Colour Instrument (OLCI) on board Sentinel-3 as well as the TROPOspheric Monitoring Instrument (TROPOMI) on board the Copernicus Sentinel-5 Precursor satellite. Global ocean PFT abundance from multispectral satellites can be estimated with the OC-PFT algorithm, which is based on the assumption that the marker pigment for a specific PFT varies in dependence on the chlorophyll-a concentration. PFTs from hyperspectral satellite measurements, such as those of TROPOMI, can instead be estimated with the Differential Optical Absorption Spectroscopy (DOAS) method. In this study, chlorophyll-a concentrations for three main phytoplankton functional types (diatoms, coccolithophores and cyanobacteria) are derived by combining retrievals from space-borne measurements at high spatial resolution, via the empirical OC-PFT algorithm applied to OLCI data, with data retrieved from TROPOMI measurements at high spectral resolution via an analytical method (Phyto-DOAS). A previous algorithm and data set, based on OC-PFT retrievals applied to the OC-CCI chlorophyll-a product and Phyto-DOAS retrievals from SCIAMACHY data, have shown the validity and high quality of the synergistic PFT product (Losa et al. 2017). Here, a first evaluation of the synergy of Sentinel-3 and Sentinel-5P PFT retrievals, combining OC-PFT and Phyto-DOAS, is presented and compared to field measurements of PFTs sampled during the RV Polarstern expedition PS113 in the Atlantic Ocean from May to June 2018. In addition, we discuss the adaptation of the method to enlarge the capabilities of PFT retrieval in inland and coastal waters from data with both high spectral and high spatial resolution, such as DESIS, EnMAP or PRISMA, through synergistic use with OLCI OC-PFT data sets.
The quality of remote sensing products for inland and sea waters directly depends on the accuracy of the water-leaving radiance retrieved from satellite measurements acquired at the top of the atmosphere. This necessitates atmospheric correction algorithms that correct for the intrinsic atmospheric radiance and for the sun and sky light reflected by the wind-ruffled water surface. Recently, an intercomparison exercise (ACIX-II) was organized by ESA and NASA to analyse the mutual performances of state-of-the-art atmospheric correction algorithms and delineate potential ways of improvement. The recommendations of this study were (i) to better represent aerosols, especially continental and absorbing ones, (ii) to correct for sky and sun reflection for high-resolution satellite pixels, and (iii) to efficiently correct adjacency effects. Here, we present recent developments of the Glint Removal for Sentinel-2 (GRS) algorithm to handle continental and absorbing aerosols in its atmospheric correction framework.
The sunglint signal is defined as the reflection of direct sunlight at the water surface that eventually reaches the sensor. For decametric-resolution acquisitions, the stochasticity of the sunglint phenomenon precludes any parametrization based on ancillary data such as wind speed. The GRS algorithm was therefore developed to estimate the sunglint signal directly from Sentinel-2-like spectral information and the viewing geometry. In addition, the algorithm performs an effective atmospheric correction based on aerosol parameters provided by the Copernicus Atmosphere Monitoring Service (CAMS). However, only oceanic and weakly absorbing aerosols were accounted for in the previous version of GRS. In this study, we describe a new version of GRS which incorporates new aerosol models and a specific correction for absorbing aerosols. This correction separates the scattering and absorbing properties of aerosols in the atmospheric correction and uses as a first guess the new aerosol products of CAMS (e.g., spectral aerosol optical thickness and single scattering albedo). Note that the refined treatment of aerosols directly improves the sunglint correction and could be used in the future to better handle adjacency effects. This new version of GRS was tested in atmospherically polluted zones, including matchup validation in several places such as the Hong Kong area under high loads of absorbing aerosols.
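The glint-estimation principle can be sketched as follows, under two simplifying assumptions: the water-leaving signal is negligible in the SWIR, and the glint varies spectrally only through the Fresnel reflection coefficient. This is a schematic of the idea behind GRS, not its implementation.

```python
import numpy as np

def glint_corrected_reflectance(rho_vis, rho_swir, fresnel_vis, fresnel_swir):
    """Schematic SWIR-based sunglint removal: over water the residual SWIR
    reflectance (after gas/aerosol correction) is attributed to glint,
    then carried to a visible band via the ratio of Fresnel coefficients
    computed for the same viewing geometry. All inputs are assumed to be
    already atmospherically corrected reflectances."""
    glint_swir = np.maximum(rho_swir, 0.0)           # glint estimate in SWIR
    glint_vis = glint_swir * (fresnel_vis / fresnel_swir)
    return rho_vis - glint_vis                       # glint-free water signal
```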
Ocean-colour and multi-spectral remote sensing observations form an important tool for observing coastal, estuarine and inland waters. These aquatic systems are major carbon reservoirs; support diverse species in multiple habitats; provide a wide range of ecosystem services; and function as natural barriers protecting coastal areas from extreme climate events, including floods, cyclones, and tsunamis. However, anthropogenic activities, such as the overexploitation of resources, have adversely affected many coastal aquatic ecosystems and the livelihoods of people living in the surrounding areas. The United Nations (UN), through its Sustainable Development Goals (SDGs) 6 and 14, emphasises the importance of clean water and sanitation, and the sustainable use of aquatic resources. In this context, monitoring water quality becomes an important step in our efforts to understand the stresses on coastal and estuarine ecosystems associated with anthropogenic activities, and to move towards sustainable management of their resources. Here, we study the water quality of a tropical coastal, estuarine and lake system, the Vembanad-Kol-Wetland (VKW) system, situated on the southwest coast of India, using multi-spectral remote sensing observations. We present regionally tuned algorithms, based on a forward modelling technique, for the satellite retrieval of two important water quality indicators: chlorophyll-a (Chl-a) and Total Suspended Matter (TSM). The forward model parameters used to simulate remote-sensing reflectances were tuned using samples (N=839) from the NOMAD dataset and in situ observations (N=228) from 15 field campaigns in the VKW system between March 2018 and May 2019. Results showed that the newly developed models for the VKW system performed better than existing Chl-a and TSM algorithms. The forward model parameters were applied to Sentinel-2 MultiSpectral Imager (MSI) data using two atmospheric correction techniques (Acolite and Polymer), and the satellite-derived reflectances were fine-tuned using in situ observations. The satellite validation showed that the model results are within the error limits of the standard validation protocol for satellite-derived products. We illustrate that the developed models can be used for routine monitoring of water quality as well as of algal blooms and sediment dynamics in the Vembanad-Kol-Wetland system.
Coastal aquatic remote sensing (RS) can help monitor the immensely valuable ecosystems of the global seascape, such as seagrasses and corals, by providing information on their extent, condition (e.g., water quality, bathymetry), ecosystem services (e.g., carbon sequestration, biodiversity maintenance), and trajectories. Unlike terrestrial RS, coastal aquatic RS applications require additional consideration of the water column and its interactions with the light signal. This introduces new challenges, as the water column attenuates light differently across wavelengths, with implications for signals from the seabed where these subtidal ecosystems thrive. When the objects of interest are located on the benthic floor rather than floating near the water surface, the additional depth increases the influence of the water column on light and affects the signals sensed by satellites at the top of the atmosphere. In addition, effects such as turbidity, waves, and sunglint introduce wide-ranging reflectance values.
While these challenges have traditionally been handled through often complex methods in local computing environments, contemporary advances in cloud computing and big satellite data analytics offer highly scalable and effective solutions to the same problem. The parallel processing of cloud platforms like Google Earth Engine allows multitemporal composition of thousands of satellite images over a defined area and time range through highly efficient statistical aggregations. As such, this approach yields Analysis Ready Data that are less redundant and more time-efficient than the conventional, laborious manual search for suitable single satellite images, which can amount to a year-long assessment over cloud-dense coastal regions like the tropics. Regardless of the method, the pre-processing of the image and/or image composite remains a critical component of a successful coastal ecosystem assessment using RS.
Light attenuation changes the returning spectral signal, resulting in different signal profiles for the same seabed cover at different depths. In particular, at greater depths, darker covers such as vegetated coastal beds (e.g., dense seagrass, microalgal mats) and optically deep water pixels are more likely to be confused and misclassified. A possible solution is to identify and remove these optically deep water pixels, where the water is too deep for any bottom signal to return to the sensor. Using the hue and saturation bands of an HSV-transformed B1-B2-B3 false-colour composite of the Sentinel-2 image archive within the Google Earth Engine cloud computing platform, we are able to disentangle optically deep from optically shallow waters across four sites (Tanzania, the Bahamas, the Caspian Sea (Kazakhstan) and the Wadden Sea (Denmark and Germany)) with wide-ranging water qualities, improving the classification of optically shallow benthic habitats. Furthermore, we compare our method with the three band ratios formed from combinations of the same three bands. While a band ratio may perform better at some sites, the best band combination is site-specific and may thus perform worse at others. In comparison, the hue and saturation bands show more consistent performance across all four sites.
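The following minimal Google Earth Engine sketch illustrates the deep-water masking idea; the collection ID is the public Sentinel-2 surface reflectance archive, while the site coordinates and the hue/saturation cut-offs are placeholders to be tuned per site:

```python
import ee

ee.Initialize()

aoi = ee.Geometry.Point(39.2, -6.1)  # hypothetical site (Tanzanian coast)

# Median composite of the B1-B2-B3 false-colour bands, scaled to [0, 1]
# as required by rgbToHsv().
rgb = (ee.ImageCollection('COPERNICUS/S2_SR')
       .filterBounds(aoi)
       .filterDate('2020-01-01', '2021-01-01')
       .median()
       .select(['B1', 'B2', 'B3'])
       .divide(10000))

hsv = rgb.rgbToHsv()  # produces 'hue', 'saturation' and 'value' bands

# Flag optically deep water with placeholder hue/saturation cut-offs,
# then keep only the optically shallow pixels for habitat classification.
deep = hsv.select('hue').gt(0.55).And(hsv.select('saturation').gt(0.5))
shallow = rgb.updateMask(deep.Not())
```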
Through simple statistical reduction, the multitemporal composite automatically mitigates common coastal aquatic RS showstoppers like clouds, cloud shadows and other transient phenomena. However, images containing no useful information at all also need to be removed, so that they do not affect the statistical approach. The use of metadata properties in the image archive is therefore additionally needed to filter out such "bad" images, reducing the unnecessary computational cost of processing them. For instance, filtering for low cloud cover prior to multitemporal composition is a recommended procedure in Google Earth Engine. We extend this approach further by integrating the various solar and viewing angles to estimate the presence of sunglint, on the basis that the specular reflection geometry of the scene is a major factor in sunglint presence in satellite images. Finally, we draw comparisons with less pre-processed composites, showcasing the methodological benefits for national coastal ecosystem assessments in the Bahamas, Seychelles, and East Africa.
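A sketch of this metadata-based filtering, assuming the standard Sentinel-2 scene properties and a placeholder glint threshold; the specular-geometry score is an illustrative proxy, not the exact criterion used in this work:

```python
import math
import ee

ee.Initialize()
aoi = ee.Geometry.Point(-77.35, 25.05)  # hypothetical site (the Bahamas)

def glint_geometry(img):
    """Attach a specular-geometry score; values near 1 mean the sun is mirrored into the sensor."""
    d2r = math.pi / 180.0
    sz = ee.Number(img.get('MEAN_SOLAR_ZENITH_ANGLE')).multiply(d2r)
    vz = ee.Number(img.get('MEAN_INCIDENCE_ZENITH_ANGLE_B8A')).multiply(d2r)
    dphi = (ee.Number(img.get('MEAN_INCIDENCE_AZIMUTH_ANGLE_B8A'))
            .subtract(ee.Number(img.get('MEAN_SOLAR_AZIMUTH_ANGLE'))).multiply(d2r))
    score = (sz.cos().multiply(vz.cos())
             .subtract(sz.sin().multiply(vz.sin()).multiply(dphi.cos())))
    return img.set('GLINT_SCORE', score)

coll = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(aoi)
        .filterDate('2019-01-01', '2021-01-01')
        .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))  # drop "bad" scenes by metadata
        .map(glint_geometry)
        .filter(ee.Filter.lt('GLINT_SCORE', 0.95)))           # placeholder glint threshold

composite = coll.median()  # statistical reduction into an Analysis Ready composite
```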
Climate change has an impact on all global systems, and precipitation patterns are among the most severely affected components, in both time and space. Recent studies have shown that the temporal distribution of rain and snow is becoming more and more skewed, with half of the world's annual measured precipitation currently falling within an interval of just 12 days. Thus, while only small variations in annual precipitation totals are foreseen, these are accompanied by an increase in the number of extreme weather events. Rain concentrated in both space and time generally leads to more intense land surface runoff. These perturbations have significant impacts on river flow and subsequently on solid discharge into coastal areas and associated wetlands. Such extraordinary events have been well captured by the Ocean and Land Colour Instrument (OLCI) onboard Sentinel-3 in the Danube Delta coastal area within the last few years. Images acquired during situations of very high Suspended Particulate Matter (SPM) concentration help us quantify the effects of changing precipitation patterns on the solid discharge characteristics of rivers and their impact on the coastal area. One of the first challenges consisted in constructing a well-adapted regional algorithm to estimate SPM from remote sensing reflectance values. This was a necessary step since global algorithms did not perform satisfactorily in the western Black Sea basin. Based on in-situ measurements and OLCI data, a robust algorithm was developed and subsequently applied to image time series. The analysis performed on this image series showed that these high-SPM events have a relatively short duration of only a couple of days. During these periods, the SPM concentration can reach extremely high values, tens of times higher than mean values. A tendency was also observed for the river plume to have a reduced spatial extent compared to other situations. These patterns have significant effects on regional sediment dynamics in the coastal area, such as potentially increased sedimentation rates closer to the river mouths or interference with normal long-shore sediment drift mechanisms.
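As an illustration of such a regional calibration (the reflectance band, coefficients and sample values below are hypothetical, not the algorithm developed in this work), a power law SPM = a * Rrs^b can be fitted in log-log space to in-situ SPM and OLCI reflectance match-ups:

```python
import numpy as np

# Illustrative in-situ match-up data: Rrs at a red band (sr-1) vs. SPM (g m-3).
rrs_665 = np.array([0.004, 0.009, 0.015, 0.025, 0.040])
spm     = np.array([4.0, 12.0, 25.0, 55.0, 110.0])

# Fit SPM = a * Rrs^b in log-log space.
b, log_a = np.polyfit(np.log10(rrs_665), np.log10(spm), 1)
a = 10.0 ** log_a

def spm_from_rrs(rrs):
    # Apply the fitted regional power law to OLCI reflectance values.
    return a * np.power(rrs, b)

print(spm_from_rrs(0.030))  # SPM estimate for a high-turbidity pixel
```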
Keywords: Black Sea, chlorophyll, phytoplankton, algal bloom, upwelling.
Earth Observation through satellite data has been an important tool in the study and monitoring of the biosphere and its components (land, ocean and atmosphere). The ocean covers 72% of the Earth's surface and supplies half of its oxygen, making it our planet's life support system. Continuous monitoring of ocean color is necessary to understand water quality, water composition and ecological status. Remote sensing is ideal for ocean color monitoring because it provides information over large areas at high temporal resolution.
The Black Sea is a semi-enclosed continental sea connected to the global ocean through the Mediterranean Sea (via the Bosphorus Strait). It is a unique marine ecosystem, with about 87% of the Black Sea water mass being anoxic and containing high levels of hydrogen sulphide. The Romanian coastal shelf zone has shallower waters than the rest of the basin and receives a very large input of fresh water from the Danube River. The fresh waters discharged by the Danube create an overabundance of nutrients (phosphorus and nitrogen), which leads to increases in phytoplankton biomass and to algal blooms. Nutrients in the ocean are cycled by a process known as biological pumping, whereby plankton extract the nutrients from the surface water and incorporate them into their organic matter. When the plants die, sink and decay, the nutrients are returned to their dissolved state at deeper levels of the ocean. To monitor phytoplankton dynamics in water bodies using remote sensing techniques, the Chl a (chlorophyll) proxy is used. Chl a is detected using spectral ranges between 430 and 630 nm, and close to 700 nm.
In addition to the nutrient input brought by the fresh waters of the Danube, water temperature plays a significant role in determining Chl a dynamics. Water temperature does not directly influence algal blooms, but it is an indicator of upwelling processes. During an upwelling event, deep, cold and nutrient-rich waters rise toward the surface and replace the warmer, nutrient-depleted waters. Upwelling thus creates a favorable environment for phytoplankton growth and algal blooms.
The purpose of this study is to monitor phytoplankton dynamics in the north-western basin of the Black Sea and to identify whether there is a significant link between algal blooms and temperature changes. The products used for this study are provided by the Copernicus Marine Service. Statistical data were extracted from these products and analyzed to determine the relation between Chl a and temperature during upwelling processes. The study presents preliminary results based mainly on these data; a more in-depth analysis will follow, also using other types of data.
Humankind has had, and will continue to have, an inherent reliance on Earth’s coastal waters, be it for food, transportation, or human health. Throughout history, almost all human civilisations have settled by the coast, and as a result the majority of modern cities now follow the same geographical pattern: a cityscape radiating from an arterial river flowing to a coastal bay. Coastal waters are subject to multiple pressures caused by anthropogenic disturbances, but also by natural processes that have influence on both local and global scales. These perturbations are leading to deterioration in the quality of water resources and loss of biodiversity, and are threatening many coastal ecosystems. In the case of Ireland, the quality of coastal waters suffers several pressures related to human activities, mainly agriculture and urban wastewater discharges. The Environmental Protection Agency’s latest Water Quality Report (2020) reported that total nitrogen and phosphorus from river runoff have increased by 26% and 35% respectively since 2012-2014 (Trodd et al., 2021). For these reasons, understanding the underlying processes in these heavily pressured environments is of paramount importance for adequate management and sustainable use.
This work is carried out as part of the Science Foundation Ireland funded ‘Prediction of Irish Coastal Transformations’ (PREDICT) Project. The project involves researchers from several Irish universities and state bodies. It aims to take a holistic, multidisciplinary approach to understanding key marine and terrestrial environmental processes occurring in Dublin Bay, located on Ireland’s East Coast. The aquatic environment of Dublin Bay is classified as coastal transitional waters, influenced by two main rivers, the River Liffey and the River Tolka. Given the relatively large population of Dublin City, with 1.43 million persons in April 2021 (28.5% of Ireland’s total population) (CSO, 2021), there are a number of significant natural and anthropogenic influences on the water quality conditions of Dublin Bay. These include terrestrial runoff, pollution from industrial sources, and the influence of a wastewater treatment plant. Port activities from Ireland’s busiest port, Dublin Port, such as dredging events and heavy maritime vessel traffic, also influence the water quality in the bay. The river discharges into the bay increase nutrient levels which, together with other inputs such as raw sewage and industrial pollution, cause phytoplankton blooms in the region, as noted by O'Higgins and Wilson (2005). Despite these numerous pressures, Dublin Bay is of significant ecological importance, being classified as a UNESCO Biosphere Reserve covering approximately 30,000 ha, encompassing the bay and the protected Bull Island nature reserve.
Due to the economic, ecological, and social importance of Dublin Bay, assessing and monitoring its water quality is of vital importance. However, in-situ water quality data for Dublin Bay are remarkably scarce. Only recently have a few fixed buoys, located in the outer part of the bay, been set up for water quality monitoring. Satellite data offer great potential for water quality monitoring in Dublin Bay, providing temporal and spatial continuity for optically active constituents. However, satellite data still need to be exhaustively tested in Irish waters.
The aim of this work is the validation of several water quality indicators, namely chlorophyll-a and turbidity, derived from remote sensing data over Dublin Bay. Both chlorophyll-a and turbidity are important water quality parameters which can be monitored by remote sensors due to their absorption and scattering properties. Both parameters are also important for monitoring and assessing Environmental Status (ES), and their effective monitoring is required by European Union directives such as the Marine Strategy Framework Directive (MSFD). This work will mainly examine the effectiveness of products derived from the Sentinel-2 and Sentinel-3 missions. Sentinel-2 MSI is primarily a multispectral land imaging sensor hosting 13 spectral bands ranging from 492 nm to 1377 nm, with spatial resolutions of 10, 20, or 60 m depending on the band. Despite being mainly designed for land imaging, this sensor has also proven effective for studying inland and coastal waters, given its desirable medium spatial resolution, high radiometric resolution, and adequate multispectral band designations for aquatic applications. Sentinel-3 OLCI is a dedicated ocean remote sensing sensor with 21 spectral bands ranging from 400 nm to 1020 nm and a full spatial resolution of 300 m. Both missions provide high temporal resolution: Sentinel-2A together with Sentinel-2B offers revisit times of 5 days at the equator and 2-3 days at mid-latitudes like Ireland, while for Sentinel-3A and Sentinel-3B the revisit time is around 2-3 days at the equator and under 1.4 days at latitudes greater than 30 degrees.
The examined products include those provided by the neural network-based Case 2 Regional CoastColour (C2RCC) processor and the recent Copernicus Marine Environment Monitoring Service (CMEMS) 100 m and 300 m products derived from SeaWiFS, MODIS, MERIS, VIIRS, Sentinel-3 and Sentinel-2 data. These products have proven effective in other parts of Europe, where they have been validated against in situ data; however, their effectiveness has not been extensively tested in Irish waters. Some examples of the C2RCC chlorophyll-a and total suspended matter products, derived from both Sentinel-2 and Sentinel-3 data for Dublin Bay, have been included as additional files. The products will be evaluated using in situ chlorophyll-a and turbidity (which is strongly related to Total Suspended Matter) data collected in Dublin Bay from 2016 to the present. The in situ data are available from the Environmental Protection Agency (EPA) and Dublin Port Company. The EPA performs several field campaigns per year to collect chlorophyll-a samples, while Dublin Port Company records turbidity data via several buoys located at the outer extent of the bay. Field chlorophyll-a and turbidity measurements are collected across all four seasons, which allows the satellite products to be tested seasonally. The results from this work will be vital in understanding the effectiveness of current state-of-the-art retrieval approaches for chlorophyll-a and turbidity from satellite data in Dublin Bay, maximizing efforts and avoiding duplicated work. They will inform on the applicability of these approaches to this environment, and on the seasonal effects which may affect their effectiveness. These results will provide a benchmark against which novel retrieval approaches for these variables can be tested in Dublin Bay in the future.
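A minimal sketch of a match-up evaluation of this kind, assuming pandas DataFrames with hypothetical datetime 'time' and 'chl' columns and a +/-3 h pairing window:

```python
import numpy as np
import pandas as pd

def matchup_stats(insitu: pd.DataFrame, sat: pd.DataFrame,
                  window: pd.Timedelta = pd.Timedelta('3h')) -> dict:
    """Pair each in-situ sample with the nearest-in-time satellite value
    within +/- window, then compute bias, RMSE and correlation.
    Both frames are assumed to hold a datetime 'time' and a 'chl' column."""
    merged = pd.merge_asof(insitu.sort_values('time'), sat.sort_values('time'),
                           on='time', direction='nearest', tolerance=window,
                           suffixes=('_insitu', '_sat')).dropna()
    x = merged['chl_insitu'].to_numpy()
    y = merged['chl_sat'].to_numpy()
    return {'N': len(merged),
            'bias': float(np.mean(y - x)),
            'RMSE': float(np.sqrt(np.mean((y - x) ** 2))),
            'R': float(np.corrcoef(x, y)[0, 1])}
```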
The ability to reliably map these water quality parameters allows for effective monitoring of the bay’s water quality and for the identification of problematic ecological events such as harmful algal blooms (HABs). These events can negatively impact the region’s aquatic wildlife and aquaculture, and pose a risk to human health at the bay’s bathing waters. In essence, the ability to effectively monitor Dublin Bay’s water quality remotely will help to inform the successful management of these waters and environmental policy decision-making. This is why it is of vital importance to validate the current remote sensing retrieval approaches for these optically active constituents in Dublin Bay.
References
O'Higgins, T.G. & Wilson, J.G. (2005) "Impact of the river Liffey discharge on nutrient and chlorophyll concentrations in the Liffey estuary and Dublin Bay (Irish Sea)", Estuarine, coastal and shelf science, 64(2), 323-334.
Trodd W., O’Boyle S., Barry J., Bradley C., Craig M., Deakin J., Kennedy B., Larkin J., Maher P., McDermott G., McDonnell N., McGinn R., Plant C., Smith J., Stephens A., Tierney D., Wilkes R., McGovern E. (2021) “Water Quality in 2020: An Indicators Report”, EPA.
CSO (2021) “Population and Migration Estimates, April 2021”, [online], 31 August 2021. Available at: https://www.cso.ie/en/releasesandpublications/ep/p-pme/populationandmigrationestimatesapril2021/ (accessed 26 November 2021).
Bio-optical models are usually developed for and applied to Case-1 waters, i.e., waters in which all the optically relevant constituents covary with phytoplankton, typically open ocean waters. Semi-analytical algorithms for the retrieval of absorption and backscattering coefficients, and empirical algorithms for the retrieval of chlorophyll-a (a common input variable in models that estimate phytoplankton size classes, PSCs) are known to perform better in Case-1 waters. Most PSC models were also developed for application in oceanic waters, though some have been successfully applied to more optically complex (Case-2) waters, such as coastal and continental shelf waters. In the latter cases, the authors usually carried out regional tuning of satellite-retrieval algorithms for chlorophyll-a and PSCs. Despite such efforts, satellite-based methods for the retrieval of PSCs in Case-2 waters remain poorly studied, mainly due to the requirement of local tuning, which is often not feasible in most coastal waters owing to the lack of in situ data. Considering this challenge, in this study we discuss the possibility of aggregating global coastal-water samples to fit PSC models, with the following specific objectives: (1) to analyse the differences in pigment characteristics between Case-1 and Case-2 waters, hereafter referred to as oceanic and coastal waters; (2) to test whether oceanic waters are statistically more similar as a group than coastal waters; (3) to test whether satellite-based chlorophyll-a estimations from the Aqua MODerate resolution Imaging Spectroradiometer (MODIS) have lower uncertainties in oceanic waters than in coastal waters. We used HPLC data for 1989-2020 from the NASA SeaBASS archive for the study. Samples collected more than 50 km from land were classified as oceanic, and samples collected within 50 km of land were classified as coastal. Samples collected from depths greater than 10 m below the surface, inland station locations, and chlorophyll-a values > 1000 mg m-3 were not considered in the analysis. In addition, when there were data from the same date and geographic location, only those closest to the surface were retained, to avoid duplicates. After applying these criteria, the dataset consisted of a total of 10116 samples, with 4682 samples from coastal waters and 5434 from oceanic waters. A t-test and a Wilcoxon test were applied to the diagnostic pigments (fucoxanthin, peridinin, zeaxanthin, total chlorophyll a and b, alloxanthin, 19’-hexanoyloxyfucoxanthin and 19’-butanoyloxyfucoxanthin), normalized by chlorophyll-a, to compare coastal and oceanic waters. Cluster analysis (k-means) combined with the elbow method was applied to test whether coastal waters presented more within-cluster differences than oceanic waters. A principal component analysis was also applied to the oceanic and coastal-water subsets. In this analysis we included the concentration of divinyl chlorophyll-b, in addition to the other diagnostic pigments. Finally, daily NASA MODIS Level-3 products at 4 km resolution were compared with in situ chlorophyll-a for both subsets to test whether there is a significant difference in the performance of the satellite algorithm between the two groups. The results of the statistical tests showed a significant difference (p < 0.01) between coastal and oceanic waters for all diagnostic pigments. The k-means analysis and the elbow method indicated that the oceanic-water subset presented slightly higher within-cluster variance than the coastal-water subset.
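The comparison and clustering workflow can be sketched as follows (the arrays are random stand-ins for the pigment-to-chlorophyll ratios; scipy's rank-sum test is used here as the unpaired Wilcoxon variant):

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
fuco_coastal = rng.lognormal(size=4682)  # stand-in for fucoxanthin / chl-a, coastal subset
fuco_oceanic = rng.lognormal(size=5434)  # stand-in for the oceanic subset

t, p_t = stats.ttest_ind(fuco_coastal, fuco_oceanic, equal_var=False)
w, p_w = stats.ranksums(fuco_coastal, fuco_oceanic)  # unpaired Wilcoxon (rank-sum) variant

# Elbow method: within-cluster variance (inertia) as a function of k.
X = fuco_oceanic.reshape(-1, 1)
inertia = [KMeans(n_clusters=k, n_init=10).fit(X).inertia_ for k in range(1, 9)]
```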
The principal component analysis indicated that oceanic waters can be further divided into tropical/subtropical and polar/subpolar regions. Coastal waters did not present this pattern, suggesting that other factors might be at play, such as seasonal variability and the local characteristics of the regions where the samples were collected. MODIS estimations of chlorophyll-a performed better in oceanic waters, presenting lower errors than in coastal waters. Despite the relatively smaller within-cluster variances in the coastal-water subset, we have not found a clear pattern by which to aggregate these samples in order to fit a PSC model. Recent studies have found that adding other variables, such as sea-surface temperature, wind speed, and sea height anomalies, as inputs to PSC models can also improve their performance. Further investigation is required to include other factors in partitioning the sample clusters, to improve our insights into developing broadly applicable approaches for detecting PSCs in coastal and continental-shelf waters. However, efforts to increase in situ data are still required to advance satellite-derived PSC retrievals.
Erupting volcanoes may contribute nutrients to the ocean through lava-inflow-induced upwelling and ash deposition. Volcanic eruptions thereby cumulatively impact global ocean primary production, especially in nutrient-depleted regions. Nutrient availability is one of several key factors influencing sun-induced chlorophyll-a fluorescence (SIF), with nutrient depletion leading to elevated SIF. Disentangling the effects of nutrient availability on SIF from other factors, such as non-photochemical quenching (phytoplankton avoiding photo-damage from excess radiation), is still ongoing research.
In this study we focus on the Nishinoshima volcano in the Pacific, roughly 1000 km south of Tokyo. The volcano erupted effusively in December 2019 with lava flows and sporadic ash plumes. From June to August 2020 the volcano was in a more explosive phase and produced continuous ash plumes up to a height of 8 km. Almost coinciding with this activity, satellite observations show elevated levels of chlorophyll-a (Chl-a) in large structures around Nishinoshima, indicating phytoplankton blooms. These blooms also disappeared when the explosive activity and ash plumes calmed down. After that, higher Chl-a concentrations were only observed in the immediate surroundings of Nishinoshima Island. In October 2021 the volcano again produced ash plumes, and large Chl-a structures became visible once more.
The effusive and explosive phases of Nishinoshima's recent eruption history make it an excellent study site for volcano-biomass interactions and for the relationship between nutrient availability, Chl-a concentrations and SIF.
Using the Ocean and Land Colour Instrument (OLCI) onboard Sentinel-3, in combination with observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard Aqua and Terra, we study changes in SIF and set these in relation to changes in Chl-a and absorbed photosynthetically active radiation (PAR). To retrieve SIF estimates, we applied the Ocean Colour Fluorescence Product algorithm (OC-Fluo) to OLCI Level-1 radiances as well as to observations produced with the POLYMER atmospheric correction algorithm. This yields the fluorescence peak height (FPH) as an indicator of the amount of SIF emitted by the phytoplankton blooms.
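A generic line-height sketch of an FPH-type quantity (not the exact OC-Fluo formulation): the radiance at the OLCI fluorescence band (681.25 nm) is compared against a linear baseline interpolated between the neighbouring 665 nm and 708.75 nm bands.

```python
import numpy as np

def fluorescence_peak_height(l665, l681, l709):
    """Height of the 681.25 nm radiance above a linear baseline
    interpolated between the 665 nm and 708.75 nm bands."""
    w665, w681, w709 = 665.0, 681.25, 708.75
    baseline = l665 + (w681 - w665) / (w709 - w665) * (l709 - l665)
    return l681 - baseline

# Illustrative radiance values for a single pixel (arbitrary units).
fph = fluorescence_peak_height(np.array([1.10]), np.array([1.25]), np.array([0.95]))
```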
By combining observations of the ash plume positions and areas of ash deposition with estimates of Chl-a and SIF, we aim to analyse and understand the mechanisms behind the phytoplankton blooms and the dynamics of SIF responses to nutrient repletion and depletion from a volcanic source.
Ocean colour remote sensing is a key tool for monitoring phytoplankton in the global ocean. Phytoplankton – and chlorophyll as the proxy for phytoplankton biomass – respond rapidly to changes in their physical environment, making chlorophyll an Essential Climate Variable for monitoring the health of the ocean. For this reason, accurate retrieval of chlorophyll from satellite continues to be at the forefront of ocean-colour community objectives.
Several environmental factors can affect the accuracy of satellite chlorophyll measurements. Optically complex coastal waters present a distinct challenge, as absorption and backscattering by total suspended matter (TSM) and coloured dissolved organic matter (CDOM) interfere with the retrieval of the water-leaving signal due to phytoplankton. Great effort has been devoted to the development of atmospheric correction and ocean-colour algorithms with the goal of decreasing the uncertainty in coastal regions.
The objective of this work is to assess a variety of widely used atmospheric correction and chlorophyll algorithms for Sentinel-3A and Sentinel-3B OLCI in global coastal waters. A range of atmospheric correction processors for OLCI (the standard BAC, POLYMER, SeaDAS, C2RCC, and regional processors) will be applied to full-resolution Level-1 data, together with several chlorophyll algorithms (OC4ME, OCNN, OC5, and regional algorithms), to produce Sentinel-3A and -3B OLCI chlorophyll.
Using these datasets, we will perform a comprehensive accuracy assessment of Sentinel-3A and Sentinel-3B OLCI Level-2 chlorophyll products over a range of conditions. Satellite chlorophyll retrievals will be assessed against in situ data from April 2016 to the present. To minimise the effects of ocean variability in the inter-comparison, we will employ a robust match-up procedure that takes into account quality and temporal/spatial homogeneity issues. The performance of the two sensors will be assessed in both independent and coincident match-up analyses. Finally, we will investigate whether chlorophyll retrievals can be improved by implementing an algorithm-switching approach, in particular by applying NIR-red reflectance-based and Colour Index algorithms at high and low chlorophyll concentration ranges, respectively.
Over the past 30 years, altimetry has revolutionised our ability to monitor surface conditions and quantify changes of the world’s ice masses and their impact on sea level, water availability, and glacial risks. With two high-resolution altimeters currently active – the interferometric radar altimeter CryoSat-2 and the laser altimeter ICESat-2 – the present period offers a unique opportunity to co-exploit the observations made by the two sensors and improve the monitoring of ice height and its trends.
Recent advances in swath altimetry processing, using the interferometric synthetic aperture radar (SARIn) mode of CryoSat-2, have enabled improved spatial resolution of surface elevation. Meanwhile, ICESat-2 provides enhanced resolution compared to the previous generation thanks to its six laser beams. However, radar and laser altimeters have different intrinsic properties and behaviours. Joining and interpreting their measurements requires careful consideration of factors such as differences in the electromagnetic interaction with the surface, the impact of weather, and footprint size.
Here we use a deep neural network to combine elevation measurements acquired by ESA’s CryoSat-2, SARIn waveform parameters, NASA’s Operation IceBridge and ICESat-2 data, and surface conditions over the Greenland Ice Sheet. We explore the difference between radar and laser altimetry and its relationship with surface conditions, the impact of radar wave penetration into snow and firn, and the respective measurement uncertainties.
While neural networks have been increasingly utilised in a wide variety of academic and commercial applications, their use for correcting elevation bias within the cryosphere is novel. The modelled elevation correction will be used to generate time-dependent Digital Elevation Models. Finally, we explore the potential to map ice, snow and firn surface conditions based on the relative differences between laser and radar instruments.
CRYO2ICE is a campaign that increased the ground track intersections of the CryoSat-2 and ICESat-2 satellites over short time intervals. The orbit change of CryoSat-2 is an enabling step for applications that require coincident data, such as measuring snow depth and capturing its temporal variation. In addition to aligning the satellites, an efficient data access layer is required to identify coincident CryoSat-2 and ICESat-2 measurements.
The Cryo2Ice Coincident Data Explorer enables users to visualise the spatial intersections for a given time window and download the ESA and NASA data products. The website’s unique temporal separation functionality allows users to select intersects that are close in time (e.g. 3 hours) up to a 28-day separation, making it useful for both sea ice and land ice applications. The intuitive and reactive interface enables users to choose the area of interest by specifying a bounding box or drawing a polygon. Easy-to-use download scripts allow users to download only their intersecting data, saving a large amount of both download time and data preparation time. The provided KML file allows users to save the visualisation component of their results and view it at a later time. The robust implementation makes it possible to run large queries in batch mode and receive the results via email.
The website provides access to CryoSat-2 LRM, SAR, and SARIn mode data and ICESat-2 ATL06, ATL07, ATL10, and ATL12 products. Combined datasets for CryoSat-2, linking the LRM, SAR, and SARIn L2 products, and for ICESat-2, linking the ATL06 and ATL07 products, allow users to view and intersect these products at the same time. Predicted ground track data is available, allowing users to view and plan for future CryoSat-2 and ICESat-2 satellite passes and intersections. The portal additionally provides access to CryoTEMPO-EOLIS Point and Gridded products.
Data sets from airborne missions such as Operation IceBridge are also available, with the CryoVEx data and subsequent campaigns to follow in the future.
The website allows intersections to be computed between any pair of available datasets from different missions, as long as they coincide in time. One could intersect the ICESat-2 ATL07 product with CryoSat-2 SAR mode data, or go outside the CRYO2ICE scope and intersect airborne Operation IceBridge data with the EOLIS Point Product’s swath elevation measurements. Each dataset available on the website can also be queried in single mode, allowing users to view the data independently rather than searching for intersections.
We will demo the Cryo2Ice Coincident Data Explorer and gather feedback for additional features, capabilities and datasets. A new combined CryoSat-2 and ICESat-2 product, created to make coincident data even easier to use and download, will also be presented.
The CryoSat-2 orbit manoeuvre known as CRYO2ICE, which periodically aligns the satellite with ICESat-2, allows for unprecedented near-coincident radar and lidar observations, conducted with the primary objective of investigating snow depth from space. Snow depth estimates from space have previously been derived from passive microwave radiometers and from dual-frequency observations (Ku- and Ka-band, or laser and Ku-band). However, dual-frequency estimates have until now only been available as monthly averages at basin scale. CRYO2ICE offers the possibility of investigating along-track snow depth using observations at various wavelengths, along with an opportunity to further investigate penetration capabilities and footprint-related issues.
We examine the impact of surface roughness on the radar and laser freeboards from CryoSat-2 and ICESat-2 (CRYO2ICE observations) and on the resulting snow depth estimates. Our particular focus is on how the CryoSat-2-derived elevations respond to increased surface roughness and how this affects the snow thickness retrieval. This study will investigate parameters that describe the surface roughness relevant to ICESat-2 and CryoSat-2, and present comparisons of radar altimeter waveforms, laser surface observations and auxiliary data.
Some of the most noticeable differences between CryoSat-2 and ICESat-2 lie in their measurement configurations and sampling rates. While CryoSat-2, a synthetic aperture radar (SAR) altimeter, has a higher resolution than pulse-limited radar altimeters, the footprint in CryoSat-2's SAR mode of approximately 400 m x 1500 m is still significantly larger than that of ICESat-2. ICESat-2’s photon-counting six-beam measurement configuration allows for high-frequency sampling of the surface topography (one beam has a nominal footprint of 10 m, sampled every 0.70 m). The contrast between retrieving surface elevation in conventional ways, such as re-tracking the SAR radar waveform, and re-tracking the surface from high-density photon clouds, together with the difference in sampling rates, presents additional challenges. ICESat-2 can resolve local-scale sea ice features such as ridges, whereas CryoSat-2 observes ridges merely as surface roughness that alters the radar waveform. As such, the impact of surface roughness on the retrieved surface elevation cannot be neglected: the differing footprints can lead to along-track differences in retrieved elevation that are not caused solely by penetration at the different wavelengths.
Following the orbital alignment of CryoSat-2 with ICESat-2 in July 2020, also known as the CRYO2ICE campaign, near-simultaneous radar and lidar measurements over polar areas are available approximately every 31 hours. This enables the retrieval of snow depth on sea ice by differencing measurements from the two altimeters, albeit at either coarse spatial resolution or monthly scales. Due to the presence of land masses and intersecting waterways, however, the approach used for the Arctic Ocean cannot be applied to many areas in the Canadian Arctic, because the mixed altimetric signals may partially originate from land. This study integrates decametre-resolution Synthetic Aperture Radar (SAR) imagery (e.g., Sentinel-1 and RADARSAT Constellation Mission) with CRYO2ICE measurements to estimate near-shore winter snow depth on landfast sea ice in the Canadian Arctic.
A machine learning (ML) algorithm is developed to estimate winter snow depth on landfast sea ice in the Canadian Arctic, with the first-order assumption that rougher sea ice, signified by higher SAR backscatter, entraps thicker snow. Based on SAR data (HH and HV, or VV and VH), the algorithm learns snow depth for predefined SAR backscatter classes of landfast first-year ice (FYI) and multiyear ice (MYI) along near-coincident CRYO2ICE tracks. The ice classes intersected by CRYO2ICE tracks are used as a training set to derive an instantaneous model for estimating near-shore snow depth on sea ice of each class. The ice types within the SAR scene are obtained from operational ice charts published by the Canadian Ice Service.
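A minimal sketch of this learning step (the feature layout, values and hyperparameters below are placeholders, not the operational configuration): a random forest is trained on pixels intersected by CRYO2ICE tracks and then applied to the rest of the SAR scene.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in training data at CRYO2ICE track pixels:
# columns are sigma0_HH (dB), sigma0_HV (dB) and an ice-class label (0=FYI, 1=MYI).
X_train = np.column_stack([rng.normal(-14, 3, 500),
                           rng.normal(-22, 3, 500),
                           rng.integers(0, 2, 500)])
y_train = rng.uniform(0.05, 0.40, 500)  # stand-in CRYO2ICE snow depth (m)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Apply the instantaneous model to every remaining pixel of the SAR scene.
X_scene = np.column_stack([rng.normal(-14, 3, 10000),
                           rng.normal(-22, 3, 10000),
                           rng.integers(0, 2, 10000)])
snow_depth = model.predict(X_scene)
```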
Preliminary results indicate some correspondence between the altimetric surface heights and SAR backscatter coefficient (sigma-nought). Using our proposed method, CRYO2ICE snow depth estimates on sea ice in the Canadian Arctic are validated against near-shore in situ observations and compared with other snow depth on sea ice products throughout the winter, enabling longer-term CRYO2ICE validation.
Our SAR-ML approach can improve the temporal resolution of snow depth estimates from CRYO2ICE. In the best case, at Alert, Nunavut (82.5° N), there are about 5 cross-tracks per month, with substantially fewer at sites further south. Therefore, the CRYO2ICE estimates can be extrapolated, using the same ML modelling technique, across SAR images that encompass a sufficient number of CRYO2ICE estimates. This provides greater spatial coverage for the estimates, effectively increasing the temporal frequency for many areas. The use of alternative satellite data sources, such as SARAL/AltiKa for snow surface height and Sentinel-3 SRAL for sea ice height, is also explored to increase the number of cross-tracks within the Canadian Arctic for our study.
ESA’s Earth Explorer CryoSat-2 precisely measures changes in the thickness of marine ice floating on the polar oceans and variations in the thickness of the vast ice sheets that overlie Greenland and Antarctica. The data delivered by the CryoSat-2 mission complete the picture needed to determine and understand the role of ice in the Earth system in general and in climate change in particular. To this end, the quality of the satellite orbit, the altimeter measurements, and all required corrections have to meet the highest performance standards, not only over the ice caps and sea ice but also over the oceans. As CryoSat-2 ocean products continuously evolve, they need to be quality controlled and thoroughly validated via science-oriented diagnostics based on multi-platform in situ data, models and other (altimeter) satellite missions. The rationale for this is based on the new CryoSat-2 scientific roadmap, which specifically addresses the key technical and scientific challenges related to the long-term monitoring of sea-level and ocean circulation changes in the context of global warming. This also involves opportunities for synergy with missions like ICESat-2 and the upcoming Copernicus CRISTAL mission.
In this context, the objective of our research is the long-term monitoring of the level-2 CryoSat-2 Geophysical Ocean Product (GOP), by evaluating the stability of the measurement system and identifying potential biases, trends and drifts over the ocean, through calibration and comparisons with concurrent ocean altimeter data, supported by the Radar Altimeter Database System (RADS). Independently, we also address this by comparing the GOP geophysical parameters with external models and in situ measurements such as the ones from selected sets of tide gauges. The very precise determination of the orbital height is part of the research activity but dealt with in a separate paper.
For our activity we persistently monitor, analyze and identify systematic errors in the observations, estimating (trends in) biases in range, significant wave height, backscatter, wind speed and sea state bias, as well as timing biases. An important finding is that GOP CryoSat-2 Baseline-C data exhibit a range bias of -2.82 cm and no apparent drift with respect to the (Jason) altimeter reference missions (< 0.1 mm/yr). The comparison with tide gauges is based on monthly averaged sea level from the PSMSL archive, from which we conclude that the GOP data have a correlation of better than 0.84 with a selected set of 185 PSMSL tide gauges, a mean standard deviation better than 5.8 cm, and an average drift of -0.19 mm/yr, which translates to an overall drift of +0.11 mm/yr when a global GIA correction of +0.3 mm/yr is taken into account. We conclude that the CryoSat-2 GOP represents a (long-term) stable measurement system.
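The drift estimate reduces to fitting a linear trend to the monthly altimeter-minus-tide-gauge differences; a sketch with synthetic numbers (the GIA value is the global correction quoted above):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120) / 12.0  # decimal years over a 10-year record
diff_mm = rng.normal(scale=58.0, size=120)  # stand-in altimeter-minus-gauge residuals (mm)

drift_mm_per_yr, offset = np.polyfit(t, diff_mm, 1)  # slope = relative drift
drift_gia_corrected = drift_mm_per_yr + 0.3          # add the global GIA correction (mm/yr)
```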
Mass changes of the polar ice sheets and their contribution to global mean sea level rise constitute an essential climate variable and are critical for adaptation planning. They are also essential for understanding the Earth system in a warming climate. To ensure the long-term continuation of ice elevation and change records, ESA proposed the Copernicus Polar Ice and Snow Topography Altimeter mission (CRISTAL), to be launched in 2027. CRISTAL will, for the first time, carry a dual-frequency altimeter in Ku- and Ka-band to monitor changes in the height of ice sheets and glaciers, as well as the thickness of sea ice and of the snow on top of it. The CRISTAL altimeter will have interferometric capabilities at Ku-band and non-interferometric capabilities at Ka-band.
With the upcoming CRISTAL mission it is therefore essential to examine sensor-specific characteristics and their impact on elevation change records beforehand. To tackle these questions we will determine surface elevation change rates, focusing on the last decade of satellite altimetry observations. Currently, three radar altimeter missions (CryoSat-2, Sentinel-3 and SARAL/AltiKa) and one laser altimeter mission (ICESat-2) are in orbit at the same time, each with sensor-specific characteristics. While CryoSat-2 and Sentinel-3 measure in Ku-band, SARAL/AltiKa is equipped with a Ka-band altimeter. Furthermore, Sentinel-3 offers products in SAR as well as PLRM, CryoSat-2 in LRM and SAR/SARIn, and SARAL in LRM only.
We aim to make use of all missions and to inter-compare elevation change rates and their accuracies derived from a consistent radar processing scheme, starting from the Level-1B waveform product. For ICESat-2 we will use the ATL06 product. We will present regional differences in monthly elevation change derived from a number of different processing schemes (retracking, slope and trend corrections) for each individual mission.
In addition, we will demonstrate the impact on volume change records if the area south of 81.5°S (Sentinel-3) is not observed and why a continuation of observations up to 88°S (CryoSat-2) as planned for CRISTAL is essential for decades to come.
Arctic sea ice thickness measurements from CryoSat-2, and more generally from radar altimeters, are typically limited to the winter months. In the summer months the ice pack is continuously evolving, causing varied surface scattering conditions throughout the season. In particular, meltwater ponds at the sea ice surface produce strong specular reflections in a similar way to leads, making it challenging to distinguish between the two surface types. Recently, progress has been made in summer sea ice lead detection in CryoSat-2 data, using deep learning and local variations of parameters. For the first time this has enabled pan-Arctic summer radar freeboard measurements from CryoSat-2. However, the presence of melt ponds on the ice surface biases the freeboard measurement. Over melt-pond-covered ice, the principal scattering horizon is generally referenced to the surface of the more specular melt ponds. As melt ponds tend to lie below the mean ice floe surface, a bias is added to the range measurement over ice floes; this bias is larger over rougher sea ice, so that over summer sea ice floes the range is biased high, resulting in an underestimation of freeboard. Additionally, radar returns from leads may comprise reflections from ponds located closer to the nadir point than a nearby lead, causing errors in the retrieved sea surface elevation. As part of the Cryo-TEMPO project, we leverage the CRYO2ICE campaign and use crossovers of CryoSat-2 and ICESat-2 over the summer months in the Arctic Ocean. Towards the middle and end of the summer, the snow has melted from the ice surface, and any difference in elevation measurements will likely be due to biases in the CryoSat-2 data relating to surface melt ponds. We therefore quantify this bias and compare the height and reflectance statistics of the two sensors at locations identified by CryoSat-2 as either ice floes or leads at crossover locations. We then illustrate how this can be used in the conversion of freeboard to ice thickness.
The 2010s saw the emergence of delay-Doppler altimetry, and the technology is now operationally deployed in most current and upcoming altimeter missions. In the CryoSat-2, Sentinel-3 and Sentinel-6 ground segments, so-called unfocused Synthetic Aperture Radar (SAR) processing is performed over a limited number of successive pulses (64-pulse bursts of a few milliseconds in length). Compared to conventional altimeters operating in Low Resolution Mode (LRM), the along-track footprint is reduced from several kilometers down to ~300 meters. Thanks to this enhanced resolution, substantial improvements have been reported in the performance obtained over the polar ice sheets. In particular, the footprint reduction provides access to finer along-track scales of the surface topography. Moreover, errors linked to the relocation of measurements from nadir to the Point Of Closest Approach (POCA) are reduced.
More recently in nadir altimetry, coherent processing has been extended to the whole illumination time of the surface, as inherited from SAR imagery. The concept was introduced as Fully-Focused SAR (FF-SAR) processing by Egido and Smith (2017). It brings a further reduction of the along-track resolution, down to the theoretical limit of approximately 0.5 meters for Sentinel-3. Over diffuse surfaces, speckle noise is predominant at such a small spatial resolution. Consecutive FF-SAR waveforms are therefore averaged to reduce its impact on the measurement precision, with the drawback of a coarser effective spatial resolution.
In the frame of a project funded by ESTEC, the Sentinel-3 FF-SAR performances were assessed over different surfaces. In this poster we present a summary of the results obtained over the Antarctic ice sheet. Three main diagnoses were carried out. Firstly, the precision of the topography derived from FF-SAR measurements was assessed over Lake Vostok. Secondly, the FF-SAR capability to retrieve fine topographic scales was evaluated over megadune fields. Thirdly, the accuracy and precision of the FF-SAR topography estimates were assessed at the scale of the whole Antarctic ice sheet, by comparison to ICESat-2. More than 100 000 nearly coincident (in space and time) observations between Sentinel-3 and ICESat-2 ATL06 measurements were found and examined. For all diagnoses, the FF-SAR was evaluated at different spatial resolutions, and its performances are compared to those obtained in PLRM and unfocused SAR. The spatial resolution that best balances measurement precision against resolution is analysed and discussed. This study provides a first robust indication of the FF-SAR potential over ice sheets. The new perspectives brought by FF-SAR for ice sheet monitoring are analysed and discussed.
Launched in 2010 by the European Space Agency (ESA), CryoSat-2 is the first satellite mission carrying a pulse-limited radar altimeter with Synthetic Aperture Radar (SAR) capabilities. As implemented in the CryoSat-2 Payload Data Ground Segment (PDGS), the unfocused SAR processing dramatically reduces the along-track footprint, from several kilometers to ~300 m, compared to conventional altimeters operating in Low Resolution Mode. In addition, when activated, the second antenna of CryoSat-2 enables SAR Interferometric (SARIn) mode processing to geolocate the radar returns within the SAR mode footprint over sloping surfaces. In this study, we present a new level-2 processing chain dedicated to CryoSat-2 SARIn measurements acquired over ice sheets. Compared to the ESA ground segment processor, it includes revised methods to detect waveform leading edges and to perform retracking at the Point Of Closest Approach (POCA).
CryoSat-2 SARIn mode surface height measurements retrieved with the newly developed processing chain are compared to ICESat-2 surface height measurements extracted from the ATL06 product. About 250,000 nearly coincident (in space and time) observations are identified and examined over the Antarctic ice sheet over a one-year period. On average, the median elevation bias between the two missions is about −18 cm, with CryoSat-2 underestimating the surface topography compared to ICESat-2. The Median Absolute Deviation (MAD) between CryoSat-2 and ICESat-2 elevation estimates is 46.5 cm. These performances were compared to those obtained with CryoSat-2 SARIn mode elevations from the ESA PDGS level-2 products (ICE Baseline-D processor). With the new processing, the MAD between CryoSat-2 and ICESat-2 elevation estimates is significantly reduced, by about 42%. The improvement is more substantial over areas closer to the coast, where the topography is complex and the surface slope increases. In terms of perspectives, the impacts of surface roughness and volume scattering on the SARIn mode waveforms have to be further investigated. This is crucial to understand the geographical variations of the remaining elevation bias between CryoSat-2 and ICESat-2, and to continue enhancing the SARIn mode level-2 processing.
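For reference, the two headline metrics can be computed as in the following sketch (standard definitions; not code from the processing chain itself):

```python
import numpy as np

def median_bias_and_mad(cs2_elev, is2_elev):
    """Median elevation bias and Median Absolute Deviation of the differences."""
    d = np.asarray(cs2_elev) - np.asarray(is2_elev)
    bias = np.median(d)
    mad = np.median(np.abs(d - bias))
    return bias, mad
```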
The CRYO2ICE orbit reconfiguration, performed in summer 2020, aligned the European Space Agency’s CryoSat-2 satellite with the National Aeronautics and Space Administration’s Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), providing periodic near-coincident, multi-frequency altimetry data over the polar regions. In the first ~1.5 years of CRYO2ICE there have been more than 300 expected overlaps over Arctic sea ice, providing ample opportunity for intercomparison and combined-altimetry studies. Despite the abundant coincident data over Arctic sea ice, far fewer overlaps exist over Antarctic sea ice. While this is somewhat expected (the CRYO2ICE orbit configuration was optimized to favor Northern Hemisphere coincidence), the maneuver has substantially reduced the overlaps occurring over Antarctic sea ice and has limited the opportunities for near-coincident studies there.
Here, we provide an overview of the near-coincident ICESat-2 and CryoSat-2 data collected over Antarctic sea ice both before and after the CRYO2ICE orbital maneuver. By using multiple coincidence-finding procedures, we show how different definitions of an overlap can drastically impact the number and lengths of the overlaps. This presentation will utilize the available near-coincident ICESat-2 and CryoSat-2 data to compare Antarctic sea ice elevations and freeboards retrieved from the two sensors, as well as assess the CryoSat-2 physical retracking method put forth in Fons and Kurtz (2019) and Fons et al. (2021). This method utilizes a waveform model and a least-squares optimization approach to retrieve the air-snow interface elevation and snow freeboard over Antarctic sea ice. By considering many overlapping orbits, we are able to aggregate and statistically assess the CryoSat-2 retrievals across different regions and seasons.
Additionally, this presentation explores the impacts of slush near the snow-ice interface. Slush is commonly found on snow-covered Antarctic sea ice and can influence the dominant scattering horizon of CryoSat-2 returns. Here, we incorporate a simple slush layer into our physical waveform model in order to better represent snow-covered Antarctic sea ice under varying conditions.
References:
Fons, S. and Kurtz, N. (2019): Retrieval of snow freeboard of Antarctic sea ice using waveform fitting of CryoSat-2 returns, The Cryosphere, 13, 861-878, https://doi.org/10.5194/tc-13-861-2019.
Fons, S., N. Kurtz, M. Bagnardi, A. Petty, R. Tilling (2021): Assessing CryoSat-2 Antarctic snow freeboard retrievals using data from ICESat-2, Earth and Space Science, 8, 7, https://doi.org/10.1029/2021EA001728.
Glaciers are currently the largest contributor to sea level rise after ocean thermal expansion, contributing ∼30% to the sea level budget. Global monitoring of these regions remains a challenging task, since global estimates rely on a variety of observations and models to achieve the required spatial and temporal coverage, and significant differences remain between current estimates. Here we report the first application of a novel approach to retrieve spatially resolved elevation and mass change from radar altimetry over entire mountain glacier areas. We apply interferometric swath altimetry to CryoSat-2 data acquired between 2010 and 2019 over High Mountain Asia (HMA) and the Gulf of Alaska (GoA). In addition, we exploit CryoSat's monthly temporal repeat to reveal seasonal and multiannual variations in glacier thinning rates at unprecedented spatial detail. We find that during this period, HMA and GoA lost on average 28.0 ± 3.0 Gt yr−1 (−0.29 ± 0.03 m w.e. yr−1) and 76.3 ± 5.7 Gt yr−1 (−0.89 ± 0.07 m w.e. yr−1), respectively, corresponding to contributions to sea level rise of 0.078 ± 0.008 mm yr−1 (0.051 ± 0.006 mm yr−1 from exorheic basins) and 0.211 ± 0.016 mm yr−1. The cumulative loss during the 9-year period is equivalent to 4.2% and 4.3% of the ice volume of HMA and GoA, respectively. Glacier thinning is ubiquitous except in the Karakoram–Kunlun region, which experiences stable or slightly positive mass balance. In the GoA region, the intensity of thinning varies spatially and temporally, with an acceleration of mass loss from −0.06 ± 0.33 to −1.1 ± 0.06 m yr−1 from 2013 onwards, which correlates with the strength of the Pacific Decadal Oscillation. In HMA, ice loss is sustained until 2015-2016, with a slight decrease in mass loss from 2016 and some evidence of local mass gain from 2016-2017 onwards.
The existence of supraglacial lakes influences debris-covered glaciers in two ways: the absorption of solar radiation in the water leads to higher ice ablation, and water draining through the glacier to its bed leads to higher velocities. Rising air temperatures and changes in precipitation patterns are provoking an increase in supraglacial lakes, in both number and total area. However, the seasonal evolution of supraglacial lakes, and thus their potential for influencing mass balance and ice dynamics, has not yet been sufficiently analyzed. We present a summer time series of supraglacial lake evolution on Baltoro Glacier in the Karakoram from 2016 to 2020. The dense time series is enabled by a multi-sensor and multi-temporal approach based on optical (Sentinel-2 and PlanetScope) and Synthetic Aperture Radar (SAR; Sentinel-1 and TerraSAR-X) remote sensing data. The mapping of the seasonal lake evolution uses a semi-automatic approach, which includes a random forest classifier applied separately to each sensor. A combination of linear regression and the Hausdorff distance is used to harmonize the SAR- and optical-derived lake areas, producing a consistent and internally robust time series. Seasonal variations in lake area are linked to the Standardized Precipitation Index (SPI) and Standardized Temperature Index (STI), based on air temperature and precipitation data from the ERA5-Land climate reanalysis. The largest aggregated lake area was found in 2018 with 5.783 km2, followed by 2019 with 4.703 km2 and 2020 with 4.606 km2; 2016 and 2017 showed the smallest areas, with 3.606 km2 and 3.653 km2, respectively. Our data suggest that warmer spring seasons (April-May) with higher precipitation rates lead to an increased formation of supraglacial lakes. The time series decomposition shows a linear increase in lake area of 11.12 ± 9.57% per year. Although the five-year observation period is too short to derive a significant trend, the tendency towards a possible increase in supraglacial lake area is in line with the pronounced positive anomalies of the SPI and STI during the observation period.
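A sketch of the harmonization step (the outlines, coordinates and areas below are placeholders): the symmetric Hausdorff distance checks that SAR- and optical-derived delineations of the same lake agree, and a linear regression then maps SAR-derived areas onto optical-equivalent areas.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Placeholder outlines (x, y in metres) of one lake from the two sensor families.
sar_outline = np.array([[0.0, 0.0], [1.0, 0.1], [1.9, 1.0]])
opt_outline = np.array([[0.1, 0.0], [1.1, 0.2], [2.0, 1.1]])

# Symmetric Hausdorff distance: how far apart the two delineations are at worst.
h = max(directed_hausdorff(sar_outline, opt_outline)[0],
        directed_hausdorff(opt_outline, sar_outline)[0])

# Linear regression mapping SAR-derived areas onto optical-equivalent areas (km2).
sar_areas = np.array([0.12, 0.40, 0.95])
opt_areas = np.array([0.10, 0.38, 0.90])
slope, intercept = np.polyfit(sar_areas, opt_areas, 1)
```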
In response to global atmospheric warming, glaciers have experienced a worldwide retreat and severe mass loss at the centennial scale. To estimate glacier volume and mass changes, the geodetic method is widely used from local to global scales. While this method is quite simple, as it consists of differencing two or more multi-temporal digital elevation models (DEMs), there are several scientific challenges to consider, especially when the DEMs are derived from satellite optical imagery and radar data. Optical images suffer mainly from cloud cover, which results in gaps in the DEM. This can be mitigated by using radar data, but radar penetration might bias the estimate of glacier elevation changes, especially in accumulation areas. In addition, optical and radar DEMs need to be co-registered to minimize systematic biases. Furthermore, the seasonal correction between the surveyed DEMs, as well as their difference in original spatial resolution, must be considered.
To address these issues and to provide an overview of processing strategies for assessing glacier volume changes with geodetic methods, the Regional Assessments of Glacier Mass Change working group (RAGMAC; https://cryosphericsciences.org/activities/wg-ragmac/) launched an intercomparison experiment in October 2021. Participants in the experiment will estimate volume changes of selected glaciers from the provided optical and radar DEMs (ASTER and TanDEM-X, respectively) by addressing the issues of DEM co-registration, outlier filtering, gap filling, radar penetration depth correction, and seasonal correction.
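To make the core of the geodetic method concrete, here is a minimal sketch of DEM differencing with a crude outlier filter and a simple gap-handling strategy; the arrays, mask, pixel size and threshold are illustrative placeholders, not RAGMAC prescriptions:

import numpy as np

# Minimal sketch of geodetic volume change from two co-registered DEMs
# (synthetic arrays; real DEMs would be read from raster files).
dem_t0 = np.random.default_rng(0).normal(5000, 50, (100, 100))  # elevation, m
dem_t1 = dem_t0 - 2.0                                           # uniform 2 m thinning
glacier_mask = np.ones_like(dem_t0, dtype=bool)                 # glacier outline
pixel_area = 30.0 * 30.0                                        # m^2 per pixel

dh = dem_t1 - dem_t0
dh[np.abs(dh) > 150] = np.nan            # crude outlier filter (threshold arbitrary)
mean_dh = np.nanmean(dh[glacier_mask])   # simple gap handling: ignore NaN pixels
volume_change = mean_dh * glacier_mask.sum() * pixel_area  # m^3
print(volume_change / 1e9, "km^3")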
We present the results of the volume change estimation provided by the participants for the selected glaciers. Furthermore, where available, we compare the spatial results with airborne geodetic estimates, as well as with calibrated glaciological mass-balance observations from reference glaciers. We also aim to compile and present good practices for estimating the uncertainties of individual sources and their propagation to overall error budgets.
Earth’s cryosphere is undergoing extreme changes in response to a rapidly evolving climate. Societies are coming to the slow realization that they are now facing an altered future that will require coordinated mitigation, adaptation and retreat. Coastal planners and water resource managers are looking to the community of glaciologists to provide robust projections of ice sheet and glacier change for the coming century. To do this, glaciologists require the most detailed, accurate and uniform record of land ice response to changes in environmental forcing over the observational record. They need this to refine the still poorly known empirical relations that govern glacier response and to calibrate and validate numerical models of glacier response. Here we discuss the NASA MEaSUREs ITS_LIVE project, which seeks to accelerate these efforts by providing comprehensive, unified records of land ice surface velocity and elevation over the full satellite record. The project ingests data from 16 satellites (Landsat 4/5/7/8/9, Sentinel-1A/B, Sentinel-2A/B, Geosat, ERS-1/2, Envisat, CryoSat-2, ICESat 1/2) to generate homogeneous records of change, with consistent and precise processing, file formats and projections, all on a compatible nested grid. The project processes and delivers all of the data using cloud computing and storage infrastructure, allowing seamless mass processing and delivery. With 36 years (and counting) of velocity records and 36 years of harmonized elevation time series, ITS_LIVE provides both context for, and timely records of, ongoing glacier and ice sheet change.
During glacier flow instabilities, ice velocities strongly increase over a limited period of time, typically less than a decade long. The processes leading to glacier flow instabilities are still poorly understood, and the link between glacier flow instabilities and the continued strong increase of average temperature in the Arctic is unclear. The increasingly available dense time series of satellite data support the investigation of dynamic glacier changes with unprecedented spatial and temporal detail. For a large number of glaciers in the Eastern Arctic (Novaya Zemlya, Franz-Josef-Land, Severnaya Zemlya and Svalbard), we computed dense time series of ice surface velocity from Sentinel-1 offset tracking since 2015 and analysed short-term fluctuations in comparison to mean annual velocity. We found that for two glaciers on Novaya Zemlya, three glaciers on Severnaya Zemlya and more than ten glaciers on Svalbard, long-term trends in the Sentinel-1 ice surface velocity time series dominate over seasonal variability. Complementing the Sentinel-1 estimates with results obtained from Sentinel-2, Landsat-8 and Radarsat-2, we will discuss the peculiar characteristics of the ice surface velocity time series of dynamically unstable glaciers. In many cases, the typical Svalbard glacier surge cycle is observed, starting with a years-long period of steady acceleration overlain by seasonal velocity variations, followed by a months-long period of rapid acceleration, and a very gradual end of the fast-flow phase with velocity eventually decreasing again over a years-long period. In other cases, however, a years-long period of fast flow with strong seasonal variations, or repeated fast-flow periods of a few years, are detected. For an improved glacier monitoring strategy in the Eastern Arctic using Sentinel-1 SAR data, 6-day repeat cycles, as currently in place only over Svalbard, are better suited to retrieve high-quality data than the current 12-day repeat cycles over the Russian Arctic: the resulting time series resolve more of the details needed to study dynamic instabilities, and the effects of ice and snow melting are less severe. In addition, acquisition gaps such as those that occurred in 2018 and 2019 over Franz-Josef-Land should be avoided.
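One simple way to separate a long-term trend from seasonal variability, as the criterion above requires, is to fit a line plus an annual sinusoid to the velocity time series by least squares. This is a hedged sketch with synthetic data, not the authors' exact procedure:

import numpy as np

# Separate trend and seasonality in an ice-velocity time series by fitting
# v(t) = a + b*t + c*sin(2*pi*t) + d*cos(2*pi*t). Synthetic monthly data
# stand in for Sentinel-1 offset-tracking results.
t = np.arange(0, 6, 1 / 12)  # years since 2015
v = 300 + 40 * t + 25 * np.sin(2 * np.pi * t) \
    + np.random.default_rng(1).normal(0, 5, t.size)

A = np.column_stack([np.ones_like(t), t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, v, rcond=None)
trend, seasonal_amp = coef[1], np.hypot(coef[2], coef[3])
print(f"trend: {trend:.1f} (m/yr)/yr, seasonal amplitude: {seasonal_amp:.1f} m/yr")
# A glacier would be flagged as potentially unstable when the fitted trend
# dominates over the seasonal amplitude, in the spirit of the abstract.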
In the Karakoram, dozens of glacier surges have occurred in the past two decades, making the region one of the global hotspots of surge activity. Detailed analyses of dense time series of available optical and radar satellite images revealed a wide range of surge behaviours in this region: from slow advances characterized by slow ice flow over periods longer than a decade to short, pulse-like advances with high velocities over one or two years. Quite often, surging glaciers can be distinguished from glaciers advancing in response to climate forcing by analysing their elevation change pattern.
In this study, we present an analysis of three glaciers currently surging in the central Karakoram: North Chongtar, South Chongtar and an unnamed glacier referred to as NN9. All three glaciers flow towards the same region but differ strongly in surge behaviour. A full suite of optical and SAR satellite sensors (including Sentinel-1 and -2) and digital elevation models (DEMs) are used to (a) obtain comprehensive information about the evolution of the surges between 2000 and 2021 and (b) compare and evaluate the capabilities and limitations of the different satellite sensors for monitoring relatively small glaciers in complex terrain.
The analysis for (a) reveals a contrasting evolution of advance rates and flow velocities for the three glaciers, while the elevation change patterns are broadly similar. South Chongtar Glacier shows advance rates of more than 10 km y−1, velocities up to 30 m d−1 and surface elevations raised by 200 m. In comparison, the three times smaller North Chongtar Glacier shows a slow and almost linear increase in advance rates (up to 500 m y−1), flow velocities below 1 m d−1 and elevation increases of up to 100 m. The even smaller glacier NN9 changed from a slow advance to a full surge within a year, reaching advance rates higher than 1 km y−1, but has shown the typical surface lowering higher up only recently. These observations indicate that, despite similar climatic settings, different surge mechanisms are at play in this region, and that a switch between the different surging mechanisms can occur in the course of a single surge. Details of (b), the sensor performance and inter-comparison, are presented in a separate study.
Glacier surges are episodes of massively enhanced ice flow speeds and glacier advances. There is a high concentration of such surging glaciers in the Karakoram. Due to difficult access, surging glaciers in the Karakoram are mostly studied using spaceborne remote sensing techniques, typically by analysing changes in surface velocity and elevation. Indeed, velocities derived from feature tracking in repeat satellite images can quantify the temporal development of ice flow, while elevation changes from two or more multi-temporal DEMs can detect the mass redistribution pattern of a surge.
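The feature tracking principle mentioned above can be illustrated with a minimal normalized cross-correlation matcher: a template from the first image is searched for in the second image, and the offset of the correlation peak gives the displacement. Production tools add subpixel refinement and quality filtering; this sketch uses synthetic imagery:

import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track(img1, img2, row, col, tpl=16, search=8):
    """Find the pixel offset of a template from img1 inside img2."""
    template = img1[row:row + tpl, col:col + tpl]
    best, best_dr, best_dc = -1.0, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            patch = img2[row + dr:row + dr + tpl, col + dc:col + dc + tpl]
            score = ncc(template, patch)
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc, best  # offset (pixels) and correlation peak

rng = np.random.default_rng(2)
img1 = rng.random((64, 64))
img2 = np.roll(img1, shift=(3, -2), axis=(0, 1))  # simulate 3 px down, 2 px left
print(track(img1, img2, 24, 24))                   # -> (3, -2, ~1.0)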
In this study, we compare multiple optical and radar sensors to estimate the temporal evolution of three glacier surges in the central Karakoram by analysing ice flow velocity and elevation changes. The glaciers under study are challenging to analyse due to their small size (tongue widths of 250 to 800 m), steep lateral hillslopes, and differing surge behaviour. For the velocity analysis, we used multi-temporal imagery from Landsat, TanDEM-X (TDX), Sentinel-1 and -2, and Planet. To quantify glacier elevation changes, SRTM, SPOT and High Mountain Asia (HMA) DEMs are analysed and compared, along with the elevation change products derived from ASTER imagery (Hugonnet et al., 2021) and ICESat-2 elevation values.
Our sensor inter-comparison revealed very good agreement between the velocity data derived from TDX, Landsat 8, Sentinel-2 and Planet for the largest of the three study glaciers. In contrast, Landsat proved inadequate for precisely tracking displacements on the two smaller glaciers. In Sentinel-1 intensity data it was possible to detect an increase in glacier crevassing associated with the onset of the surges and to follow the surges in an animation of repeat intensity images. For velocity retrieval, however, Sentinel-1 showed poor performance even when testing different matching window sizes, owing to the small glacier widths and the strong ice surface disruptions associated with the surges.
DEMs from SPOT and SRTM required co-registration, which was performed over stable terrain using the HMA DEM from 2015 as a reference. The 2010 SPOT5 DEM (Gardelle et al., 2013) and the 2015 SPOT6 DEM (Berthier and Brun, 2019) suffered from strong artefacts on steep slopes, whereas a DEM we produced from 2020 SPOT6 data is of impressive quality. The DEM time series from ASTER imagery failed to detect both the negative and positive elevation changes of the smaller glaciers, but for the largest glacier the elevation changes before its surge agree with the other DEM differences. ICESat-2 only provides elevation profiles at varying locations, but its higher temporal resolution yielded additional information on how the glacier elevation changed between the scarce DEM time stamps.
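Co-registration on stable terrain, as applied above to the SPOT and SRTM DEMs, can be reduced to its simplest ingredient: removing the elevation bias estimated over ice-free pixels. Full workflows (e.g., the Nuth and Kääb approach) also solve for horizontal shifts; the following sketch only illustrates the vertical step:

import numpy as np

def coregister_vertical(dem, reference, stable_mask):
    """Remove the median elevation bias over stable (ice-free) terrain."""
    bias = np.nanmedian((dem - reference)[stable_mask])
    return dem - bias

rng = np.random.default_rng(3)
reference = rng.normal(4500, 100, (50, 50))
dem = reference + 3.5 + rng.normal(0, 0.5, reference.shape)  # 3.5 m bias + noise
stable = np.ones(reference.shape, dtype=bool)                # ice-free pixels
aligned = coregister_vertical(dem, reference, stable)
print(np.nanmedian(aligned - reference))  # ~0 after correction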
Himalayan glaciers are significant both regionally and globally, and their vulnerability to various aspects of climate change motivates cryosphere research. The Himalayan glaciers feed numerous perennial rivers of the Asian continent and are therefore profoundly important from a socio-economic point of view. Continuous monitoring of the Himalayan landscape is important for conserving natural resources and achieving sustainable development; from a conservation perspective, the Himalayan cryosphere is the most sensitive and critical part of the Indian Himalayas. In this study, we extract the Snow Cover Area (SCA) and map the topography of the Khangri (Patliputra) glacier in the Tawang valley, near Gorichain mountain in the Eastern Himalayan Region (EHR). The variation in SCA over the last three years was estimated from Sentinel-2B satellite data (10 m resolution) for 2017, 2018 and 2019; glacier velocity was estimated from Sentinel-1 data for 2018 and 2019; and a differential global positioning system (DGPS) expedition was carried out in 2019 to visualize the changes that occurred. The results show that the snow line of the glacier lies at 5160 m AMSL. The average SCA of the Khangri glacier for the years 2017, 2018 and 2019 was estimated to be 78.22 km2, 122.84 km2 and 100.41 km2, respectively, and ranged between 24.37 and 174.22 km2 throughout the study period. Based on the GPS/DGPS measurements in 2018 and 2019, the glacier is receding at an average rate of 6.5 ± 3 m. The glacier velocity derived from SAR remote sensing data varied from 0.032 to 0.62 m/day in 2018 and from 0.0037 to 1.29 m/day in 2019, with an average velocity of 0.099 m/day. For a better understanding of glacier dynamics in the Indian Himalayan Region (IHR), long-term continuous monitoring of glacier parameters in this glaciated area is therefore essential.
Keywords: Cryosphere, Velocity, IHR, SCA
Several feature tracking methods for glacier velocimetry have been proposed in recent years, based on different kinds of satellite images, both optical and Synthetic Aperture Radar (SAR), given the increasing interest in glaciers as one of the most significant indicators of climate change. Monitoring their velocity is thus important to gain a better understanding of their dynamics and evolution in space and time. This is possible with pairs or series of satellite images, which can provide information about glacier movement over a certain time span, velocity per year and direction of movement. In the present work, we compare different feature tracking modules from diverse periods, from the nineties to the most recent years, based on various methodologies and software, e.g., Python codes, the SAGA GIS software and the Sentinel Application Platform (SNAP). To these existing modules, we add, for testing and comparison, a new Machine Learning-based method, which couples rigid image registration and correlation methods in order to disentangle the real glacier movement from artifacts caused by the image acquisition and to provide an accurate estimation of glacier velocity and direction. The evaluation of this new model and its cross-comparison with the previous methodologies is made possible by validation against available field data. In fact, we take advantage of existing field measurements of glacier velocity and displacement direction, which provide long time series from repeated GPS measurements in at least two regions of the Earth, i.e., polar and Alpine regions. In detail, we use field data from the David Glacier and its Drygalski Ice Tongue in Victoria Land, East Antarctica, sampled by the Italian National Antarctic Research Program (PNRA), and from the Miage Glacier in the Aosta Valley, Western Italian Alps, acquired by the University of Milan. Regarding the satellite data, we use optical images from the Landsat family and Sentinel-2 satellites, with 15 m (panchromatic band) and 10 m spatial resolution, respectively, and SAR images from Sentinel-1 at 20 m spatial resolution. This field comparison allows us to evaluate existing feature tracking modules in different regions of our planet (with correspondingly varied characteristics) and for different glacier dimensions, and to provide and validate a new method based on Machine Learning techniques.
The Arctic region is recognized as the largest short-term contributor to sea-level rise and one of the fastest-warming areas on Earth [1]. Changes in Arctic glaciers can therefore be considered visible evidence of climate change, and the spatiotemporal changes of Arctic ice masses have been at the center of attention of scientific communities in recent years. The only method to obtain full spatiotemporal coverage of the Arctic glaciers is the use of satellites. Independence of daylight and weather conditions has made Synthetic Aperture Radar (SAR) the most suitable tool in Arctic areas. Monitoring fluctuations in glacier facies, including firn, Superimposed Ice (SI) and Glacier Ice (GI), provides an opportunity for tracking climate change [2]. Several studies have shown that areas of firn, SI and GI on Svalbard glaciers are detectable with SAR datasets [3,4]. Nowadays, thanks to the availability of Sentinel-1 data since 2014, dense SAR satellite time series are available to map glacier fluctuations over time.
This study presents a workflow for change detection of Arctic glacier facies using the Sentinel-1 Ground Range Detected (GRD) dataset. The analysis utilizes dense time series of Sentinel-1 A/B GRD data from both ascending and descending orbits and in both HH/HV and VV/VH polarizations over Kongsvegen glacier, Svalbard (latitudes 78° 43ꞌ to 78° 52ꞌ N, longitudes 12° 35ꞌ to 13° 30ꞌ E), for the period 2017 to 2020. In addition, the analysis utilizes ground truth data, retrieved from a network of C-band ground penetrating radar (GPR) profiles oriented parallel to the glacier centerline, to label segmented images and to calculate classification accuracies. A glacier can be divided primarily into accumulation and ablation zones, separated by the equilibrium line [5], and there are indications that the boundary between the firn and SI zones is correlated with the equilibrium line [5]. Therefore, we exploit the strength of the classification results for yearly firn area variation to obtain the expected variation of the boundary between the firn area and the SI zone.
The processing includes three main steps: pre-processing, classification, and post-classification. Sentinel-1 data were first pre-processed to derive geocoded backscatter intensity at each time point. By inspecting weather records and backscatter profiles over the glacier facies, we carefully selected only images that are little affected by weather conditions, to ensure that the glacier surface was under dry and cold conditions. Then, the time series of backscatter profiles were analyzed for the three different glacier facies to understand the influence of climate conditions on radar backscattering. This was conducted independently for co- and cross-polarizations. We compared images coinciding with the onset of rain in the meteorological records; an image taken after the onset of rain clearly showed significant change compared to the day before. Based on our time-series backscatter analysis and the weather records, we selected dry-cold conditions for temporal averaging of Sentinel-1 data over four different years. The temporally averaged backscatter images for the four years and different polarizations, together with the GPR profiles, were then fed into a random forest classifier for training. We obtained very promising results, with classification accuracies above 85% in most cases, although there were some errors due to terrain topography close to the edges of the glacier. Finally, post-classification change detection was applied to the classified results to present changes over different times and polarizations. Yearly changes were then identified as the detected differences in the locations of the boundaries between glacier facies.
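A hedged sketch of the classification step described above: temporally averaged backscatter values (one feature per year and polarization) are fed to a random forest with facies labels taken along the GPR profiles. All arrays below are synthetic stand-ins:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n_pixels, n_features = 600, 8            # e.g., 4 years x 2 polarizations
facies_means = np.array([-6.0, -12.0, -18.0])  # illustrative mean sigma0 (dB) per facies
y = rng.integers(0, 3, n_pixels)         # 0=firn, 1=superimposed ice, 2=glacier ice
X = facies_means[y][:, None] + rng.normal(0, 2.0, (n_pixels, n_features))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:500], y[:500])                # GPR-labelled training pixels
print("held-out accuracy:", clf.score(X[500:], y[500:]))  # high, as in the >85% reported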
References:
[1] Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller. 2007. Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom and New York, NY, USA.: Cambridge University Press.
[2] Akbari, V., A. Doulgeris, and T. Eltoft. 2010. “Non-gaussian clustering of SAR images for glacier change detection” Proceedings of the ESA Living Planet Symposium, Bergen, Norway 2010, SP-686.
[3] Engeset, R.V., J. Kohler, K. Melvold and B. Lunden. 2002. “Change detection and monitoring of glacier mass balance and facies using ERS SAR winter images over Svalbard.” Int. J. Remote Sensing 23 (10): 2023-2050. Doi: https://doi.org/10.1080/01431160110075550.
[4] König, M., J. Wadham, J.-G. Winther, J. Kohler and A.-M. Nuttall. 2002. “Detection of superimposed ice on the glaciers Kongsvegen and midre Lovenbreen, Svalbard, using SAR satellite imagery.” Ann. Glaciol. 34: 335-342. Doi: https://doi.org/10.3189/172756402781817617.
[5] König, M., J.-G. Winther, J. Kohler, and F. König. 2004. “Two methods for firn-area and mass-balance monitoring of Svalbard glaciers with SAR satellite images.” J. Glaciol. 50 (168): 116-128. Doi: https://doi.org/10.3189/172756504781830286.
Rock glaciers are the best visual expression of creeping mountain permafrost. Documenting and monitoring their rate of surficial deformation in time and space is critical for improving the reliability of permafrost mapping, as well as for evaluating the ongoing effects of climate change on degradation-related geohazard potential.
Traditional, optically-based rock glacier inventories have been compiled without a quantitative assessment of rock glacier kinematics. The growing availability of remotely sensed data (e.g., Sentinel-1 images) makes the detection of their surface deformation feasible over large spatial scales, thus offering the opportunity to incorporate kinematic information in rock glacier inventories. However, the absence of a standardized methodological framework still prevents us from generating homogeneous and comparable inventories worldwide. The International Permafrost Association (IPA) Action Group on Rock glacier inventories and kinematics, launched in 2018, has fostered activities of a research network to bridge this gap. The ESA Permafrost_CCI project has further sustained this initiative, developing a standardized methodological scheme to systematically incorporate kinematic information in rock glacier inventories, derived from spaceborne InSAR data. Accordingly, spaceborne InSAR information is used to map identifiable areas of ground deformation (here defined as “moving areas”) within rock glacier boundaries. Moving areas are delineated on interferograms and assigned to velocity classes. Subsequently, a specific kinematic class is assigned to each rock glacier according to the velocity classes and extension of the relevant moving areas.
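As a purely hypothetical illustration of the final step, a rule of the following shape could assign a kinematic class from the velocity classes and relative extent of the moving areas; the thresholds and class names below are invented for the example and are not the Action Group's scheme:

def kinematic_class(moving_areas):
    """moving_areas: list of (velocity_cm_per_yr, fraction_of_landform).
    Illustrative rule: weight each moving area's velocity by the fraction
    of the rock glacier it covers, then bin the result."""
    if not moving_areas:
        return "undefined"
    v = sum(vel * frac for vel, frac in moving_areas) \
        / sum(frac for _, frac in moving_areas)
    if v < 10:
        return "< cm/yr"        # hypothetical class label
    if v < 100:
        return "cm/yr to dm/yr" # hypothetical class label
    return "> dm/yr"            # hypothetical class label

print(kinematic_class([(35, 0.4), (80, 0.2)]))  # -> "cm/yr to dm/yr"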
Nine operators of the ESA Permafrost_CCI network applied the proposed methodology to eleven inventories from selected regions of the European Alps, the Southern Alps, Greenland, Alaska, Norway, Svalbard, the Himalayas and the Central Andes. Collectively, these include more than 5,000 moving areas within more than 3,600 rock glaciers to which kinematic classes have been successfully assigned, despite the large number of regions and the intensive manual effort involved. Preliminary results show wide variability across regions, related both to the variety of physiographic settings examined and to InSAR detection issues.
Repeating this approach over long periods may allow assessing the response of a wide selection of landforms to climatic forcing. Furthermore, comprehensive rock glacier inventories will help to select representative rock glaciers of a region, on which to apply more accurate monitoring approaches.
More advanced techniques are required to reduce the limitations associated with interferometry, such as refined InSAR processing strategies or different remote sensing technologies, for example feature tracking on optical airborne images. Furthermore, the lessons learned from the current study are critical for refining the proposed method and applying it widely to more regions.
Changing climate conditions significantly influence the state and dynamics of glaciers worldwide, with implications for global sea-level rise, freshwater availability and geomorphological hazards. Ice dynamics and mass flow variations can be monitored globally through long- and short-term changes in glacier surface velocity. Consistent and continuous information on glacier surface velocities is an important parameter for time series analyses, numerical ice dynamic modeling and glacier mass balance estimations. Accordingly, glacier surface velocity is defined as an Essential Climate Variable (ECV) by the WMO for the polar ice sheets, but it should also be monitored regularly and globally for other glacier systems. The Sentinel-1 constellation, part of the EU/ESA Copernicus program, has been acquiring repeat-pass Synthetic Aperture Radar (SAR) data since 2014. It enables global, near real-time and fully automatic processing of glacier velocity fields at up to a 6-day repeat cycle, independent of weather and solar illumination conditions.
We present a new near-global database of glacier surface velocities derived from Sentinel-1 imagery. It comprises continuously updated image pair velocity fields, as well as monthly and annually averaged velocity mosaics at 200 m spatial resolution. We apply intensity feature tracking to archived, new and upcoming Sentinel-1 acquisitions available from the ASF archive. The products cover all major glacierized regions outside the polar ice sheets and are generated in a High-Performance Computing (HPC) environment at the University of Erlangen-Nuremberg. The velocity products and metadata are freely accessible via an interactive web portal (http://retreat.geographie.uni-erlangen.de) after registration, where they can be downloaded and simple online analyses can be performed. More information on the database can be found in Friedl et al. 2021 (doi: 10.5194/essd-13-4653-2021).
The database provides uniquely detailed temporal information on glacier surface velocities, allowing the analysis and identification of change patterns. As a case study, we implemented an anomaly detection algorithm to identify glacier surges. The algorithm was tested and evaluated in Svalbard, the Karakoram and the Pamir.
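The abstract does not detail the anomaly detection algorithm; one plausible, robust formulation flags months whose velocity departs strongly from the climatological monthly median, as in this illustrative sketch:

import numpy as np

def surge_anomalies(v_monthly: np.ndarray, k: float = 3.0) -> np.ndarray:
    """v_monthly: velocities shaped (n_years, 12). Flags entries whose robust
    z-score (median/MAD) against the monthly climatology exceeds k."""
    med = np.median(v_monthly, axis=0)
    mad = np.median(np.abs(v_monthly - med), axis=0) + 1e-9
    return (v_monthly - med) / (1.4826 * mad) > k

rng = np.random.default_rng(5)
v = rng.normal(100, 5, (6, 12))  # 6 years x 12 months, m/yr (synthetic)
v[4, 6:] += 80                   # simulated surge onset in year 5
print(np.argwhere(surge_anomalies(v)))  # flags the surge months only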
Meltwater from the cryosphere is vital for water supply and livelihood security of the local population in the Trans-Himalaya. Due to shrinking glaciers and the increasing variability of seasonal snow cover, periods of water scarcity regularly occur in spring and summer. The widely neglected cryosphere component of aufeis, a seasonal ice body created by successive freezing of flowing water onto an already frozen surface, is mainly located along rivers and streams. It stores base flow in winter and supplements river discharge during spring and early summer. Although this particular cryosphere component has been described for sub-polar permafrost regions across the northern hemisphere, only few studies have investigated Trans-Himalayan aufeis formation (Brombierstäudl et al., 2021). Despite its possible importance for local hydrological systems, a better understanding of the specific spatio-temporal freezing and melting patterns is lacking.
We mapped and analysed aufeis fields in the Tso Moriri basin from 2008 to 2021 using remote sensing data, including 151 cloud-free Landsat and 60 Sentinel-2 images from November to July. The combination of both datasets provides the advantage of a higher temporal resolution, which allows more detailed analysis of these dynamic ice features. Daily temperatures were derived from the MODIS 8-day Land Surface Temperature dataset from both Terra and Aqua in order to calculate monthly averages over the whole time series as well as a 13-year average. The differentiation of aufeis from seasonal snow cover during the accumulation period in winter is possible due to its reduced albedo in the visible and near-infrared spectrum, which supported the identification of areas with a high degree of overflow during accumulation. The ice on each scene was classified using a Random Forest (RF) classifier, achieving an average overall Kappa coefficient of 0.82 after validation. We trained two separate RFs, one for Landsat and one for Sentinel-2. Training data were selected from 15 randomly selected scenes of each sensor to cover a wide range of spectral properties. In total, 665 (Landsat) and 438 (Sentinel-2) sample points were available for training. Validation was carried out on separate datasets for each scene. To account for different numbers of observations, monthly averages were used for the area calculations per year.
In the study area, 27 aufeis fields were mapped, which frequently reappear each year in the same places; they have an average maximum extent of 9.2 km² in May and are located at a mean elevation of 4700 m a.s.l. The size of individual aufeis fields ranges from 0.007 km² to 1.7 km². Based on the 13-year monthly average, an accumulation and a depletion phase can be differentiated, both negatively correlated with surface temperature derived from MODIS data. The accumulation period lasts from November until April, with a peak in monthly average area in January and February. Melting starts in May, and the aufeis fields disappear by the end of July. A slightly increasing trend in the yearly average ice-covered area during the freezing period was found, whereas the maximum extent in May is consistent throughout the time series, with only a minor, non-significant downward trend. In addition, correlation analysis between monthly average overflow area and temperature suggests that temperature is an important variable for overflow activity. Temperatures above the 13-year average result in larger overflow areas compared to years with lower temperatures, especially during January, February and March, whereas lower temperatures are more favourable for ice formation in November.
This study shows the potential of a combined Landsat and Sentinel-2 remote sensing approach to map highly dynamic aufeis, define its spatio-temporal patterns and monitor its changes. Furthermore, it provides a reference frame for further studies in other parts of the Himalaya and beyond.
Mountain regions are well known for their sensitivity and dynamic response to ongoing climate change. Currently, many scientists are studying the impact of these changes on glacierized high mountain areas around the world. In the Stubai Alps (Tyrol, Austria), most of the glaciers are retreating, resulting in a reduction of the total glacier area and in the formation of glacial lakes in suitable topographical positions. The Sulzenau Valley is facing a rapid loss of glacier ice. Since the early 2000s, the shrinkage of one of its largest glaciers, the Sulzenauferner, has led to the formation and evolution of the proglacial Sulzenau Lake. In August 2017, the moraine part of the dam was breached and the lake suddenly released part of the retained water. The resulting glacial lake outburst flood (GLOF) damaged the power plant and water pipes of the downstream hut. Although the Stubai region is already well investigated, the evolution of the very active Sulzenauferner Glacier and its proglacial lake is poorly documented. To fill this gap, the objective of this research is to collect multi-source data to reconstruct the evolution of the Sulzenauferner Glacier since 2000 and to identify the preconditions, drivers and triggers that led to the 2017 GLOF. To this end, we combined multiple satellite and close-range remote sensing data to quantify changes of the glacier in space and time and the related growth of the lake. Firstly, based on optical images (e.g., Sentinel-2, Google Earth images, in-situ pictures), we produced two detailed geomorphological maps of the Sulzenau valley, one before and one after the flood. The geomorphological maps illustrate the effects of the flood on the morphology of the lake and the channel. Secondly, we prepared a mosaic of very high-resolution images acquired with an Unmanned Aerial Vehicle (UAV) during a field campaign in August 2021 and created a high-resolution Digital Elevation Model (DEM). The comparison of multi-temporal DEMs (i.e., satellite- and UAV-based) allowed us to map erosion and deposition areas and to estimate the volumes of displaced material. Further, Synthetic Aperture Radar Interferometry (InSAR) analysis and the computation of deformation time series are used to investigate the stability of the moraine dam and the moraine slopes surrounding the lake. Although the 2017 Sulzenau GLOF was a rather small-magnitude event, it is a representative case study of a dynamic high mountain environment affected by rapid glacier retreat and the associated processes and geomorphological responses.
The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) was a limb-viewing infrared Fourier transform spectrometer that operated from 2002 to 2012 aboard the ENVISAT satellite. The final re-processing of the full MIPAS mission Level 2 data was performed with the ESA operational v8 processor. This MIPAS data set not only includes retrieval results of pressure-temperature and the standard species H2O, O3, HNO3, CH4, N2O, and NO2, but also vertical profiles of volume mixing ratios of the more difficult to retrieve molecules N2O5, ClONO2, CFC-11, CFC-12 (included since v6 processing), HCFC-22, CCl4, CF4, COF2, and HCN (included since v7 processing). Finally, vertical profiles of the species C2H2, C2H6, COCl2, OCS, CH3Cl, and HDO were additionally retrieved by the v8 processor.
The balloon-borne limb-emission sounder MIPAS-B was a precursor of the MIPAS satellite instrument. Hence, a number of specifications, such as spectral resolution and spectral coverage, are similar. However, for essential parameters the MIPAS-B performance is superior, mainly the NESR (Noise Equivalent Spectral Radiance) and the line-of-sight stabilization, which is based on an inertial navigation system supplemented with an additional star reference system.
Several flights with MIPAS-B were carried out during the 10-year operational phase of ENVISAT, at different latitudes and seasons, covering both operational periods in which MIPAS measured with full spectral resolution (FR mode) and with optimized spectral resolution (OR mode). All MIPAS operational products (except HDO) are compared to results inferred from dedicated validation limb sequences of MIPAS-B. To enhance the statistics of the vertical profile comparisons, a trajectory match method has been applied to search for MIPAS coincidences along 2-day forward/backward trajectories running from the MIPAS-B measurement geolocations.
This study gives an overview of the comprehensive validation activities and results based on the ESA operational v8 data comprising the MIPAS FR and OR observation periods. This includes an assessment of the data agreement of both sensors taking into account combined errors of the instruments.
The Karakoram in High Mountain Asia (HMA) is well known for its clustering of surge-type glaciers, which are distinguished by quasiperiodic flow patterns comprising a long-lasting quiescent phase and a short-lived active phase [1]. Investigating glacier surges helps to better understand local glacier evolution and to reduce glacier-related risks such as glacier lake outburst floods (GLOF) [2]. However, the mechanisms of surging in the Karakoram are not yet fully understood due to the complexity of the dynamics involved.
In this work, we comprehensively characterize the recent surge of the South Rimo Glacier in the Karakoram, which was observed between 2018 and 2020 [3]. The South Rimo Glacier is one of the largest glaciers in the eastern range of the Karakoram. The surge showed combined features of both thermally and hydrologically regulated processes and is thus a very interesting example for studying surge mechanisms.
To depict the dynamics of the surge, we employed multi-source remote sensing data, including SAR data from TanDEM-X and Sentinel-1 as well as multispectral imagery from Landsat-8 and Sentinel-2. The main objective is to characterize the changes in surface elevation, flow velocity and the surface thermal regime. These surface parameters were then used in numerical simulations to constrain the basal sliding conditions, so that the hydrological and thermal controls on the surge event can be quantified.
The TanDEM-X CoSSC data were acquired between 2011 and 2019 and used to derive Digital Elevation Models (DEMs). The obtained DEMs were differenced to calculate the glacier surface elevation changes before and after the surge. The Sentinel-1 images were acquired between 2017 and 2020 in both ascending and descending orbits and were used to map glacier flow velocities with the offset tracking method. To improve the robustness of offset tracking, we employed stacked cross-correlation instead of the traditional pair-wise cross-correlation when estimating offsets [4]. The multispectral imagery was used to estimate glacier surface temperature, which served as an auxiliary variable describing the glacier surface thermal regime. The surge mechanism was further quantified using numerical modelling, in which the observed surface elevation data and velocity maps were employed to constrain the basal sliding parameters of the glacier through an inverse modelling approach.
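The idea of cross-correlation stacking [4] can be conveyed with a toy example: correlation surfaces from several image pairs covering the same ground patch are averaged before the peak is picked, so that a coherent displacement signal survives while pair-wise noise is suppressed. A conceptual sketch, not the published implementation:

import numpy as np

rng = np.random.default_rng(6)
true_offset = (2, -1)                    # pixels (rows, cols)
surfaces = []
for _ in range(5):                       # five image pairs of the same patch
    surf = rng.normal(0, 0.3, (9, 9))    # noisy correlation surface
    surf[4 + true_offset[0], 4 + true_offset[1]] += 1.0  # common coherent peak
    surfaces.append(surf)

stacked = np.mean(surfaces, axis=0)      # stacking suppresses incoherent noise
peak = np.unravel_index(np.argmax(stacked), stacked.shape)
print("offset:", peak[0] - 4, peak[1] - 4)  # -> (2, -1)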
The DEM differencing results showed that the surge front started building up in 2013. The flow velocity was found to increase gradually from 2017, initiating the surge in summer 2018 and reaching its maximum in mid-2019. From the Sentinel-2 images, we identified a supraglacial lake that formed in July 2019, when the surge velocity peaked. The lake drained in September 2020, at which point the velocity dropped back to the pre-surge level. The surface temperature profile derived from the multispectral images showed a heat anomaly before the surge, which is likely to have changed the thermal and hydrological state of the glacier. The numerical simulations revealed possible basal conditions that could have caused the surge and suggested that the controlling factors may involve changes in both the thermal regime and the subglacial hydrological conditions.
In our work, we present comprehensive datasets depicting the surge dynamics of the South Rimo Glacier, together with a quantitative investigation of the controlling mechanisms of the surge through simulations. We also highlight that multi-source Earth observation data provide valuable inputs for numerical simulations, allowing a better understanding of complex glacier systems through glacier flow modelling.
[1] D. J. Quincey, M. Braun, N. F. Glasser, M. P. Bishop, K. Hewitt, and A. Luckman, ‘Karakoram glacier surge dynamics’, Geophys. Res. Lett., vol. 38, no. 18, 2011, doi: 10.1029/2011GL049004.
[2] V. Round, S. Leinss, M. Huss, C. Haemmig, and I. Hajnsek, ‘Surge dynamics and lake outbursts of Kyagar Glacier, Karakoram’, The Cryosphere, vol. 11, no. 2, pp. 723–739, Mar. 2017, doi: 10.5194/tc-11-723-2017.
[3] S. Li, S. Leinss, P. Bernhard, and I. Hajnsek, ‘Recent Surge of the South Rimo Glacier, Karakoram: Dynamics Characterization Using SAR Data’, in 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Jul. 2021, pp. 5520–5523. doi: 10.1109/IGARSS47720.2021.9553193.
[4] S. Li, S. Leinss, and I. Hajnsek, ‘Cross-Correlation Stacking for Robust Offset Tracking Using SAR Image Time-Series’, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 14, pp. 4765–4778, 2021, doi: 10.1109/JSTARS.2021.3072240.
Rising temperatures lead to an imbalance in the mass exchange of mountain glaciers. This results in the rapid thinning and retreat observed in many mountainous regions of the world. One of the obvious consequences of the deglaciation of high mountains is the appearance of glacial lakes. These lakes represent a serious hazard for the downstream population: lake outbursts can destroy settlements and infrastructure even tens of kilometres away from the source lake. So far, much attention has been paid to the outburst potential of lakes dammed by glacial moraines, whereas supraglacial and glacier-marginal lakes have been largely neglected. It has been shown, however, that even relatively small lakes can represent a significant threat. In this study, we focus on the mapping of glacial lakes from optical and microwave satellite data.
Two study sites in the Himalayas were selected to test the approach: Humla and Langtang Himal in Nepal. Both areas are influenced by the Indian Summer Monsoon. Glaciers draining to the south are largely covered by debris and feature many supraglacial lakes, while glaciers draining towards the Tibetan Plateau often terminate in large proglacial lakes and have less debris cover.
The constellations of the European satellites Sentinel-1 and Sentinel-2 provide a great opportunity to study hydrological features in high spatial and temporal detail, even under cloudy conditions. Standard methods such as water indices applied to the optical data from Sentinel-2 result in satisfactory lake mapping; their use is, however, limited to the end of the melting season, owing to the large cloud cover during the monsoon.
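A standard water-index formulation of the kind referred to above is the Normalized Difference Water Index, NDWI = (green − NIR) / (green + NIR), thresholded to obtain a lake mask; the threshold is scene-dependent and the values below are illustrative:

import numpy as np

def ndwi_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.2):
    """Boolean water mask from green and near-infrared reflectance."""
    ndwi = (green - nir) / (green + nir + 1e-9)
    return ndwi > threshold

green = np.array([[0.08, 0.30], [0.07, 0.28]])  # Sentinel-2 B3 (illustrative)
nir = np.array([[0.25, 0.05], [0.30, 0.04]])    # Sentinel-2 B8 (illustrative)
print(ndwi_mask(green, nir))  # water pixels -> True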
The study areas are covered by Sentinel-1 acquisitions in Interferometric Wide (IW) swath mode with dual polarization (VH and VV). The revisit frequency of Sentinel-1 for the study region, given by the Constellation Observation Scenario, is 12 days, whereas the maximum possible is 6 days. We test the potential of various SAR products, such as the polarization difference, together with various DEM derivatives for routine mapping of glacial lakes. Preliminary results for the study sites will be presented.
Deception Island is located in the South Shetland archipelago, between longitudes 60°29’20”W and 60°45’10”W and latitudes 62°53’30”S and 63°01’20”S, in a region especially sensitive to the effects of climate change over the last decades. This change has caused an increase in the surface of ice-free areas and entails the development of paraglacial, geomorphological, hydrogeological and edaphic processes. Furthermore, the ice-free areas of this region constitute important biodiversity hotspots. On average, the proportion of the archipelago’s ice-free area is less than 10% of the total surface. The processes occurring in Deception Island’s ice-free areas also include volcanic activity, with the most recent eruptions in 1967, 1969 and 1970. Remote sensing data have great potential for characterizing and monitoring the increasing effects of climate change within ice-free areas.
The objective of this work was to quantify the glacial and paraglacial changes that have occurred over the last six decades in the southern sector of Deception Island. To quantify the changes in the selected ice-free area, aerial photography taken by the Falkland Islands Dependencies Aerial Survey Expedition (FIDASE) in 1956–1957 and a Sentinel-2 satellite image acquired on 30 March 2017 (processing level 1C) were used. The latter image has a spatial resolution of 10 m for the visible and near-infrared (NIR) bands and 20 m for the additional NIR and short-wave infrared (SWIR) bands. Further material included a DEM derived from the digital coverage of the 1:25,000 topographic map produced by the Spanish Army Geographical Service and the Autonomous University of Madrid in 1994.
Three aerial images were georeferenced using Sentinel-2 and Landsat-8 (23 March 2018) images as reference, and the glacier limits were carefully digitized. As the aerial images were taken in December, at the start of the austral summer season, the estimated glacial area may be overestimated due to snow cover. Image enhancement and filters were therefore used to differentiate ice from snow.
As surface ice and snow have a high reflectance compared to the surrounding bare soil and rock surfaces, we calculated the Normalized Difference Snow Index (NDSI) from the Sentinel-2 green (560 nm) and SWIR (1610 nm) bands, applying a threshold to extract ice- and snow-covered areas: NDSI values greater than 0.4 were treated as snow cover. A supervised classification using the Random Forest algorithm was then applied to the different sensor and cartographic data to further differentiate the surface covers within the ice-free areas around the glacier front.
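For reference, the NDSI computation described above reduces to a few lines; the band choice and the 0.4 threshold follow the text, while the sample reflectances are illustrative:

import numpy as np

def ndsi_snow_mask(green: np.ndarray, swir: np.ndarray, threshold: float = 0.4):
    """NDSI = (green - SWIR) / (green + SWIR); values above the threshold
    are treated as snow/ice cover (Sentinel-2 bands B3 and B11)."""
    ndsi = (green - swir) / (green + swir + 1e-9)
    return ndsi > threshold

green = np.array([0.65, 0.20])  # snow pixel, bare-rock pixel (illustrative)
swir = np.array([0.08, 0.18])
print(ndsi_snow_mask(green, swir))  # -> [ True False]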
Initial results showed that the spectral reflectance of snow and ice is influenced by scoria, ash and rock covering, or embedded within, the glacier front, by meltwater on the glacier surface, and by changes in ice composition over time. It was therefore necessary to confirm and refine the results obtained with the index. One solution was to use different band combinations (0.842, 0.665 and 0.560 µm) to determine the final extent of the glacier.
The retreat has led to a 10% increase in the extent of the ice-free area in the studied sector over the 61 years between 1956 and 2017. An initial validation has been carried out using field data, but more extensive validation with field determinations is needed, especially in areas where glacier fronts are covered by debris and pyroclasts. Furthermore, the supervised classification was useful for characterizing the newly formed ice-free area under the influence of paraglacial processes. Areas affected by fluvial erosion, accumulation of deposited material and an overall lack of vegetation are well differentiated from the more stable neighbouring areas that have been ice-free for a longer time period.
Gascoin et al. (2019, 2020) have used MSI/S-2 data for the estimation of the fractional snow cover (FSC) at 20 m resolution in open terrain. However, climatic effects of snow cover depend not only on its extent but also on its spectral albedo, which plays an important role in the modification of the backscattered solar energy on local and global scales. Snow albedo products are currently available at moderate resolution (i.e., 300 m from the Ocean and Land Colour Imager (OLCI) (Kokhanovsky et al., 2019), 500 m from the Moderate Resolution Imaging Spectroradiometer (MODIS) (Schaaf et al., 2002) and the Sea and Land Surface Temperature Radiometer (SLSTR) (Mei et al., 2021), and 1 km from the Second Generation Global Imager (SGLI) (Chen et al., 2021)). However, they do not allow capturing the fine details of the spatial variability of snow surface properties. The technique presented here provides snow albedo products on the scale of 10–20 m and makes it possible to derive and validate subpixel snow cover products as obtained, e.g., from MODIS measurements. The retrieval approach is based on the asymptotic radiative transfer theory valid for weakly absorbing snow layers, combined with the geometrical optics approximation for the local properties of snow layers composed of irregularly shaped ice crystals. An external mixture of ice crystals and various pollutants such as algae, dust and soot is considered. It has been found that the bottom-of-atmosphere snow reflectance can be modelled using just two parameters: the reflectance of the snow layer under the assumption that there are no absorption processes in the snow, and the effective absorption length, which determines the clean snow albedo. For polluted snow, two additional parameters are needed to describe the spectral behaviour of the snow albedo: the absorption Ångström coefficient and the concentration of pollutants. We also propose techniques for the determination of the snow specific surface area, ice grain size and snow broadband albedo using single-view spectral MSI/S-2 measurements.
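The model structure described above can be sketched numerically as follows. Clean-snow spectral albedo is taken in the asymptotic form r(λ) = exp(−√(α(λ)L)), with α = 4πχ(λ)/λ the ice absorption coefficient and L the effective absorption length; pollutant absorption is added through an Ångström power law with concentration c and exponent m. The coefficient values below are rough, order-of-magnitude approximations chosen for illustration only, not retrieval results:

import numpy as np

wavelength = np.array([0.4e-6, 0.6e-6, 0.8e-6, 1.0e-6])  # m
chi_ice = np.array([2.5e-11, 5.7e-9, 1.3e-7, 2.0e-6])    # imaginary index of ice (approx.)
L = 5.0e-3                                               # effective absorption length, m (illustrative)

alpha_ice = 4 * np.pi * chi_ice / wavelength             # absorption coefficient, 1/m
c, m = 1.0, 4.0                                          # pollutant load (1/m at 550 nm), Angstrom exponent
alpha_pollutant = c * (wavelength / 0.55e-6) ** (-m)

albedo_clean = np.exp(-np.sqrt(alpha_ice * L))
albedo_polluted = np.exp(-np.sqrt((alpha_ice + alpha_pollutant) * L))
print(np.round(albedo_clean, 3))     # high in the visible, lower in the near infrared
print(np.round(albedo_polluted, 3))  # pollutants darken mainly the visible wavelengths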
References
Gascoin, S., Grizonnet, M., Bouchet, M., Salgues, G., and Hagolle, O. (2019). Theia snow collection: high-resolution operational snow cover maps from Sentinel-2 and Landsat-8 data. Earth Syst. Sci. Data 11, 493–514, https://doi.org/10.5194/essd-11-493-2019.
Gascoin, S., Barrou Dumont, Z., Deschamps-Berger, C., Marti, F., Salgues, G., López-Moreno, J.I., Revuelto, J., Michon, T., Schattan, P., Hagolle, O. (2020). Estimating fractional snow cover in open terrain from Sentinel-2 using the Normalized Difference Snow Index. Remote Sens. 12, 2904, https://doi.org/10.3390/rs12182904.
Kokhanovsky, A., Lamare, M., Danne, O., et al. (2019). Retrieval of snow properties from the Sentinel-3 Ocean and Land Colour Instrument. Remote Sens. 11, 2280. https://doi.org/10.3390/rs11192280.
Schaaf, C. B., Gao, F., Strahler, A. H., et al. (2002). First operational BRDF, albedo nadir reflectance products from MODIS, Remote Sensing of Environment, 83, 1–2, 135-148.
Mei, L., Rozanov, V., Jäkel, E., Cheng, X., Vountas, M., and Burrows, J. P. (2021). The retrieval of snow properties from SLSTR Sentinel-3 – Part 2: Results and validation, The Cryosphere, 15, 2781–2802, https://doi.org/10.5194/tc-15-2781-2021.
Chen, N., Li, W., Fan, Y., et al. (2021). Snow parameter retrieval (SPR) algorithm for GCOM-C/SGLI, Rem. Sens. Env., in press.
Satellite observations are the only means for timely and complete observations of the global snow cover. A range of different satellite snow products is available, and their performance is of vital interest to the global user community. We provide an overview of the goals and activities of the SnowPEx+ initiative, dedicated to the intercomparison of northern hemispheric and global satellite snow products derived from long-term operational as well as recently launched satellites. SnowPEx+ is the continuation of SnowPEx (2014-2017), carried out as an international collaborative effort under the umbrella of the Global Cryosphere Watch / WMO and funded by ESA.
SnowPEx+ focuses on two parameters of the seasonal snowpack: the snow extent (SE) from medium-resolution optical satellite data (Sentinel-3, VIIRS, MODIS, AVHRR, etc.) and the snow water equivalent (SWE) from passive microwave satellite data. Overall, 15 hemispheric and global SE products (binary and fractional SE) and two SWE products are participating in the experiment. For intercomparison, the daily SE products are transformed to a common map projection, and standardized SnowPEx protocols, elaborated by the international snow product community, are applied. The SE product evaluation applies statistical measures for quantifying the agreement between the various products, including the analysis of spatial patterns. Validation of the SE products uses as a benchmark high-resolution snow maps from about 150 globally distributed Landsat scenes acquired in different climate zones, under different solar illumination conditions and over various land cover types. This snow reference dataset, based on various retrieval algorithms, is generated and evaluated by the SnowPEx+ High Resolution Snow Products Focus Group. In-situ snow data from several organisations in Europe, North America and Asia are also used to validate the satellite SE and SWE products. The SWE products are additionally intercompared with gridded snow products from land surface models driven by atmospheric reanalysis data. Furthermore, the multi-year trends of the various SE and SWE products are evaluated. We provide an overview of the snow products, discuss the validation and intercomparison protocols, and report preliminary results from the intercomparison and validation of the various snow products.
In terms of area, snow makes up the largest proportion of the cryosphere, but it is also the most short-lived component, with the greatest seasonality and variability. Remote sensing of snow has long depended on either passive microwave sensors or multispectral systems such as AVHRR, MODIS or Landsat. While the former provide data on a daily basis and also allow insights into the snowpack (e.g., snow water equivalent), their geometric resolution is insufficient for a closer look at snowpack dynamics. Sensors such as Landsat offer a good geometric resolution, but the repetition rate is inadequate. The MODIS (Moderate-Resolution Imaging Spectroradiometer) sensor fills exactly this gap and has been providing data since 2000 on board the Terra satellite and since 2002 on board Aqua. For this period, the National Snow & Ice Data Center (NSIDC) offers daily snow cover as a level 3 product. The daily snow products MOD10A1 (Terra) and MYD10A1 (Aqua) have a nominal resolution of 500 m and are provided in sinusoidal projection. The detection of snow is based on the Normalized Difference Snow Index (NDSI), which makes use of the different reflectance of snow in the visible spectral range (VIS) and the short-wave infrared (SWIR): since snow reflects almost completely in the VIS but hardly at all in the SWIR, the NDSI takes on high values over snow cover. In addition, the Normalized Difference Vegetation Index (NDVI) is used for snow under dense vegetation cover. The MODIS product contains NDSI values between 0 and 100 for land pixels (only positive values, multiplied by 100) and other values for different classes.
The daily MODIS snow information forms the data basis for the Global SnowPack (GSP) processor, in which data gaps (e.g., due to clouds or polar night) are filled in four steps. First, the Terra and Aqua data are combined; remaining gaps are then filled using the day before and the day after. In the next step, a digital elevation model is used to determine the elevation above which there are only snow pixels and the elevation below which there are only snow-free pixels; all pixels above or below these elevations are filled accordingly. The last step is a seasonal filter, in which all remaining data gaps are filled by gradually going backwards in the time series. Based on these "days until cloud-free", the elevation of the pixel and the day of the year, individual accuracy estimates are made and supplied as an accuracy layer. Since the MODIS snow data are available with a delay of two days and an additional day is necessary for the 3-day interpolation, the near real-time product of the GSP is available after three days and can be downloaded from the GeoService of the Earth Observation Center.
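As an illustration of the temporal part of this gap filling, the sketch below fills a cloud-obscured pixel (NaN) from the same pixel on the day before or after, if either is cloud-free; the real processor combines this with the elevation and seasonal steps described above:

import numpy as np

def fill_temporal(stack: np.ndarray) -> np.ndarray:
    """stack: (days, rows, cols) snow maps with NaN marking data gaps."""
    filled = stack.copy()
    for d in range(stack.shape[0]):
        for neighbour in (d - 1, d + 1):
            if 0 <= neighbour < stack.shape[0]:
                gaps = np.isnan(filled[d]) & ~np.isnan(stack[neighbour])
                filled[d][gaps] = stack[neighbour][gaps]
    return filled

# 3 days, 1x2 pixels: 1.0 = snow, 0.0 = snow-free, NaN = cloud
stack = np.array([[[1.0, np.nan]], [[np.nan, 0.0]], [[0.0, np.nan]]])
print(fill_temporal(stack))  # gaps filled from adjacent days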
The reference period for the analysis of the snow cover is the hydrological year, which runs from the beginning of the meteorological autumn to the end of the meteorological summer. This period is further subdivided into an early and a late snow season, separated at mid-winter. For these seasons, the cumulative snow cover of each pixel is calculated and stored as early and late snow cover duration. The analysis of the variability of these snow cover durations for specific spatial units (e.g., river catchments) enables trends and significant developments to be identified. Recently, these snow cover data have been coupled with hydrological models in order to better understand and predict extreme hydrological events. The Global SnowPack time series now comprises 21 hydrological years, and the developments identified so far will be presented.
Seasonal snow cover (SSC) is the largest component of the cryosphere in extent and a crucial variable in the hydrological cycle. Studying its spatio-temporal variability at climate-relevant timescales is thus an eminent task. Our results are based on Advanced Very High Resolution Radiometer (AVHRR) data, providing daily, global imagery at a spatial resolution of 5 km from 1982 to 2020. This unique dataset, developed and processed in the frame of the ESA CCI+ Snow project, is exceptionally valuable for deriving pixel-based SSC information at great spatial and long temporal scale.
The Hindu Kush Himalaya (HKH), the world's 'water tower', is the headwater area of Asia's largest rivers. Due to its complex topography and great spatial extent, the HKH is characterised by variable temperature and precipitation sources and patterns and thus exhibits large heterogeneity in the presence of SSC. The presence of SSC and possible changes due to global atmospheric warming are of high importance for this region, as more than two billion people within the mountain regions and downstream depend on freshwater streamflow partly to largely fed by SSC.
Here, we present the SSC phenology for the HKH region over the past four decades. We obtained various snow cover metrics (snow cover area percentage, snow cover duration, etc.) and their trends, which are directly linked to climate change and thus highly relevant for seasonal water storage and mountain streamflow. Our assessment reveals strong SSC dynamics at all time scales and across the HKH region. We find a significant decline in snow cover area percentage during the summer months (Theil-Sen slope: July = −0.028, August = −0.018) and a decreasing tendency from mid-spring to mid-fall, indicating a shift in seasonality (Theil-Sen slope ≈ −0.013). Moreover, we shed light on the complex interplay between air temperature, precipitation and SSC occurrence, which particularly influences the presence or absence of significant long-term trends.
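The Theil-Sen estimator quoted above is the median of the slopes over all pairs of points, which makes it robust to outliers; SciPy provides it directly. A sketch with synthetic July snow-cover percentages standing in for the AVHRR-derived series:

import numpy as np
from scipy.stats import theilslopes

years = np.arange(1982, 2021)
rng = np.random.default_rng(7)
scp = 30 - 0.028 * (years - 1982) + rng.normal(0, 0.5, years.size)  # % area, synthetic

slope, intercept, lo, hi = theilslopes(scp, years)
print(f"Theil-Sen slope: {slope:.3f} % per year (95% CI {lo:.3f} to {hi:.3f})")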
Thanks to this uniquely spatio-temporally resolved dataset, we can highlight the complex behaviour of SSC in the HKH region and emphasise that recently observed extreme events can be expected to recur. We stress the need to improve the representation of SSC in climate models to enhance our understanding of future SSC behaviour and its importance in the global Earth surface energy balance.
Monitoring the state of seasonal snow and its evolution in time is of great importance for many applications including meteorology, risk management, water resources, biodiversity and ecosystems, and climate change studies. Snow-covered areas serve as frozen reservoirs that play an important role in the climate system by modifying energy and mass transfer between the atmosphere and the surface. The Sentinel-1/-2 satellites now allow the study of the snow cover at unprecedented spatial resolutions and revisit times (decametric resolution, 5-6 day revisit time). Sentinel-1 SAR (Synthetic Aperture Radar) images are acquired by active remote sensing in C-band and are used to derive information about wet snow, whereas Sentinel-2 data are used to monitor snow extent, whether it is wet or dry.
The main objective of this work is to assimilate snow and wet snow products from the Sentinel-1/Sentinel-2 satellites into the Crocus snow model in order to better constrain the model's snowpack simulations and to improve the representation of the spatial and temporal variability of the snowpack over the French mountains. We rely on the ensemble assimilation chain implemented at the Centre d'Etudes de la Neige, based on a particle filter applied to SURFEX/Crocus snowpack simulations (at a resolution of 250 meters).
This work involves the assimilation of snow products derived from satellite observations using an appropriate observation operator.
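As a schematic illustration of the ensemble approach (not the operational CEN chain), one bootstrap particle-filter analysis step could look as follows; here h is the observation operator mapping an ensemble member's simulated snowpack to the satellite observation space.

    import numpy as np

    rng = np.random.default_rng(42)

    def particle_filter_step(particles, obs, obs_error, h):
        """One analysis step of a bootstrap particle filter: weight each
        ensemble snowpack state by its fit to the observation, then
        resample proportionally to the weights."""
        innov = obs - np.array([h(p) for p in particles])
        weights = np.exp(-0.5 * (innov / obs_error) ** 2) + 1e-300
        weights /= weights.sum()
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return [particles[i] for i in idx]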
Firstly, we will introduce intercomparison results between Crocus snowpack simulations and satellite products over a large alpine area of steep relief. Comparisons will take into account terrain characteristics such as aspect and orientation. Snow lines and snowmelt lines will be computed from satellite observations and from simulations, and they will be inter-evaluated with additional in-situ observations. Then, some preliminary results of assimilation will be discussed with a focus on the complementarity of microwave snow products with snow products from optical satellites. In particular, we will study the situations for which an ambiguity can remain between a snow-free surface and dry snow. In clear skies, the Sentinel-2 snow product will be used to isolate, among the pixels not identified as wet snow, those associated with dry snow. An analysis of the performance of the assimilation will be carried out according to topography, time and date of observation, and also soil conditions (freeze/thaw).
Free and open access to multispectral satellite data from the Landsat and Sentinel series provides excellent opportunities for snow cover monitoring. Accurate snow cover estimates are essential for many research and application fields, such as ecology, hydrology, water management and climate research. Snow cover area has been identified as an essential climate variable (ECV) by the World Meteorological Organization (WMO) and therefore demands precise monitoring. Within the frame of the AlpSnow project, ENVEO developed a new algorithm for estimating snow cover fraction from multispectral satellite data which can be applied from regional to global scales.
Various single- and multiband techniques for snow extent mapping exist. To date, snow cover estimates in complex alpine terrain lack adequate accuracy due to the impact of topography on illumination, including cast shadows; snow cover in those regions is often underestimated. The improved algorithm proposed in this presentation is based on a linear spectral mixing model. We provide a basis for statistically optimal fractional snow cover estimation along with options for error estimation. This is also intended to encourage quantification of model errors in snow maps in future work, which has so far commonly been neglected. For retrieval initialisation we developed a robust pre-classification method for detecting fully snow-covered and bare pixels. The pre-classified pixels are used for an automatic, local selection of snow and snow-free endmembers. The main advantage of this procedure over endmember modelling and endmember selection from a spectral library is that differences in illumination and atmospheric conditions become insignificant and therefore need not be accounted for. Moreover, snow and snow-free endmembers in cast shadows help to provide improved fractional snow cover estimates for those areas. Overall, this leads to smaller misfits within the linear spectral unmixing solutions. The linear spectral unmixing problem is solved iteratively for the N nearest local snow and snow-free endmembers, choosing the outcome with the lowest misfit. We present the framework and potential of our statistically optimal approach and show demonstration products for different regions in the Alps. First results of the performance assessment against other publicly available snow reference datasets will be shown.
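For illustration, the core of a two-endmember linear unmixing step can be sketched as below; this is a simplified stand-in for the described algorithm, which additionally selects local endmembers and iterates over the N nearest candidates.

    import numpy as np

    def fsc_unmixing(pixel, snow_em, bare_em):
        """Fractional snow cover from a two-endmember linear mixture,
        solved by least squares with non-negativity enforced by clipping
        (illustrative sketch, not ENVEO's exact formulation).
        pixel, snow_em, bare_em: reflectance vectors over the used bands."""
        A = np.column_stack([snow_em, bare_em])
        f, *_ = np.linalg.lstsq(A, pixel, rcond=None)
        f = np.clip(f, 0.0, None)
        fsc = f[0] / f.sum() if f.sum() > 0 else 0.0
        misfit = np.linalg.norm(A @ f - pixel)   # used to pick endmembers
        return fsc, misfit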
A new method for Snow Cover identification from Webcam Images - Towards validating Sentinel-2 Snow Cover Products
J. Wachter1,2, P. Zellner1, T. Ullmann2, V. Premier1, C. Marin1, A. Jacob1, M. Rossi1
1Eurac Research, Institute for Earth Observation, 39100 Bolzano/Bozen, Italy
2University of Wuerzburg, Institute for Geography and Geology, 97070 Würzburg, Germany
Snow cover in mountainous areas is an important variable for a wide range of ecological and societal factors. Extensive monitoring of fractional snow cover (FSC) and snow-related processes has been performed with active and passive satellite earth observation data sources. However, there are limitations to using satellite data, such as their temporal and spatial resolution or cloud coverage. At the same time, reliable in-situ information is a key factor for validating snow cover products (Hu et al., 2017). To enhance the temporal and spatial information on the ground, openly accessible webcam data show great potential by continuously providing information in many locations. Additional challenges emerge from this data source, such as the different view angle compared to satellites, the location of the station or weather influences near the ground.
We present a novel approach to automatically and effectively derive snow cover from webcam images. We use RGB image time series from two different webcams in the Italian Autonomous Province of South Tyrol covering the years 2019 and 2020, respectively. First, the webcam images are filtered to exclude unfavorable weather conditions. On the filtered time series, three snow extraction models are compared: (1) the state-of-the-art simple thresholding method (Salvatori et al., 2011), (2) K-means unsupervised clustering into bright and shadow areas followed by thresholding for each cluster to account for brightness differences, and (3) a hybrid approach choosing the state-of-the-art method (1) in case of homogeneous illumination (little shadow) and the clustering method (2) in case of heterogeneous illumination (many shadows). This is implemented in order to deal with classification errors due to pronounced illumination differences between shadowed and non-shadowed areas within an image (Salvatori et al., 2011). The detected snow cover information can finally be compared to satellite-derived snow cover products. For this, ROIs have been delineated on the webcam image and manually matched to the corresponding Sentinel-2 pixels to gain a more detailed insight (Figure 1, 2). The workflow is validated through visual inspection and manual classification of 10-15 example images per webcam.
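A sketch in the spirit of model (2) is given below; the cluster-relative threshold is illustrative and does not reproduce the tuned values of the study.

    import numpy as np
    from sklearn.cluster import KMeans

    def snow_fraction(rgb):
        """Cluster pixels of an (h, w, 3) uint8 image into bright and
        shadowed areas on overall intensity, then threshold the blue band
        within each cluster to compensate for shadow darkening."""
        blue = rgb[:, :, 2].astype(float).ravel()
        intensity = rgb.mean(axis=2).ravel().reshape(-1, 1)
        clusters = KMeans(n_clusters=2, n_init=10,
                          random_state=0).fit_predict(intensity)
        snow = np.zeros_like(blue, dtype=bool)
        for c in np.unique(clusters):
            sel = clusters == c
            # illustrative cluster-relative threshold
            snow[sel] = blue[sel] > blue[sel].mean() + 0.5 * blue[sel].std()
        return snow.mean()   # fractional snow cover in [0, 1]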
FSC can be automatically detected over the course of one year for each image timestep (Figure 3). The accuracies from the confusion matrix of the model results compared to visual inspection range between 0.5 and 0.7 for model (1), between 0.74 and 0.86 for model (2) and between 0.8 and 0.98 for model (3). Sentinel-2 based snow cover products (Premier et al., 2021) and the webcam measurements show an RMSE between 0.29 and 0.43 for model (2) and between 0.34 and 0.41 for model (3). The novel approach allows a more in-depth analysis of relevant phases in mountain snow dynamics such as spring melt or new snow events. Approaches (2) and (3) outperform the current state-of-the-art method, with slight differences depending on the webcam, increasing the R² value by at least 0.15.
References
Hu, Z.; Kuenzer, C.; Dietz, A.J.; Dech, S. The Potential of Earth Observation for the Analysis of Cold Region Land Surface Dynamics in Europe—A Review. Remote Sens. 2017, 9, 1067. https://doi.org/10.3390/rs9101067
Premier, V.; Marin, C.; Steger, S.; Notarnicola, C.; Bruzzone, L. (2021). A Novel Approach Based on a Hierarchical Multiresolution Analysis of Optical Time Series to Reconstruct the Daily High-Resolution Snow Cover Area. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, pp. 9223-9240. https://doi.org/10.1109/JSTARS.2021.3103585
Salvatori, R.; Plini, P.; Giusto, M.; Valt, M.; Salzano, R.; Montagnoli, M.; Cagnati, A.; Crepaz, G.; Sigismondi, D. (2011). Snow cover monitoring with images from digital camera systems. Italian Journal of Remote Sensing, 43. https://doi.org/10.5721/ItJRS201143211
The Copernicus expansion missions include three major satellites operating in the microwave domain and dedicated to, or of interest for, monitoring cryospheric surfaces. They operate in different modes (radar altimetry, Synthetic Aperture Radar and radiometry) across a wide range of frequencies. Despite the considerable knowledge accumulated over the last four decades of microwave observations and the numerous applications for the cryosphere, the new capabilities of these sensors make it difficult to assess their potential from existing observations alone. The selection of optimal mission parameters and the conception of pre-flight retrieval algorithms can be informed by theoretical predictions and sensitivity analyses of the microwave signal to the characteristics of the surface.
Here we present recent advances in the Snow Microwave Radiative Transfer model (SMRT), available to the community through GitHub. The model, initiated in 2015 in the framework of an ESA project to explore the role of snow microstructure in the microwave signature, has been extended in many directions since then. Thanks to community efforts, it is able to simulate not only a multi-layered snowpack but also sea ice, frozen lakes and blue ice, which makes it suitable for most cryospheric surfaces on Earth. This poster presents the most significant recent advances:
- an altimetric waveform simulator has been added to compute the backscatter return, which makes SMRT suitable for CRISTAL.
- a new microstructure formulation has been introduced to link measurable snow properties to electromagnetic properties with a fully-traceable physical chain. The main expected benefit is for CIMR and CRISTAL at intermediate and high frequencies (>19 GHz), where snow scattering is significant.
- the high-frequency scattering behavior has been consolidated with the strong contrast expansion theory to cover the full range of frequencies considered for the expansion missions, up to 89 GHz as on CIMR.
- the full range of snow, firn and bubbly ice density can now be treated in a consistent way, which is relevant on ice-sheets at low frequencies as for CIMR and ROSE-L.
- rough surface scattering is being improved with a new AIEM implementation to compute more realistic cross-polarization and surface-volume multiple scattering. The benefit is expected for low-frequency radars such as Sentinel-1 and ROSE-L.
- numerical stability, usability and performance are also continually improving.
To conclude, SMRT is now a mature and feature-rich open-source radiative transfer model for the cryosphere community. It allows consistent cross-sensor simulations to exploit the fleet of existing and future Copernicus missions in synergy.
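A minimal passive simulation illustrates the SMRT workflow; the calls follow the package's documented API (github.com/smrt-model/smrt), though the layer values here are arbitrary and should be checked against the current release.

    from smrt import make_snowpack, make_model, sensor_list

    # Two-layer snowpack: thickness (m), exponential microstructure,
    # density (kg m-3), temperature (K), correlation length (m).
    snowpack = make_snowpack(thickness=[0.3, 1.0],
                             microstructure_model="exponential",
                             density=[250, 350],
                             temperature=[265, 270],
                             corr_length=[0.1e-3, 0.3e-3])

    model = make_model("iba", "dort")            # IBA scattering + DORT solver
    radiometer = sensor_list.passive(37e9, 55)   # 37 GHz, 55 deg (CIMR-like)
    result = model.run(radiometer, snowpack)
    print(result.TbV(), result.TbH())            # brightness temperatures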
Seasonal snow can cover up to one fourth of the total land area during the northern hemispheric winter. Accordingly, snow is an important parameter in regional to global climate and hydrological models (IPCC, 2021). Depending on the research purpose, different user groups require either information on snow cover viewable from above or information on snow cover on the ground. In both cases, fractional snow cover is commonly preferred to binary information (snow_cci URD v3.0, 2021; Malnes et al., 2015).
Within the ESA SnowPEx project (2014-2016), global and hemispherical satellite snow products generated from different sensors using different approaches were intercompared and evaluated with in-situ data and reference snow maps from high-resolution optical satellite data. The validation results revealed that snow products generated by the SCAmod algorithm (Metsämäki et al., 2005; 2012; 2015) performed well in terms of classification quality compared to other products. In addition, SCAmod-based snow cover extent products provide the snow cover fraction on the ground using a canopy correction, while most other participating products map, in forested areas, the snow cover viewable on top of the forest canopy, often as binary snow information.
In the frame of the ESA Climate Change Initiative Extension (CCI+), snow was added as one of nine new Essential Climate Variables (ECVs). The main objective of the snow_cci project is the generation of fully validated and homogeneous time series of snow cover extent from optical satellite sensors (MODIS, AVHRR, SLSTR, (A)ATSR-2) and of snow water equivalent products from passive microwave satellite data and in-situ data. For the snow cover extent time series from MODIS, the processing chain was developed to generate daily global snow cover fraction (SCF) maps with a pixel spacing of about 1 km for the period 2000-2020. It consists of four main modules: (i) reading and geolocation of the Terra MODIS L1B radiance data, (ii) detection of clouds, (iii) pre-classification of snow-free areas, and (iv) the SCAmod-based estimation of fractional snow cover, separating in forested areas the snow cover fraction on the ground (SCFG) and the snow cover fraction viewable on top of the forest canopy (SCFV). As auxiliary input, the SCAmod algorithm requires (1) reflectance information for snow-free forest, non-forested ground and wet snow, as well as (2) pixel-based information on the transmissivity of the forest canopy. The reflectance values for forest and ground were changed from one global value (Metsämäki et al., 2005; 2012; 2015) to global maps, as suggested by Salminen et al. (2018), estimating the particular reflectance per pixel from a 15-year time series of MODIS band 4 data (λ ~ 550 nm). For densely forested regions, a method combining the observed time series statistics, spatial gap-filling, and iterative modelling of the forest and ground components of the reflectance was applied. For the reflectance of wet snow, the constant value was updated to 0.59, based on time series statistics from eight globally distributed study regions observed at full snow cover conditions during the melting season 2007. Finally, forest transmissivity was derived from global maps of forest types (ESA Land Cover CCI, 2017) and tree cover density (Hansen et al., 2013). A method linking the auxiliary data ensures that both the SCFV and SCFG products are consistent and identical in non-forested areas. For each auxiliary data set, the associated uncertainty is estimated, contributing to the overall error of the products. The improvement of the auxiliary data and the retrieval method was significantly supported by feedback from the snow_cci validation team and the climate research group during the development cycle.
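The retrieval logic can be illustrated with a simplified SCAmod-style inversion; the exact formulation of Metsämäki et al. differs in detail, and all values below except the wet-snow reflectance 0.59 are placeholders.

    import numpy as np

    def scamod_like_scf(r_obs, t2, r_snow=0.59, r_ground=0.10, r_forest=0.08):
        """Simplified sketch: observed reflectance modelled as a canopy term
        plus a transmitted ground/snow mixture,
            r_obs = (1 - t2) * r_forest
                    + t2 * (scf * r_snow + (1 - scf) * r_ground),
        solved for the on-ground snow fraction scf (t2 = canopy
        transmissivity squared)."""
        scf = (r_obs - (1 - t2) * r_forest - t2 * r_ground) \
              / (t2 * (r_snow - r_ground))
        return np.clip(scf, 0.0, 1.0)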
The resulting 20-year SCFV and SCFG climate research data packages from MODIS data provide daily global maps of fractional snow cover and the associated uncertainty per observed pixel. Minimum and maximum snow cover (outside of glaciers and ice sheets) typically occur in August and February, respectively, with relatively large areas (~9% of the global land area) hidden by polar night conditions in December. The products will be available for download from the CEDA archive (https://archive.ceda.ac.uk/) from early 2022 onwards.
We will present details on the processing chain and an analysis of the time series of over 20 years and highlight results and aspects of the evaluation process using higher-resolution reference data. During snow_cci Phase 2 further development of the algorithm is planned, and a new product version is foreseen for 2023.
Seasonal snow is a key factor in the global water cycle and climate system. Snow is characterized by high spatial and temporal variability, both intra- and interannually, and changes in snow cover tend to amplify climate fluctuations. Up to 50 million km2 of the Northern Hemisphere is covered by terrestrial snow in winter. Therefore, satellite remote sensing provides the only possible means of deriving comprehensive information on snow cover. There are many different Earth Observation (EO)-based global/continental snow products available. Since these evidently show differences, it is of great interest to investigate how they deviate from each other. This information is necessary, e.g., when using these data in Earth models.
The ESA SnowPEx project was initiated in 2014. The aim was to evaluate different global/continental-scale snow products featuring Snow Cover Extent (SCE) and Snow Water Equivalent (SWE) by means of intercomparison and validation against reference data. The reference data included in-situ measurements and snow maps from high-resolution satellite data. One goal was to develop a consistent protocol for the intercomparison and validation that could be applied to any dataset brought into the investigation, also in the coming years. The project produced a detailed analysis of the performance of the different snow products. In the subsequent project SnowPEx+, we continue these activities with new products and for more recent years.
Here we focus on the validation against in-situ measurements for the new set of SCE products and for the years 2014-2020, following the earlier developed validation protocol. In-situ snow depth measurements have been collected from archives maintained by the Global Historical Climatology Network (GHCN), by WMO (distributed by ECMWF) and by the All-Russian Research Institute of Hydrometeorological Information (RIHMI). Together these datasets cover the seasonally or permanently snow-covered areas, as well as areas with ephemeral snow, quite well.
There are two kinds of SCE products to be evaluated: (i) those providing viewable snow (i.e., snow retrieval relates to what is seen from above the canopy, including on-canopy snow) and (ii) those providing snow-on-ground, i.e., the snow coverage under the canopy. In forests, these two product types are not directly comparable due to their different nature, which needs to be taken into account in the in-situ validation. Furthermore, some of the SCE products provide Snow Cover Fraction (SCF, %) within a product pixel, while others indicate whether the pixel is snow-covered or snow-free (binary snow information). Since the in-situ data feature only snow depth, not SCF, it was necessary to develop a protocol that first converts all snow information into binary snow/non-snow data and then uses these in the comparisons. For snow depth, a threshold of 2 cm was found reasonable for judging whether the ground is snow-covered or snow-free. For the SCF products under evaluation, several thresholds were tested and a threshold of 15% was found the most useful. Naturally, binary snow products needed no conversion.
After binary conversion, a contingency matrix between the in-situ data and the SCE data is generated and generally applicable statistical measures are calculated, including, among others, Recall, Precision, F-score and False Alarm Rate.
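A compact sketch of the protocol's binarisation and scoring, assuming paired in-situ snow depths and product SCF values; note that "false alarm rate" definitions vary, and FP/(FP+TN) is used here.

    import numpy as np

    def validation_scores(snow_depth_cm, scf_percent, sd_thr=2.0, scf_thr=15.0):
        """Binarise both datasets with the protocol thresholds (2 cm snow
        depth, 15% SCF) and derive contingency-table scores."""
        obs = np.asarray(snow_depth_cm) >= sd_thr   # in-situ: snow on ground
        est = np.asarray(scf_percent) >= scf_thr    # product: snow-covered
        tp = np.sum(obs & est);  fn = np.sum(obs & ~est)
        fp = np.sum(~obs & est); tn = np.sum(~obs & ~est)
        recall = tp / (tp + fn)
        precision = tp / (tp + fp)
        f_score = 2 * precision * recall / (precision + recall)
        far = fp / (fp + tn)
        return dict(recall=recall, precision=precision,
                    f_score=f_score, far=far)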
We present the validation results for each of the twelve products for different yearly seasons and separately for forested and non-forested areas. The dozen Global/Northern Hemispheric products to be evaluated are provided courtesy of different institutions and collaborative projects. Their cell size varies between 1 km and 24 km; due to the differences in product grids, specific care must be taken when interpreting the validation results. SnowPEx does not try to identify the 'best' product, but seeks information on the areas or seasons that are challenging for different products and, on the other hand, those that are successfully mapped. In the end, one of the outcomes is also an assessment of the feasibility of the validation protocol.
Development of the dynamic snow densities for the GlobSnow snow water equivalent retrieval
Pinja Venäläinen, Kari Luojus, Juha Lemmetyinen, Jouni Pulliainen, Mikko Moisander, Matias Takala
Finnish Meteorological Institute, PO Box 503, FIN-00101 Helsinki, Finland.
Snow water equivalent (SWE) is an important property of the seasonal snow cover, and estimates of SWE are required in many hydrological and climatological applications, such as climate model evaluation and forecasting freshwater availability. Traditionally, SWE has been measured manually at snow transects, but a good alternative to in-situ measurements is to use spaceborne passive microwave observations, which can provide global coverage at daily timescales. The reliability and accuracy of SWE estimates made using microwave radiometer data can be improved by assimilating radiometer observations with weather station snow depth observations. The ESA GlobSnow and succeeding projects have produced a family of daily SWE products that utilize this assimilation approach. These climate data records span over 40 years.
We present an approach for implementing spatially and temporally varying snow density fields to further improve the accuracy of the GlobSnow SWE retrievals. We created these density fields using manual snow transect measurements from Finland, Russia, Canada and the eastern USA, and automated SNOTEL measurements for the western USA. Different versions of the dynamic densities were produced using different interpolation methods and combinations of snow density data. Kriging interpolation and inverse distance weighted regression (IDWR) are the two methods used to obtain density fields covering the whole Northern Hemisphere. The density fields were built either from data of a single year or by taking, for each day of the year, the mean density calculated for the corresponding day over ten years. The dynamic snow densities can be used either to post-process existing SWE products or they can be implemented into the GlobSnow SWE production.
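As an illustration of the interpolation step, a plain inverse-distance weighting over station densities might look as follows; the study's IDWR and kriging variants are more elaborate.

    import numpy as np

    def idw_density(stations_xy, station_density, grid_xy, power=2.0):
        """Inverse-distance-weighted snow density field.
        stations_xy: (S, 2) station coordinates; station_density: (S,)
        densities (kg m-3); grid_xy: (G, 2) target grid coordinates."""
        d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :],
                           axis=2)
        w = 1.0 / np.maximum(d, 1e-6) ** power
        return (w * station_density).sum(axis=1) / w.sum(axis=1)

    # SWE then follows from depth and the interpolated density:
    # swe_mm = snow_depth_m * density_kg_m3 (kg m-2, i.e. mm water equivalent)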
Both post-processing the GlobSnow product with dynamic snow densities and implementing the densities into the SWE retrieval produced similarly improved results: the overestimation of small SWE values and the underestimation of larger SWE values are both reduced. The root mean squared error and mean absolute error both decrease by about 2 mm with the use of dynamic snow densities. The dynamic densities also shift the peak snow mass to two weeks later than in the original GlobSnow SWE product; this shifted peak snow mass is more in line with other snow mass estimates. However, both post-processing and implementing the snow densities into the retrieval reduce the peak snow mass, and the reduction in total snow mass was larger with post-processing than with implementing the densities into the retrieval.
Luojus, K., Pulliainen, J., Takala, M., Lemmetyinen, J., Mortimer, C., Derksen, C., Mudryk, L., Moisander, M., Venäläinen, P., Hiltunen, M., Ikonen, J., Smolander, T., Cohen, J., Salminen, M., Veijola, K., and Norberg, J.: GlobSnow v3.0 Northern Hemisphere snow water equivalent dataset. Scientific Data, 8:163, https://doi.org/10.1038/s41597-021-00939-2, 2021.
Pulliainen, J., Luojus, K., Derksen, C., Mudryk, L., Lemmetyinen, J., Salminen, M., Ikonen, J., Takala, M., Cohen, J., Smolander, T., and Norberg, J.: Patterns and trends of Northern Hemisphere snow mass from 1980 to 2018, Nature, 581: 294–298, https://doi.org/10.1038/s41586-020-2258-0, 2020.
Takala, M., Luojus, K., Pulliainen, J., Derksen, C., Lemmetyinen, J., Kärnä, J.-P., and Koskinen, J.: Estimating northern hemisphere snow water equivalent for climate research through assimilation of space-borne radiometer data and ground-based measurements, Remote Sens. Environ., 115, 3517–3529, https://doi.org/10.1016/j.rse.2011.08.014, 2011.
Venäläinen, P., Luojus, K., Lemmetyinen, J., Pulliainen, J., Moisander, M., and Takala, M.: Impact of dynamic snow density on Globsnow snow water equivalent retrieval accuracy. The Cryosphere, 15: 2969–2981, https://doi.org/10.5194/tc-15-2969-2021, 2021.
Webcam image processing for obtaining environmental data has emerged in recent years. Digital imagery from environmental camera networks, which is mostly available for free, can be used in such processing techniques. The FMIPROT Camera Network Portal is a platform that brings together data and metadata from multiple camera networks and uses them to provide near real-time observations of different parameters, e.g., vegetation indices, snow cover and snow depth. Snow cover is an input parameter for weather forecasting, climatology and hydrology; it is therefore essential for assessing natural hazards such as avalanches or floods and managing the associated risks.
The Global Climate Observing System (GCOS) has specified snow cover as one of the 50 essential climate variables (ECVs) to be observed by satellite remote sensing. Copernicus provides multiple satellite-derived snow cover products through the Copernicus Land Monitoring Service. Our latest studies showed that snow cover data derived from webcam imagery can be used in the validation of satellite-derived products.
We have used the FMIPROT Camera Network Portal as a platform to intercompare the produced fractional snow cover data with the Copernicus snow cover products. Our aim is to assess the feasibility, benefits and drawbacks of using webcam imagery collected from different sources for near real-time validation of Copernicus products. This study is implemented in the ESA IDEAS+ project, as a part of the QA4EO framework.
First, a simple HTTP API was developed to fetch the webcam-derived data from the platform according to temporal and spatial coverage. This API is hosted in the portal, which is also freely available. The API is then used to visualize the latest observations from the webcam imagery on interactive maps on the portal webpage. In addition, processing chains were established to download Copernicus products on the server and compare the observations using the API. The comparison chains run on the server regularly: every day for the last 7 or 21 days of data and every month for the last year of data. The results are stored on the server; RMSE values and data pairs are visualized on interactive maps and in scatter plots, respectively.
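A hypothetical client-side sketch of such a chain is given below; the endpoint, parameter names and response schema are invented for illustration and do not reproduce the actual FMIPROT API.

    import requests
    import numpy as np

    # Hypothetical endpoint; the real FMIPROT API schema is not shown here.
    BASE = "https://example.org/fmiprot/api/observations"

    def fetch_fsc(camera_id, start, end):
        """Fetch webcam-derived fractional snow cover for one camera."""
        r = requests.get(BASE, params={"camera": camera_id,
                                       "parameter": "fsc",
                                       "start": start, "end": end},
                         timeout=30)
        r.raise_for_status()
        return r.json()   # assumed: list of {"date": ..., "value": ...}

    def rmse(webcam_fsc, satellite_fsc):
        """RMSE between paired webcam and satellite FSC values."""
        diff = np.asarray(webcam_fsc) - np.asarray(satellite_fsc)
        return float(np.sqrt(np.mean(diff ** 2)))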
The coverage of the webcam data in the operational monitoring chains has been increased by adding more cameras. Currently, data from 15 cameras in Finland (from the MONIMET camera network, FMI weather stations and Finnish airports) and 2 cameras in the Alps (from the PhenoCam Network) are included in the comparisons. The satellite products used in the comparisons are the Copernicus Snow Cover Extent Europe (500 m), Copernicus Snow Cover Extent Northern Hemisphere (1 km) and Copernicus High-Resolution Snow Cover Pan-European (20 m) products. Our efforts continue to include more webcam and satellite data in the comparison chains, to implement two-dimensional location metadata in the processing software so that webcam data in complex terrain can be included, and to study different methods for retrieving snow cover from webcam imagery.
Mountain snow plays a key role in the climate system due to its ability to control energy and mass transfers between the atmosphere and the surface. Snow is sensitive to various factors, including the influence of weather, and monitoring its spatial and temporal variability remains a scientific challenge.
The Sentinel-1 satellites, operated by the European Space Agency, provide a means of monitoring and studying the snow cover with a spatial resolution and revisit time that are highly suitable for mountainous areas. SAR (Synthetic Aperture Radar) images from Sentinel-1 are acquired in C-band and provide amplitude and phase values for each pixel.
The main objective of this study is to monitor wet snow conditions from Sentinel-1, to examine their variation over time by cross-checking wet snow with independent snow and weather estimates, and to study their distribution taking into account terrain characteristics such as elevation, orientation and slope. Our goal is to derive useful representations of daily or seasonal snow changes that help to easily identify wet snow elevations and determine melt-out days in an area of interest.
To derive snowmelt lines, we use Sentinel-1 data from ascending (late afternoon) and descending (early morning) orbits, resulting in a continuous time series from August 2016 to the end of July 2021. We also use several segmentation techniques to derive wet snow maps (using a fixed threshold, adaptive thresholds, dynamic thresholding, advanced learning techniques, etc.). We rely on the CNES French facilities to perform pre-processing of the SAR images using the Orfeo ToolBox software and the S1Tiling code. Snowpack simulations from the Crocus snow model will also be used. The Crocus model simulates the evolution of the physical properties of the snowpack, its detailed stratigraphy and the underlying soil properties. Within the model chain, the French mountains are represented in a conceptual way as massifs (23 massifs in the Alps, 10 massifs in the Pyrénées and 2 massifs for Corsica).
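The simplest of these segmentation options, a fixed threshold on the backscatter change relative to a dry-snow or snow-free reference (commonly around -2 to -3 dB, in the spirit of Nagler and Rott), can be sketched as follows.

    import numpy as np

    def wet_snow_mask(sigma0_db, sigma0_ref_db, threshold_db=-2.0):
        """Flag pixels whose backscatter drops below the reference by more
        than `threshold_db` as wet snow (fixed-threshold segmentation)."""
        return (sigma0_db - sigma0_ref_db) < threshold_db

    def melt_out_day(series_db, ref_db, dates, threshold_db=-2.0):
        """Last date a pixel is flagged wet: a simple melt-out estimate."""
        wet = (series_db - ref_db) < threshold_db
        return dates[np.nonzero(wet)[0][-1]] if wet.any() else None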
We will present the aggregated synthetic snowmelt lines obtained with the different segmentation methods over a selection of massifs and discuss evaluation issues for our estimates, including a discussion of relevant mathematical metrics, the use of in-situ measurements or independent satellite observations, and the use of soil/snowpack and meteorological reanalyses. Emphasis will also be placed on the potential use of these products by avalanche forecasters to improve snow monitoring and avalanche forecasts.
The past decade has brought significant advances in estimating the rate of mass loss of the Greenland ice sheet. In this regard, the ability of recent space-borne sensors to provide data at high temporal resolution for monitoring surface melting is fundamental to addressing the processes driving the recent trends and mass loss. Sentinel-1 C-band Synthetic Aperture Radar (SAR) images can quantify the spatial variability of melt at a resolution of tens of meters, much higher than passive microwave observations and model outputs. SAR melt detection methods typically use a simple threshold on the backscatter decrease relative to the dry-snow average. In this study, we use Sentinel-1 SAR imagery, Generative Adversarial Networks (GANs) and eXtreme Gradient Boosting (XGBoost), along with the outputs of the regional climate model MAR, to generate enhanced-resolution (10 m) surface melting maps over Greenland. Lastly, we report on the performance of the machine learning outputs against in-situ measurements collected at the PROMICE automatic weather stations distributed over the Greenland ice sheet.
Seasonal snow is an important component of the global climate system. Mountain regions are especially sensitive to changes in snow cover extent, and observing snow cover variability is crucial for studying hydrological, ecological, and socioeconomic processes. Optical satellite data are an essential source for observing snow cover variability from regional to global scales; however, their temporal resolution is strongly affected by cloud coverage. Public webcams have the major advantage of monitoring and detecting snow cover below the clouds. In addition, they offer great potential for providing local-scale snow cover information at high spatiotemporal resolution with increased areal coverage thanks to their high availability.
Here, we present a study using public webcams to derive the regional snow line elevation in an alpine catchment and compare our results to regional snow line information derived from Moderate Resolution Imaging Spectroradiometer (MODIS) and Sentinel-2 snow cover products. We demonstrate the superior temporal resolution of webcam-based snow line retrieval by presenting a continuous time series of daily snow line elevations between October 2017 and the end of June 2018. Thanks to our hourly webcam data archive, the time series contains only one day of missing data caused by excessive cloud coverage. Even though the area observed by the selected webcams covers only a small fraction of the catchment covered by the satellite data sets, the regional snow line elevations derived from the webcams are in good agreement with the snow lines derived from the different satellite-based snow cover products. Webcam snow lines lie on average 53.1 m below snow lines derived from the Sentinel-2 fractional snow cover product provided by the Copernicus Land Monitoring Service (CLMS). In addition, webcam-based snow lines lie on average 55.8 m below (33.7 m above) MODIS snow lines using an NDSI threshold of 0.4 (0.1), based on version 6 of the MODIS data sets MOD10A1 (Terra) and MYD10A1 (Aqua) provided by the NASA Distributed Active Archive Center at the National Snow and Ice Data Center (NSIDC); this indicates the importance of the selected NDSI threshold. We present the main reasons for the observed discrepancies and highlight the effectiveness of webcam-based snow line elevation retrieval in filling temporal gaps in satellite-based snow observations. Finally, we demonstrate that webcam-based snow cover information is not only a powerful data source for improving and complementing satellite-based snow line retrieval, but can also be used to visualize and evaluate differences in snow line elevation and snow cover estimated from different data sources.
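One simple way to turn a binary snow map plus a DEM into a regional snow line elevation is sketched below; this is illustrative only, and the study's retrieval method may differ.

    import numpy as np

    def snow_line_elevation(snow_mask, dem, band=50.0):
        """Lowest elevation band (in DEM units) whose pixels are at least
        50% snow-covered, taken as a simple regional snow line estimate."""
        bins = np.arange(dem.min(), dem.max() + band, band)
        idx = np.digitize(dem.ravel(), bins)
        snow = snow_mask.ravel()
        for b in range(1, len(bins)):
            sel = idx == b
            if sel.any() and snow[sel].mean() >= 0.5:
                return bins[b - 1]    # lower edge of the first snowy band
        return None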
Snow in Norway is an important parameter for the hydropower sector and a hazard source through avalanches, but it also plays an important role for tourism. Due to climate change and the associated warming, monitoring changes in snow coverage and snow depth is crucial.
As part of the ESA project “Retrieval of snow depth from Sentinel-1 data”, we will evaluate the snow depth retrieval algorithm proposed by Lievens et al. (2019) for Norwegian conditions. In this approach, snow depth is retrieved from the ratio between co-polarized and cross-polarized C-band backscatter over snow-covered regions using the Sentinel-1 IW mode. In addition, Terra MODIS data are used to assess whether the ground is snow covered. The method applies all available Sentinel-1 tracks over the study areas; in northern Norway, this often yields 6-8 tracks per 6-day repeat cycle, so the temporal sampling is relatively dense. In a later phase of the project, we will also evaluate Sentinel-1 EW mode data over Svalbard with daily acquisitions, but using a different polarization ratio (HH/HV). The project aims to assess the accuracy of the algorithm and evaluate the potential to operationalize the method in a Norwegian near real-time snow service, which could provide valuable data to Norwegian hydrological and meteorological users. We also want to test different spatial resolutions (e.g., 100 m, 500 m and 1000 m) and we aim at testing the method in different land cover types (mountains, bogs, forested areas and glaciers).
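The underlying idea can be caricatured in a few lines: snow depth is taken proportional to the rise of the cross-polarisation ratio above its snow-free reference. The scaling constant below is a placeholder, not the published calibration of Lievens et al. (2019).

    import numpy as np

    def snow_depth_from_cr(vh_db, vv_db, cr_ref_db, a=0.5):
        """Simplified cross-ratio snow depth: CR = VH - VV (dB); depth (m)
        grows with the CR rise over the snow-free reference cr_ref_db.
        `a` (m/dB) is an illustrative placeholder scaling."""
        cr_db = vh_db - vv_db
        return np.clip(a * (cr_db - cr_ref_db), 0.0, None)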
For the evaluation of the derived Sentinel-1 snow depth product, we use datasets available from meteorological stations and field campaigns. In addition to these sparse datasets, we will also use hydrological models and land surface models to assess the accuracy. Moreover, we will derive snow depth estimates from the NASA Ice, Cloud, and land Elevation Satellite 2 (ICESat-2) during cloud-free transects. ICESat-2 carries a laser altimeter system capable of measuring topographic changes due to snow accumulation. Our evaluation approach allows us to compare the Sentinel-1 snow depth product at different spatial scales and in different regions, to investigate whether the snow depth retrieved from Sentinel-1 can capture the spatial and temporal snow depth variability.
Here we present first results of the evaluation of the Sentinel-1 snow depth product over Norway, using a set of in-situ data, and a comparison with snow depth derived from the NASA ICESat-2 satellite.
Since the launch of the Sentinel-1 satellites, mountainous areas can be observed regularly by Synthetic Aperture Radar (SAR) thanks to the all-day/all-weather capability of SAR imagery. The 6-day repeat-pass acquisitions over Europe create image time series in which the temporal evolution of the radar backscatter can be observed in C-band at about 10-meter resolution. Higher-resolution images can be obtained in X-band from commercial satellites such as TerraSAR-X and PAZ; thanks to scientific projects, specific areas also benefit from quite regular acquisitions. Both sources of multitemporal SAR images are particularly interesting for monitoring snow- and ice-covered regions and analyzing their evolution in different specific areas such as:
• Ice aprons (IA) which are small ice bodies of irregular shape present on steep slopes and complex topographies (Guillet and Ravanel, 2020),
• Valley glaciers, with their accumulation areas where snow transforms into firn and ice, and their ablation areas with bare ice after snowmelt,
• Ice-free areas where dry then wet snow creates scattering changes which can be used to map snow covered areas (Karbou et al., 2021).
In this communication, we will present the analysis of 2 years of regular Sentinel-1 acquisitions (2020-now) and almost 2 years of PAZ acquisitions (about 20 images per year) over the Mont-Blanc Massif (MBM) in the western Alps. This analysis is performed over a set of 19 IAs at various elevations and orientations, over several well-known glaciers such as the Mer de Glace and Argentière glaciers, and over ice-free areas at equivalent elevations.
We first analyzed the evolution and temporal correlation of two characteristics:
• The backscattering coefficient in X and C bands, after calibration using a high-resolution DEM and local averaging in regions of interest to reduce the speckle effect,
• The coefficient of variation (CV), defined as the standard deviation over the mean, a conventional statistical feature for SAR images which reveals the presence of heterogeneity within the estimation window (see the sketch below).
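A windowed CV can be computed as follows; this is a generic sketch, not the exact estimation used in the study.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_cv(amplitude, window=7):
        """Coefficient of variation (std/mean) of SAR amplitude in a sliding
        window; values well above the speckle-only level indicate
        heterogeneity within the estimation window."""
        mean = uniform_filter(amplitude, window)
        mean_sq = uniform_filter(amplitude ** 2, window)
        std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
        return std / np.maximum(mean, 1e-12)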
In a second step we added the use of meteorological data (temperature, precipitation, wind, ...) to perform a statistical analysis and deduce the links between the physical parameters of the selected areas and their class.
As expected, the evolution of the backscattering coefficient of the snow is mainly related to the meteorological conditions, with a strong decrease when the snow becomes wet. The behavior is similar in X and C bands, but differences in the attenuation can be observed in the different areas. More differences are observed on ice aprons, which also show an increase of the CV in the summer season, probably related to surface heterogeneities that were not expected to appear on these ice bodies. The differences observed in the temporal profiles are a useful source of information to improve our knowledge of the ice/snow response in SAR imagery and to train machine learning algorithms to perform automatic classification of the different types of ice/snow-covered areas.
References:
Guillet G. and Ravanel L. 2020. Variations in Surface Area of Six Ice Aprons in the Mont-Blanc Massif since the Little Ice Age. Journal of Glaciology 66 (259): 777–89. https://doi.org/10.1017/jog.2020.46.
Karbou, F. et al. 2021. Monitoring Wet Snow Over an Alpine Region Using Sentinel-1 Observations. Remote Sensing 13, 381.
The most recent evolution of the PlanetScope data products, SuperDove, has been deployed since late 2019. In comparison to the previous generation of PlanetScope Dove data, which includes blue, green, red and near-infrared (NIR) bands, the SuperDove data provide information from four additional spectral bands, i.e. coastal blue, green I, yellow, and red edge. As more SuperDove satellites join the existing PlanetScope constellation, daily SuperDove image acquisition is likely to become achievable in the near future. However, since the SuperDove data are a collection of images from different sensors in a cubesat constellation, radiometrically calibrating the PlanetScope products across sensors to achieve results comparable to other satellite systems is a challenge. Although data fusion techniques exist that use other satellite systems to calibrate the PlanetScope Dove surface reflectance product to the range of commonly used satellite systems such as Landsat and Sentinel, such techniques are hard to apply to the unique bands that only exist on SuperDove. Therefore, it is necessary to assess the radiometric quality before application to real-world problems. Here, we assess the data quality of the SuperDove surface reflectance product as well as the potential benefits of the additional spectral information for tree crop classification using a one-year time series of SuperDove satellite data. Preliminary results show that 90% of the SuperDove time series had less than 5% spectral variation for selected pseudo-invariant features. The SuperDove data tend to overestimate surface reflectance when assessed against in-situ spectroradiometer measurements, with the overestimation being largest in the yellow, red, red-edge, and NIR bands. When employing the one-year SuperDove time series for tree crop classification using a convolutional long short-term memory recurrent neural network, the extra bands improved the likelihood of correct classification by as much as 7% for specific tree crops compared to the traditional 4-band products. However, some plantations are more difficult to classify than others, possibly due to insignificant seasonal patterns or the small amount of training data. In these cases, unsupervised classification might improve the classification of those plantations where ground truth data are missing.
Commercial “New Space” players can play an important role in the international Earth Observation (EO) strategy. Some of these new missions are potential candidates for ESA's Earthnet Programme Third Party Missions (TPMs), the ESA framework for integrating non-ESA missions into the overall ESA EO strategy. In this context, ESA has set up a project to assess the quality, suitability and usability of these missions and to establish dialogues with the various mission providers in order to improve the overall coherence of the EO system. This project is known as the Earthnet Data Assessment Pilot (EDAP) [1]. The EDAP project has defined a set of guidelines that establish a framework for the quality assessments it performs, aligned with the principles of QA4EO [2], and is also defining guidelines on data usability assessment. The EDAP quality assessment framework is designed to provide a thorough review of the most important aspects of mission quality, delivering the gathered information in a Quality Assurance Report (QAR), also summarized in a color-coded Cal/Val Maturity Matrix. The Cal/Val Maturity Matrix is divided into two main sections: Documentation Review and Validation Summary. These sections are themselves divided into sub-sections, which constitute the different aspects of the data product that should be assessed and graded as Basic, Good, Excellent or Ideal. Moreover, the Validation Summary is further detailed in the Detailed Validation Cal/Val Maturity Matrix, breaking down the validation methodologies used and the results for particular performance metrics. Given the variety of sensors, some sub-sections are handled separately; moreover, a clear distinction is made between Optical and Synthetic Aperture Radar (SAR) sensors, for which separate guidelines have been released. In parallel with the product quality evaluation, the EDAP project is working towards the establishment of a new, robust data usability assessment framework, whose guidelines are continually revised and improved. The usability assessment focuses on evaluating data usability against a set of applications from different domains, with the aim of providing users with an application-oriented “fitness for purpose” summary for the specific mission. Eventually, the usability assessment delivers a color-coded, easily readable data usability matrix, analogous to the quality assessment case. This work will present the EDAP framework and the ongoing development of the data usability assessment guidelines.
Index Terms— Earthnet, Quality Control, New Space, Usability, Quality, Calibration, Validation
REFERENCES
[1] R. Mannan, K. Halsall, C. Albinet, G. Ottavianelli, P. Goryl, V. Boccia, A. Melchiorre, A. Piro, D. Giudici, N. Fox, S. Hunt, S. Saunier, “ESA's Earthnet Data Assessment Pilot: paving the way for new space players”, Sensors, Systems, and Next-Generation Satellites XXIII Conference, October 2019.
DOI: 10.1117/12.2532818
[2] QA4EO, “Quality Assurance for Earth Observation”, 2019, http://qa4eo.org
Topographic data are an important source of information for the processing of Earth Observation (EO) products and, thus, for developing reliable EO-based services. Within this context, the Copernicus Programme made the important effort of making available a high-quality elevation dataset that can be used as a harmonised elevation reference for downstream applications: the Copernicus Digital Elevation Model (DEM), also known as the CopDEM dataset. The latter is a digital surface model (DSM) obtained from the X-band, SAR-derived WorldDEM dataset. To produce the CopDEM, the WorldDEM data underwent a thorough editing and quality assurance process that ensured the dataset's homogeneity [1], [2]. Although the WorldDEM is a proprietary product with a spatial resolution of 0.4" (ca. 12 m) and global coverage, the derived CopDEM dataset is distributed via products that have been processed and resampled to different pixel sizes (ca. 10 m, ca. 12 m, ca. 30 m and ca. 90 m), with different spatial coverages (global or pan-European) and different licenses. These different formats of the CopDEM dataset products are termed instances. For example, the 10- and 12-meter products (both referred to as the EEA-10 instance and hereinafter called EEA-10_INSPIRE and EEA-10_DGED, respectively) are available over the 39 European States (EEA-39); the exploitation of EEA-10 data is restricted to eligible users. The EEA-10_DGED instance is the one most similar to the original WorldDEM product. The 30-meter instance (named GLO-30) is currently available worldwide under a free license, except for a few countries; the exploitation of the present GLO-30 non-public countries data is restricted to eligible entities. The 90-meter instance (named GLO-90) has global coverage and is available worldwide under a free license (i.e., a free, full and open data policy). More details about the CopDEM dataset, including the editing processing used for generating the different instances, can be found in [2].
The CopDEM dataset is expected to provide a common elevation layer for all Copernicus projects, including the Core Services and the Copernicus Contributing Missions (CCMs). In this regard, this study aims at understanding the impact of the different CopDEM instances on the orthorectification of Very High Resolution (VHR) optical data (i.e., on the geolocation accuracy of the orthorectified products). To this aim, 30 VHR images acquired by diverse CCMs and distributed by the Copernicus Programme as Level 1 - L1 (also known as System Corrected or Ortho Ready Standard products) within the VHR_IMAGE_2018 dataset [3] were selected for the analysis. It is recalled that the Copernicus VHR_IMAGE datasets provide, every three years, one cloud-free VHR optical coverage of the EEA-39 area within a predefined temporal window, i.e., the vegetation season. The sensors included in the production of these datasets have a ground sampling distance (GSD) ranging between 2 and 4 m. According to the native GSD of the sensor used for acquiring the data, some VHR_IMAGE products were distributed with a pixel size of 2 m and others with a pixel size of 4 m. These products were also processed to be spectrally, radiometrically and geometrically consistent. VHR_IMAGE data are distributed to users as L1 and L3 (i.e., orthorectified) products [3]. The VHR_IMAGE_2018 L3 products were orthorectified using the VHR_IMAGE_2018 Digital Elevation Model (DEM) dataset [3] and are therefore not suitable for the purposes of this research. The VHR_IMAGE_2018 L1 products, instead, were not orthorectified and are distributed with Rational Polynomial Coefficients (RPCs). Optical images can be orthorectified using the RPCs, a DEM (here, the CopDEM) and, optionally, a list of Ground Control Points (GCPs). The latter are expected to improve the quality of the orthorectification process; however, accurate GCPs are currently not (freely) available across the EEA-39 area. Therefore, only the RPCs and the CopDEM were used for orthorectifying the data in this experiment. For the objective of this work this is not considered a problem, because the interest of the analysis is in the relative evaluation of the impact of the different pixel sizes of the CopDEM instances on the orthorectification process, not in the absolute values of the products' geolocation accuracy.
As previously described, for performing the experiment, 30 VHR L1 products were used. They were sampled from the whole VHR_IMAGE_2018 dataset by considering the following criteria:
- Spatial Resolution: 15 images distributed with a pixel size of 2 m and 15 images distributed with a pixel size of 4 m.
- Orography: 10 images acquired over flat areas, 10 images acquired over hilly areas and 10 images acquired over mountainous areas (the distinction between the different orographic classes was based on the VHR_IMAGE_2018 dataset specifications [3]).
- Land Cover: all the images were acquired over areas characterized by a heterogeneous landscape mostly composed of the following land cover categories (of the Corine Land Cover classification): “Artificial surfaces”, “Agricultural areas” and “Natural and seminatural areas”.
All 30 VHR L1 images were orthorectified, by means of a commercial software (i.e., ENVI), using the different CopDEM instances (i.e., EEA-10_INSPIRE, EEA-10_DGED, GLO-30, GLO-90) and the RPC information. The outputs of the orthorectification processes are hereinafter referred to as L3_CopDEM products; consequently, for every L1 image, 4 different L3_CopDEM products were obtained. Subsequently, the same commercial software was used to identify the coordinates of the same features on every L3_CopDEM image and on a corresponding reference layer (RL): the so-called check points (or tie points). RLs are freely available VHR orthophotos (spatial resolution ≤ 0.25 m) acquired by airborne sensors and (freely) distributed by national authorities. Therefore, the position (i.e., the coordinates) of the RLs' check points is assumed to be the “truth” with respect to the position of the same check points in the L3_CopDEM images. In order to perform the check point identification, the RLs were initially resampled to match the pixel size of the corresponding VHR image (i.e., 2 or 4 m) and projected to the same reference system (i.e., EPSG: 3035). The coordinates of all the check point pairs found in every L3_CopDEM image were then used to compute the corresponding spatial shifts in the X and Y directions. These shifts were then used to estimate the geolocation accuracy by computing the planimetric Root Mean Squared Error (RMSE) (also known as 2D-RMSE [4]) of every image. Finally, the RMSE values of all the L3_CopDEM images orthorectified with the same CopDEM instance were averaged and compared.
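The final metric is straightforward; given per-check-point shifts in metres, the planimetric (2D) RMSE [4] is:

    import numpy as np

    def planimetric_rmse(dx, dy):
        """2D-RMSE from per-check-point X/Y shifts (m) between the
        orthorectified image and the reference layer."""
        dx, dy = np.asarray(dx), np.asarray(dy)
        return float(np.sqrt(np.mean(dx ** 2 + dy ** 2)))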
Findings showed that the geolocation accuracy of the VHR L3_CopDEM images is influenced by the pixel size of the CopDEM product used: the finer the pixel size of the CopDEM instance, the better the geolocation accuracy of the orthorectified VHR data. However, since the different CopDEM products are derived from a resampling of the original WorldDEM dataset, the results are similar across the different CopDEM instances. Indeed, the difference between the average RMSE values associated with VHR products orthorectified with the CopDEM EEA-10 instance (finest pixel size) and with the CopDEM GLO-90 instance (coarsest pixel size) is < 0.5 m. Considering that the GLO-30 and GLO-90 instances are freely and (almost) globally available via the Copernicus Programme, this result is important. It suggests that, for the orthorectification of VHR optical data, these CopDEM instances provide valuable topographic information that makes it possible to reach a geolocation accuracy similar to the one obtainable with the EEA-10 instance (i.e., the one most similar to the native spatial resolution of the WorldDEM dataset). Further research activities will be carried out to also understand the impact of other factors on the quality of the orthorectification process of optical VHR data (e.g., spatial resolution/pixel size of L1 products, orography, land cover, etc.), as well as the impact of using different orthorectification and check point identification algorithms.
References:
[1] Cenci L., M. Galli, G. Palumbo, L. Sapia, C. Santella and C. Albinet, "Describing the Quality Assessment Workflow Designed for DEM Products Distributed Via the Copernicus Programme. Case Study: The Absolute Vertical Accuracy of the Copernicus DEM Dataset in Spain," 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, 2021, pp. 6143-6146, doi: 10.1109/IGARSS47720.2021.9554393
[2] Copernicus, 2021: https://spacedata.copernicus.eu/web/cscda/dataset-details?articleId=394198 – Accessed: 19/11/2021
[3] ESA, Copernicus Space Component Data Access Portfolio: Data Warehouse 2014 – 2020, Issue: 2.8, Reference: COPE-PMAN-EOPG-TN-15-0004, 2021. Available at: https://spacedata.copernicus.eu/documents/20126/0/DAP+Release+phase2+V2_8.pdf/82297817-2b96-d3de-c397-776292336434?t=1633508426589
[4] Ross K., Geopositional Statistical Methods, Proceedings of the 2004 High Spatial Resolution Commercial Imagery Workshop, 2004. Available at: https://ntrs.nasa.gov/citations/20080021615
The Earth observation (EO) market, driven by the era of smallsat development, is expected to see 1,800 smallsats, the majority below 50 kg, launched in the next decade. Future EO systems are all about getting smaller and more compact, with Very High Resolution (VHR) sensors at accessible cost, while providing exceptional-quality data more frequently.
This paper will focus on introducing the new generation of a VHR microsatellite constellation developed by Chang Guang Satellite Technology Ltd. of China (CGSTL) and commercialized globally by HEAD Aerospace as CGSTL's strategic master distributor.
This constellation forms part of the larger Jilin-1 (JL) constellation, which comprises 33 on-orbit satellites, 31 of which offer Very High Resolution (VHR) submeter optical imagery. The JL constellation will be further expanded, with a confirmed launch schedule of 35 satellites planned for 2022. The full JL constellation is expected to comprise between 130 and 150 satellites by 2023, targeting a revisit of every 15 minutes at global scale.
These satellites offer state-of-the-art technology in a compact and cost-effective platform that forms part of an established and growing constellation. Established systems for data consistency and quality control are in place, and imagery is supplied in interoperable formats that can easily be ingested into most systems and used in conjunction with other vector and raster datasets. The most important component in ensuring the quality of the output data is the input data. This is exactly why we promote the quality of the imagery collected by these sensors, CGSTL's depth of experience in this field, and the quality, consistency and continuity customers need.
Currently, the VHR microsatellite constellation known as DailyVision@1m (JL03A, 03B & 03D) is composed of ten on-orbit satellites providing daily revisit globally at < 1 m resolution (0.7 m and 0.98 m). The constellation will be expanded: 35 JL satellites have a confirmed launch schedule in 2021, and the full constellation of 138 satellites in 2023 will offer a global revisit of every 14 minutes at 1 m resolution from 09:00 to 17:00 local time.
This DailyVision constellation is the first < 1 m microsatellite constellation, and the only one on the market using a linear push-broom sensor instead of frame sensors, offering a wide 18 km swath. The satellites have long-strip continuous imaging capacity, while traditional satellite image processing methods remain applicable. This future EO constellation introduces technical improvements in the optical sensor, propulsion system, deployable solar panels and phased-array antenna (enabling imaging and downlinking in parallel) in a compact 45 kg state-of-the-art satellite.
The Jilin-1 includes a variety of technologically advanced sensors that can be used together or individually:
- EarthScanner (JL-1 KF01/JL-1 KF01B): 0.5 m GSD with a 136 km swath (each satellite), capable of continuous imaging over 4,200 km for large-area coverage
- JL Stereo (JL-1 GF02A/B): 0.75 m GSD with a 40 km swath; agile satellites for stereo imaging, large-area mapping and monitoring
- HyperScan (JL-1 GP01/02): 5 m GSD hyperspectral imagery with a 110 km swath for natural resource and agriculture monitoring
- NightVision & Video Constellation (JL-1 SP03/07/08 & SP04/05/06): sub-meter video and night imaging
- DailyVision (JL-1 GF-3): 10 satellites and growing, with an early first pass on the 9:20 orbit; be the first to see what is happening
- JL-1 constellation in 2021: 56 satellites deployed in 8 orbits at 40-minute intervals, providing a 10-minute sub-meter targeting capability from 9:20 to 13:50
This new generation of small EO satellites allows low-cost access to space, making EO missions attainable for non-governmental organizations as well as traditional users who wish to have their own satellite capabilities while leveraging the capacity of the constellation as a whole.
The satellite manufacturing price is kept at an accessible level given the 45 kg mass. Because the satellite is very light, up to 9 satellites can be launched to Low Earth Orbit (LEO) on one small launcher with a 500 kg capacity. This business model offers a cost-effective way to operate a satellite constellation. The very compact satellite is 10 times lighter than satellites with similar performance.
In the case of a user operating only three of the microsatellites in 120° phasing, the constellation will be able to target anywhere on Earth once per day at a maximum off-pointing angle of ~35°.
This satellite constellation is based on mature, flight-proven technology, with 10 satellites currently on orbit.
For over forty years, the European Space Agency (ESA) Earthnet Programme has played an important role in providing the framework used to integrate non-ESA Earth Observation (EO) missions (i.e. Third Party Missions (TPM)) into the ESA EO TPM portfolio, complementing the ESA-owned EO missions portfolio, as part of the overall ESA EO strategy. European users are given easy and open access to the mission data contained in these mission portfolios.
In line with the latter objective of the programme, ESA continues to foster cooperation and collaboration not only with other national space agencies but also with commercial mission providers from all over the world. These new “space players” now have an important role in the international EO strategy, whose evolution is driven by the anthropogenic challenges this planet is facing. Some of these new space player missions can be considered potential Earthnet TPM; in order to assess this potential, ESA established the Earthnet Data Assessment Pilot (EDAP) project.
The EDAP project uses the guidelines set by the EDAP EO Mission Data Quality Assessment Framework, whose core is known as the Maturity Matrix (developed by National Physical Laboratory (UK)), to perform preliminary assessments on the suitability and quality of mission data and documentation procured. This ensures that all decisions regarding the inclusion of the new space player missions in the TPM portfolio can be made fairly and with confidence.
The EDAP Optical team, led by Telespazio UK, performs such preliminary assessments on samples of data (e.g. still imagery, video, etc.) procured for a number of very high resolution optical missions. These include assessments of geometric calibration quality, radiometric calibration quality and image quality. The assessed candidate TPM have included BlackSky (BlackSky), Jilin-1 (HEAD Aerospace / Chang Guang Satellite Technology) and Dove / Dove-R / SuperDove (Planet), for which the results will be presented.
Radar altimeters require periodic external calibration to monitor the instrument drifts that are not covered by the internal calibration paths. These activities are normally performed by acquiring data over active transponders and analysing the performances. European radar altimeters such as CryoSat-2, Sentinel-3 A/B and Sentinel-6 rely on transponders on Svalbard and Crete to monitor their stability, while the use of passive reflectors for radar altimeter calibration has not been feasible, as the corner size required to achieve acceptable signal-to-clutter ratios (SCR) was too big. Nevertheless, recent developments in radar altimeter SAR-based algorithms now make it possible to obtain Very High Resolution (VHR) data in the along-track dimension by coherently recombining all the echoed pulses within the illumination time. These new algorithms, known as Fully-Focussed SAR (FFSAR) algorithms, increase the SCR while drastically improving the along-track resolution from the ~300 m obtained with Unfocussed SAR (UFSAR) processing to sub-meter scale. Since this improvement in resolution represents a new opportunity for passive reflectors, a trihedral corner reflector was designed and installed by isardSAT on a summit of the Montsec ridge in the Pyrenees in April 2021, about four kilometers from a Sentinel-6 ground track. After initial instrument gain and window delay adjustments, successful passes have been acquired from July 2021 and cyclic calibrations have been performed since then.
In this contribution, we present the preliminary results of a series of Sentinel-6 passes over the corner reflector where different calibration parameters are monitored: range and datation bias together with sigma0. Along-track and across-track resolutions are also monitored to ensure the point target behaviour of the instrument. Indeed, the fact that both along-track and across-track resolutions are consistent with the theoretical limits suggests that the mechanical design preserves the expected orthogonality of the system. Finally, since most of the results are comparable to what is currently achieved by means of active transponders, it is concluded that corner reflectors may play a role in future calibration of radar altimeters, especially due to their ease of installation, maintenance and long term stability.
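As a rough sketch of how range and datation biases can be estimated from such passes, the snippet below fits a parabola to a synthetic range history around the predicted closest approach; the numbers and the parabolic-fit approach are illustrative assumptions, not the authors' exact processing:

    import numpy as np

    # synthetic range history of the corner reflector around closest approach
    t = np.linspace(-2.0, 2.0, 21)                      # time relative to predicted closest approach [s]
    r_geom_min = 8.3e5                                  # geometric minimum range from orbit and CR coordinates [m]
    r = r_geom_min + 3800.0 * (t - 0.012) ** 2 + 0.45   # measured (retracked) ranges [m]

    # fit a parabola to the measured range history
    a, b, c = np.polyfit(t, r, 2)
    t_min = -b / (2 * a)                                # time of measured closest approach [s]
    r_min = np.polyval([a, b, c], t_min)

    datation_bias = t_min - 0.0                         # offset from predicted closest approach [s]
    range_bias = r_min - r_geom_min                     # [m]
    print(datation_bias, range_bias)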
The PlanetScope constellation comprises more than 180 multispectral optical Earth observation cubesats. In this presentation we will focus on developments based on the latest generation of PlanetScope satellites, the SuperDoves, which deliver 8-band multispectral data at a spatial resolution of 3.7 m.
We will present the approaches used at Planet to sharpen SuperDove imagery, along with the metrics used to quantify the concomitant improvement.
The modulation transfer function (MTF) describes the ability of an optical system to accurately record the spatial information of an object. The ideal MTF is that of an aberration-free, diffraction-limited system, where the MTF resembles the autocorrelation of the pupil function. Other optical aberrations and image quality artifacts influence the MTF and the appearance of the resulting image. Between iterations of PlanetScope telescopes, changes were made to improve overall telescope performance, resulting in improved image quality with a known and predicted slight attenuation of mid-spatial frequencies in the MTF. With knowledge of the system changes, it is possible to design a set of sharpening kernels to address these attenuations. Per-band sharpening kernels were developed and successfully deployed to recover the affected spatial-frequency content. We discuss the development of these kernels and demonstrate the performance improvements on real imagery.
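A minimal sketch of this kind of kernel design is given below, assuming a hypothetical Gaussian-shaped dip in the measured mid-frequency MTF; the MTF model, numbers and windowing choice are ours, not Planet's:

    import numpy as np

    n = 33                                   # spatial support of the kernel [pixels]
    freqs = np.fft.fftfreq(n)                # cycles/pixel
    # hypothetical measured MTF with a slight mid-frequency dip
    measured = 1.0 - 0.2 * np.exp(-((np.abs(freqs) - 0.25) / 0.08) ** 2)
    boost = 1.0 / measured                   # ratio of target MTF (flat) to measured MTF
    kernel = np.fft.fftshift(np.real(np.fft.ifft(boost)))
    kernel *= np.hanning(n)                  # taper to limit ringing
    kernel /= kernel.sum()                   # preserve overall radiometry

    # applied per band, e.g. row by row:
    # sharpened_row = np.convolve(row, kernel, mode="same")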
Hot pixels and hot columns are a known problem in PlanetScope imagery; moreover, they are typical artifacts that occur regularly in digital imaging. Their location on the sensor changes over time, and they need to be corrected in the images by subtraction. The main challenge is to reliably find the affected locations for every single scene. Once they are reliably detected, correction by subtracting a value is relatively straightforward.
In digital photography, hot pixels are defined as pixels which over-react to incident light, so that they appear brighter. They are caused by electrical charges that leak into the sensor wells, and they get worse and appear more frequently when the sensor is hot. During readout, a hot pixel can excite its entire column and thus create a hot column.
While it is believed that hot columns and hot pixels are primarily caused by radiation damage in space, they have also been observed during camera testing on the ground.
Because of the time delay integration (TDI) with which PlanetScope satellites are operated, a hot pixel is smeared along the column over the number of rows corresponding to the number of TDI steps used, forming the typical streaking pattern.
The fact that hot columns and hot pixels appear and disappear on the sensor over time makes correcting these artifacts extremely hard, because it necessitates per-scene detection and correction. Previously, we ran the detection on every single scene and corrected the identified pixel locations by subtraction. Due to the intermingling of scene content with the hot pixels and columns, the hot pixels in particular were detected very unreliably; when they were not detected, they were not corrected, and artifacts remained in the imagery.
The crucial step in improving on the previous hot pixel and hot column detection method is to abstract the detection from the content of individual scenes. We will present the approach taken to overcome this problem and to reliably detect hot columns and hot pixels in PlanetScope imagery, along with statistics and quantitative measures of the resulting quality improvement.
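Planet's actual detector is not described here, but the principle of abstracting detection from scene content can be sketched as follows: aggregate statistics over many scenes so that scene content averages out while sensor-fixed defects persist (the function, thresholds and test data are illustrative):

    import numpy as np

    def detect_hot_columns(stack, z_thresh=6.0):
        """stack: (n_scenes, rows, cols) array of scenes in sensor geometry."""
        col_stat = np.median(stack, axis=(0, 1))          # per-column aggregate over scenes
        med = np.median(col_stat)
        mad = np.median(np.abs(col_stat - med)) + 1e-9    # robust spread
        z = 0.6745 * (col_stat - med) / mad               # robust z-score per column
        return np.where(z > z_thresh)[0]                  # indices of suspect columns

    rng = np.random.default_rng(0)
    scenes = rng.normal(100.0, 10.0, (20, 64, 256))
    scenes[:, :, 97] += 25.0                              # inject a persistent hot column
    print(detect_hot_columns(scenes))                     # -> [97]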
A satellite image can be considered as a convolution of the surface radiance with an impulse response function. The impulse response function is a point spread function (PSF) mainly determined by the optical system and detector characteristics; a minor contribution comes from the atmospheric scattering phase function. The concept of pixel or ground sample distance (GSD) is simply a nominal ground pixel size defined by the instantaneous field of view (IFOV). The impulse response function, by contrast, measures the effective pixel size, which is determined by an additional threshold (e.g. the exp(-2) level). One conventional method to estimate the PSF is to use a boundary line between two high-contrast reflective surfaces. An edge spread function (ESF) is estimated by compiling oversampled line transects, and a line spread function (LSF) is obtained as the derivative of the ESF. Relative edge response (RER), the FWHM of the LSF, and MTF metrics are then computed. The original, unconvolved raster image can be simulated from the precise geometry of the boundary line, yielding varying degrees of pixel mixture, and the convolution with the PSF is then applied. Using this simulation, the sensitivity of all metrics and the numerical limits of the PSF estimation are presented.
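The ESF-to-LSF-to-MTF chain described above can be sketched as follows, using a synthetic tanh-shaped edge (the edge model and sampling are illustrative):

    import numpy as np

    x = np.linspace(-8, 8, 321)                      # distance across the edge [pixels], oversampled
    esf = 0.5 * (1.0 + np.tanh(x / 0.8))             # synthetic oversampled edge spread function

    lsf = np.gradient(esf, x)                        # line spread function = d(ESF)/dx
    lsf /= np.trapz(lsf, x)                          # normalize to unit area

    # FWHM of the LSF
    above = x[lsf >= lsf.max() / 2]
    fwhm = above.max() - above.min()

    # relative edge response: ESF difference between +/-0.5 pixel around the edge
    rer = np.interp(0.5, x, esf) - np.interp(-0.5, x, esf)

    # MTF as the modulus of the Fourier transform of the LSF
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(lsf.size, d=x[1] - x[0])  # cycles/pixel
    mtf_nyq = np.interp(0.5, freq, mtf)              # MTF at the Nyquist frequency
    print(fwhm, rer, mtf_nyq)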
Airbus Intelligence UK has recently signed an agreement with Surrey Satellite Technology Ltd (SSTL) to access 10% of the NovaSAR SAR imaging capacity to boost the Airbus Radar constellation product portfolio.
NovaSAR-1 is a small SAR mission in S-band at 3.2 GHz (9.4 cm wavelength) designed for low-cost programmes, with a repeat cycle of 14 days and a local time on ascending node (LTAN) of 10:30. This is different to most spaceborne SAR sensors, which operate with an LTAN of 18:00. Beyond the availability of standard Level-1 complex and ground range products, and to facilitate the exploitation and uptake of NovaSAR data for non-expert users, Airbus UK is developing a ground processing chain able to receive and process NovaSAR data from raw to an Analysis Ready Data (ARD) product, available for commercial applications.
This product is defined as radiometrically enhanced and geocoded terrain corrected data, which will follow the standards of the CEOS CARD4L framework for the Normalised Radar Backscatter product family.
In addition, Airbus UK has been providing VHR Earth imaging data from Vision-1 since 2019. This data has a resolution up to 0.87m for panchromatic and pansharpened multispectral data, and 3.48m for multispectral (VNIR) data.
Vision-1 imagery is routinely geometrically corrected and processed to a precise orthorectified product using the Airbus OneAtlas library as reference base map. By design, NovaSAR and Vision-1 are located on the same orbit and Airbus UK plans to acquire quasi-concurrent acquisitions to demonstrate their interoperability by using ARD products from both sensors, to facilitate the exploitation of Opti-SAR applications.
For best results in any application of such fusion data, it is important that both datasets (Vision-1 and NovaSAR) have excellent co-location with one another as well as with the ground. We will present the results of our analysis, demonstrating the level of co-location of Vision-1 and NovaSAR ARD.
Airbus Intelligence UK plans to have the NovaSAR ARD operationally available from the beginning of 2022.
Global temperatures are warming faster than ever before. The year 2020 was officially the second warmest year since temperature records began, according to the World Meteorological Organization (WMO). Europe is one of the hot spots of global warming: a recently published study (1) has shown that land surface temperature (LST) in Europe has warmed particularly strongly in recent years. Satellite measurements of LST are an indispensable tool for climate change research. Because of their strong link to ground-level air temperature, they can be used to directly calculate large-scale temperature trends and anomalies. However, to make climate-relevant statements, daily, multi-decadal, large-scale observations are needed. Only the AVHRR sensor provides this unique spatial and temporal coverage.
In the TIMELINE project (2), consistent LST products were developed from AVHRR over Europe and North Africa for the period 1981-2018 using the method of (3). The daily, 10-daily and monthly Level 3 products contain statistics of LST (minimum, maximum, median, mean) for the respective period. Only high quality LST values are used, which is ensured by filtering and masking based on quality and uncertainty variables. The underlying Level 2 LST product was recently validated with in situ and MODIS LST (4), resulting in errors of 1.83 K and 2.34 K, respectively, which is within the range of comparable validation studies.
In this study, we present results of the first analysis of the long-term dynamics of LST at 1 km resolution over Europe and North Africa based on the Level 3 TIMELINE products. Analysis of LST from AVHRR at observation time is complicated by the fact that the data come from several satellites, all of which have different overflight times, which additionally vary due to orbit drift over their lifetimes. Several statistical models (using, e.g., the sun angle) and physical models that reconstruct the LST diurnal cycle were trained and evaluated for their suitability to normalize TIMELINE LST to a standard observation time.
The evaluation of the models for the LST daytime correction shows, that the performance is dependent on land cover, climate conditions and data availability. Furthermore, a mix of statistical and physical models dependent on these parameters shows the best performance. Regions with high uncertainty due to the daytime correction and with land cover changes during the last four decades should be excluded from the analysis of the long-term dynamics of LST. Preliminary results for selected sites show a positive trend of 0.6 K/decade for arid regions and 0.15 K/decade for Mediterranean grassland. The results contribute to a better understanding of climate change in Europe and North Africa.
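As a hedged illustration of a purely statistical daytime correction, the sketch below fits a first-order harmonic diurnal model by least squares and evaluates it at a standard observation time; the TIMELINE models themselves are more sophisticated, and all numbers here are synthetic:

    import numpy as np

    t = np.array([9.5, 10.2, 11.0, 13.4, 14.1, 15.0])           # local solar time of observation [h]
    lst = np.array([291.2, 293.0, 295.1, 298.4, 297.9, 296.5])  # observed LST [K]

    # first-order harmonic diurnal model: LST(t) = a0 + a1*cos(wt) + a2*sin(wt)
    w = 2.0 * np.pi / 24.0
    A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    a, *_ = np.linalg.lstsq(A, lst, rcond=None)

    # normalize to a standard observation time (e.g. 13:30 local solar time)
    t_std = 13.5
    lst_std = a[0] + a[1] * np.cos(w * t_std) + a[2] * np.sin(w * t_std)
    print(lst_std)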
1. Liu J, Hagan DFT, Liu Y. Global Land Surface Temperature Change (2003–2017) and Its Relationship with Climate Drivers: AIRS, MODIS, and ERA5-Land Based Analysis. Remote Sensing. 24 December 2020;13(1):44.
2. Dech S, Holzwarth S, Asam S, Andresen T, Bachmann M, Boettcher M, et al. Potential and Challenges of Harmonizing 40 Years of AVHRR Data: The TIMELINE Experience. Remote Sensing. 10 September 2021;13(18):3618.
3. Frey C, Kuenzer C, Dech S. Assessment of Mono- and Split-Window Approaches for Time Series Processing of LST from AVHRR—A TIMELINE Round Robin. Remote Sensing. 13 January 2017;9(1):72.
4. Reiners P, Asam S, Frey C, Holzwarth S, Bachmann M, Sobrino J, et al. Validation of AVHRR Land Surface Temperature with MODIS and In Situ LST—A TIMELINE Thematic Processor. Remote Sensing. 1 September 2021;13(17):3473.
The ESA Climate Change Initiative Land Surface Temperature project (LST_cci) aims to significantly improve the ability of satellite LST data records to meet the GCOS requirements for climate. This paper describes the advances made in two LST_cci data records: first, the long-term IR Climate Data Record (CDR) and, second, the merged GEO/LEO IR data record.
Investigation of climate change requires data records that are both long-term and stable, i.e. free from drift or step changes in bias. Satellite missions are typically comparatively short (6-10 years), so a climate record requires data from more than one sensor. However, successive sensors will exhibit calibration differences, and there may be differences in the overpass times of their respective platforms. The LST_cci CDR utilises data from sensors on sun-synchronous Low Earth Orbiting (LEO) platforms: the Along-Track Scanning Radiometers (ATSRs) on ERS-2 and Envisat, the Sea and Land Surface Temperature Radiometers (SLSTRs) on Sentinel-3A and Sentinel-3B, and the Moderate Resolution Imaging Spectroradiometer (MODIS) on EOS Terra. This paper describes the methods used to harmonize data between instruments, including the intercalibration of brightness temperatures using spectra from the Infrared Atmospheric Sounding Interferometer (IASI) as reference, the common LST retrieval algorithm, and corrections to LSTs for overpass time differences between sensors.
The LST_cci CDR provides data at one local solar time in the day and one local solar time at night. However, resolving the diurnal LST cycle requires data at more frequent intervals. Such data are available from sensors on platforms in geostationary Earth orbit (GEO); however, these sensors do not have global coverage, being restricted to a disc on the Earth's surface. Furthermore, LST uncertainty increases with atmospheric path length, so that usable data are usually restricted to satellite view angles below 60 degrees, leaving no data in the high-latitude regions. In contrast, coverage by LEO platforms is high in the polar regions, and their different equator crossing times can be used to add information on the diurnal cycle. The CCI merged GEO/LEO IR product utilises the different overpass times of the ATSRs, SLSTRs, and MODIS Aqua and Terra to fill in coverage gaps left after merging the available GEO data, generating a 3-hourly UTC product with enhanced coverage. The methods used to select and combine data from the different instruments are described.
Examples of the two products are shown and further improvements to be made during Phase 2 of the LST_CCI project will be discussed.
Glaciers and ice caps worldwide provide a visible and sensitive response to climate change. ESA’s Climate Modelling User Group states that long-term, stable datasets of glacial essential climate variables, such as surface elevation, are required for trend monitoring and for providing initial conditions for climate models [1]. Glaciated areas have been monitored from space by a continuous, overlapping succession of ESA radar altimetry missions since the launch of ERS-1 in 1991, allowing the construction of just such datasets.
The method of deriving elevation change over time from altimetry was developed over flat regions such as ice sheets, and has been validated and used for science in such regions [2]. The terrain over ice caps and glaciers, such as those found on Svalbard, is more complex, introducing data gaps where the altimeter loses lock, especially for the earlier missions. Ground-track spacing relative to the size of glacial features is also an issue. Nevertheless, these voids can be filled by a suitable interpolation scheme.
We present a cross-calibrated, 6-mission surface elevation dataset for the glaciated regions of Svalbard, on a 1 km by 1 km grid, at 30-day resolution. The instruments used are the radar altimeters onboard ERS-1, ERS-2, Envisat, CryoSat-2, Sentinel-3A and Sentinel-3B. The region chosen is that enclosed by the outlines of the Randolph Glacier Inventory. Data gaps are filled by geographical interpolation at every timestamp. Cross-calibration is performed by modelling the time series in each grid cell separately. The final dataset is used to produce estimates of the surface elevation change rate, and is compared to datasets produced by other methods (laser altimetry and gravimetry).
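A minimal sketch of such per-cell cross-calibration, assuming a linear trend plus per-mission bias terms solved by least squares (the actual Glaciers_cci model is richer, and all numbers below are synthetic):

    import numpy as np

    t = np.array([1995.1, 1996.3, 1999.0, 2003.5, 2007.2, 2011.4, 2016.8])  # epochs [yr]
    h = np.array([412.3, 412.1, 411.6, 410.9, 410.2, 409.1, 407.8])         # elevations [m]
    m = np.array([0, 0, 1, 1, 1, 2, 2])                                     # mission index per epoch

    n_missions = 3
    # columns: intercept, trend, bias of mission 1, bias of mission 2 (mission 0 is the reference)
    A = np.column_stack([np.ones_like(t), t - t.mean()] +
                        [(m == k).astype(float) for k in range(1, n_missions)])
    c, *_ = np.linalg.lstsq(A, h, rcond=None)

    rate = c[1]                       # surface elevation change rate [m/yr]
    h_cal = h - A[:, 2:] @ c[2:]      # time series with inter-mission biases removed
    print(rate)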
This study has been performed as part of the ESA project Glaciers_cci (4000127593/19/I-NB).
[1] CMUG CCI+ Deliverable D1.1: Climate Community Requirements, Version 2.2, 9 November 2020. Available at: https://climate.esa.int/media/documents/CMUG_Baseline_Requirements_D1.1_v2.2_EUBGoPz.pdf
[2] The IMBIE team, Mass Balance of the Antarctic Ice Sheet from 1992 to 2017, Nature 558, 219-222 (2018), https://doi.org/10.1038/s41586-018-0179-y
The Copernicus Global Land Service (CGLS) provides a continuous set of bio-geophysical variables describing the dynamics of vegetation, the energy budget, the water cycle, and the cryosphere. The service ensures near-real-time production and delivery of consistent long-term time series of global bio-geophysical variables. The CGLS portfolio includes the leaf area index (LAI), the fraction of PAR absorbed by vegetation (FAPAR) and the cover fraction of vegetation (FCOVER) products, which are derived every 10 days at 300 m (Collection 300m) and 1 km (Collection 1km) resolution. The products are delivered with associated uncertainties and quality indicators. They are accessible free of charge through the CGLS website (http://land.copernicus.eu/global/), along with documentation describing the physical methodologies, the technical properties of the products, and the quality of the variables based on the results of validation exercises.
The Collection 1km of LAI, FAPAR and FCOVER products starts in 1999 with SPOT/VEGETATION data, and continues from 2014 to June 2020 with PROBA-V. The Collection 300m of LAI, FAPAR and FCOVER products is available from 2014 with PROBA-V and from July 2020 to present with Sentinel-3.
Satellite observations of surface reflectance are transformed into biophysical variables using machine learning techniques. There are two versions of the 1 km collection of vegetation products: Version 2 improves the continuity and consistency of the former Version 1 through smoothing and gap-filling techniques, and it allows near-real-time estimation. The 300 m retrieval algorithm is similar to the one used for Version 2 of the 1 km products, but without gap filling.
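The machine-learning step can be illustrated with a toy neural-network retrieval, where synthetic reflectance/LAI pairs stand in for radiative-transfer training data; the CGLS networks, their inputs and their training sets differ:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # synthetic training set: reflectances generated from known LAI values
    rng = np.random.default_rng(0)
    lai = rng.uniform(0.0, 6.0, 2000)
    red = 0.30 * np.exp(-0.6 * lai) + 0.02 + rng.normal(0.0, 0.005, lai.size)
    nir = 0.45 - 0.25 * np.exp(-0.5 * lai) + rng.normal(0.0, 0.005, lai.size)
    X = np.column_stack([red, nir])

    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
    net.fit(X, lai)                       # learn the reflectance -> LAI mapping
    print(net.predict(X[:3]), lai[:3])    # retrieved vs true LAI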
This talk will focus on the retrieval algorithms used to generate the CGLS Collection 1km and 300m of LAI, FAPAR and FCOVER products. The CGLS products will be assessed based on the comparison with other existing satellite products and ground data. The consistency of the time series will be evaluated with due attention to the switchover from SPOT/VEGETATION to PROBA-V and from PROBA-V to Sentinel-3. Finally, some applications of the CGLS biophysical products will be presented.
Long-term global terrestrial vegetation monitoring from satellite Earth observation systems is a critical issue for global climate and earth science modelling applications. This paper describes the GEOV2-AVHRR global vegetation products of leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR) and vegetation cover fraction (FCOVER) derived from the AVHRR Long Term Data Record (LTDR) series from July 1981 to December 2020 and updated annually. GEOV2-AVHRR products are freely accessible at the Theia portal (https://postel.theia.cnes.fr/atdistrib/postel/client/#/home) at 0.05° and 0.5° spatial resolution and 10-day frequency.
The GEOV2-AVHRR algorithm was designed to ensure (i) compliance with GCOS requirements and (ii) high consistency with the biophysical products developed in recent years, particularly the Copernicus Global Land GEOV2-CGLS products derived from the VEGETATION and PROBA-V sensors. First, neural networks trained with CYCLOPES and MODIS products transform LTDR AVHRR top-of-canopy directionally normalized reflectances into vegetation biophysical variables at 0.05° and at a daily time step. Second, dedicated temporal smoothing and gap-filling techniques are applied every 10 days with a 60- to 120-day compositing window, depending on the number of available valid observations. Finally, a 0.5° sub-sampled product useful for climate and meteorological modelling, named Global Change Monitoring (GCM), is generated. The products are delivered with associated uncertainties and quality indicators.
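The smoothing and gap-filling step can be sketched as a Gaussian-weighted composite within a window, a simplification of the dedicated GEOV2 techniques; the function and parameters below are illustrative:

    import numpy as np

    def composite(doy_obs, val_obs, doy_out, half_window=60.0):
        """Weighted temporal composite of valid observations around each output date."""
        out = np.full(doy_out.shape, np.nan)
        for i, d in enumerate(doy_out):
            w = np.exp(-0.5 * ((doy_obs - d) / (half_window / 2.0)) ** 2)
            w[np.abs(doy_obs - d) > half_window] = 0.0   # restrict to the compositing window
            if w.sum() > 0:
                out[i] = np.sum(w * val_obs) / w.sum()   # gap-filled, smoothed estimate
        return out

    doy_obs = np.array([5.0, 14.0, 22.0, 48.0, 55.0, 90.0])   # days of valid observations
    lai_obs = np.array([1.1, 1.3, 1.2, 2.0, 2.3, 3.1])
    print(composite(doy_obs, lai_obs, np.arange(10.0, 100.0, 10.0)))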
The GEOV2-AVHRR products were validated by comparison with the existing C3S, GIMMS3g and GLASS AVHRR products, and with ground measurements (accuracy in terms of root mean square error: 0.81 for LAI, 0.10 for FAPAR and 0.13 for FCOVER). Consistency with GEOV2-CGLS was successfully checked, as was the temporal consistency across the several AVHRR sensors used to build this long time series.
We will describe the GEOV2-AVHRR time series, the principles of the algorithm, the results of the validation process, and how to freely access the archive and associated documentation at the Theia portal (https://www.theia-land.fr/product/serie-de-variables-vegetales-avhrr-fr/).
From 1986 to 2015, the SPOT 1 to 5 satellites acquired more than 30 million images all over the world, which represents a massive historical dataset.
These SPOT images were acquired in panchromatic and multispectral bands (green, red and near-infrared for SPOT 1 to 3, with an additional mid-infrared band for SPOT 4 and 5) at spatial resolutions between 5 and 20 meters. The images were available through commercial contracts with Airbus Defence and Space.
SWH (SPOT World Heritage) is a CNES initiative that consists of retrieving and preserving these SPOT data, generating SPOT products at current standard levels, and making them freely available. In June 2021, the project achieved the release of 19 million Level-1A SPOT images, made freely available in a catalogue called REGARDS (https://regards.cnes.fr/user/swh/modules/60). The next step consists of making higher processing levels available, through user on-demand Level-1C (image orthorectification) and Level-2A (atmospheric corrections, including environment and slope corrections) processing, which will be available for SPOT products on the REGARDS catalogue.
This poster deals with the atmospheric correction algorithm design and with the validation of SPOT Level-2A products.
The definition of the SPOT atmospheric correction algorithm has benefited from the work done on the MAJA processor by CESBIO (Center for the Study of the Biosphere from Space) and CNES. Part of the algorithms of this software, which is used to produce Sentinel-2 surface reflectance products within THEIA, the French land data center, was re-used for SPOT. Only a part could be re-used because SPOT images do not contain as much information for atmospheric corrections as Sentinel-2 (no repetitive scene acquisition with constant viewing angles, and no water vapour, cirrus or blue band). For SPOT, the scene water vapour contents and aerosol optical depths, whose determination is a prerequisite to the molecular and aerosol scattering corrections, are read from ancillary reanalysis and climatology data. The validation was performed by comparing the aerosol optical depths and water vapour contents to in-situ measurements from the AERONET photometer network. The retrieved SPOT surface reflectances were validated by comparison with in-situ surface reflectance measurements acquired by one of the automated ground stations operated by CNES at the La Crau site in Southern France. This method, using ancillary data for water vapour and aerosol optical depth determination, was also applied to Sentinel-2, and the retrieved surface reflectances were compared to those obtained with MAJA.
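For orientation, a Lambertian, flat-terrain atmospheric correction of the kind underlying such processors reduces to the following sketch, where the path reflectance, transmittance and spherical albedo would come from a radiative-transfer lookup table evaluated at the ancillary AOD and water vapour values; all numbers here are illustrative, not MAJA coefficients:

    # simplified Lambertian atmospheric correction of one pixel
    rho_toa = 0.18     # TOA reflectance of the pixel
    rho_path = 0.045   # atmospheric path reflectance (from an RT LUT at the given AOD)
    t_total = 0.82     # total two-way scattering transmittance (from the LUT)
    s_alb = 0.12       # atmospheric spherical albedo (from the LUT)

    rho_c = (rho_toa - rho_path) / t_total
    rho_surf = rho_c / (1.0 + s_alb * rho_c)   # accounts for multiple surface-atmosphere scattering
    print(rho_surf)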
Understanding the state of the climate requires long-term, stable observational records of essential climate variables (ECVs) such as sea surface temperature (SST). ESA’s Climate Change Initiative (CCI) was set up to exploit the potential of satellite data to produce climate data records (CDRs). The initiative now includes over 20 projects for different ECVs, including SST, and is about to release the third major version of the SST CCI CDR, which will cover a 40-year period using data from the Advanced Very High Resolution Radiometer (AVHRR), Along Track Scanning Radiometer (ATSR) and Sea and Land Surface Temperature Radiometer (SLSTR) instruments, plus the Advanced Microwave Scanning Radiometer (AMSR)-E and AMSR2. The dataset includes single-sensor products at L2P, L3U and L3C levels, plus a Level 4 SST analysis generated using the Met Office Operational Sea Surface Temperature and Ice Analysis (OSTIA) system.
Version 3 of the SST CCI CDR will be the first to make use of data from the AVHRR/1 instruments carried on board the NOAA-6, -8 and -10 platforms. This will increase the data coverage in the 1980s and allow the dataset to extend back to 1980. The quality of the AVHRR retrievals has been improved by using a new bias-aware optimal estimation (BAOE) technique and updated radiative transfer modelling including tropospheric dust, which significantly reduces the SST biases due to dust aerosols seen in previous CDRs. Passive microwave AMSR-E and AMSR2 data were previously available as an experimental product from SST CCI, but are now included in the main CDR for the first time. In comparison to the previous CDR, this new release will also make use of full-resolution MetOp data and include the dual-view SLSTR sensors.
Complementary to the ESA CCI, the Copernicus Climate Change Service (C3S) is producing an Interim CDR (ICDR) to provide an ongoing extension in time of the SST CCI CDR. The C3S ICDR is algorithmically equivalent to the CCI CDR and will switch from extending the current version 2 record to extending CCI version 3 during 2022.
Since June 2021, 19 million SPOT scenes can be downloaded free of charge from the CNES SPOT World Heritage (SWH) catalog. These scenes were acquired between 1986 and 2015 by the high-resolution (20-2.5 m) SPOT 1 to 5 Earth observation satellite series, covering the entire Earth for nearly 30 years. CNES is the owner of the data, which are distributed through the REGARDS catalog (REnewal of Generic tools to Access and aRchive Space Data) at Level 1A. These products are dedicated to expert users who wish to do their own geometric image processing: no geometric correction is applied, only a radiometric equalization to compensate for the differences in sensitivity between the elementary detectors of the instrument.
SWH scenes at L2A (equivalent to L1C in CEOS nomenclature, i.e. orthorectified reflectances at the top of the atmosphere) are required for any thematic study. They offer opportunities for understanding locally how the Earth is changing, and for determining and monitoring the causes of these changes using, for instance, new-generation sensors such as Sentinel-2 that provide time series at similar spatial resolution.
However, SWH scenes at L2A are not available in the catalog. They are partly processed at the THEIA center, mainly over metropolitan France and West Africa.
To help users while waiting for the processing of L2A SPOT scenes to be completed at the THEIA center, a tool has been developed to transform L1A scenes into L2A scenes using simpler processing than the one used at THEIA. Level 2A scenes are rectified to match a standard map projection (UTM WGS 84) at a fixed altitude (no DTM is used) and without ground control points. The metadata associated with the scenes contain the parameters needed to compute the top-of-atmosphere (TOA) radiance and reflectance. The location accuracy corresponds to the initial accuracy of the products: about 500 m (RMS) for SPOT 1/2/3, 200 m (RMS) for SPOT 4, and 50 m (RMS) for SPOT 5.
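The DN-to-TOA conversion from such metadata follows the usual convention, sketched below; the numerical values are placeholders, not actual SPOT calibration coefficients:

    import numpy as np

    dn = 132.0              # raw digital number of a pixel
    gain = 1.9              # absolute calibration gain from the scene metadata
    l_toa = dn / gain       # TOA spectral radiance [W m-2 sr-1 um-1] (SPOT convention)

    esun = 1858.0           # band mean solar exo-atmospheric irradiance [W m-2 um-1]
    d_au = 0.9945           # Earth-Sun distance at acquisition [AU]
    sza = np.deg2rad(35.0)  # solar zenith angle

    rho_toa = np.pi * l_toa * d_au**2 / (esun * np.cos(sza))
    print(rho_toa)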
The service is free, fast and available at https://swh-2a-carto.fr.
In the framework of the European Long Term Data Preservation Programme (LTDP+), which aims at generating innovative Earth system data records named Fundamental Data Records (FDR) and Thematic Data Products (TDP), similar to Level 2+ geophysical products, ESA/ESRIN launched, two years ago, a reprocessing activity for the ERS-1, ERS-2 and ENVISAT altimeter and radiometer datasets. A large consortium of thematic experts has been formed to perform these activities, which are: 1) to define new tailored end-user products, including a long, harmonized record of uncertainty-quantified observations; 2) to define the most appropriate, state-of-the-art Level 1 and Level 2 processing; 3) to reprocess the whole time series according to the upgraded ground processing; and 4) to validate the different products and provide them to a large community of users focused on the observation of the atmosphere, ocean topography, ocean waves, and coastal, hydrology, sea ice and ice sheet regions. This activity will result in the production of 8 different datasets for each mission, each of them addressing a different need. The project kicked off in September 2019 and the first phase of definition and pre-validation is now completed. The production phase will begin in early 2022 and is expected to be finalized at the end of next year. Preliminary results already show major improvements compared to the current ESA ERS and Envisat altimetry products.
The objective of this talk is to inform the LPS members of this initiative, to explain the main guidelines, constraints, and status of the project and to present the first results and improvements obtained during the first phase of the project. In particular, the presentation will show how the different communities (over global ocean, coastal, inland waters, sea ice, land ice, waves and atmosphere) will be able to benefit from this reprocessing to improve their long-term climate analysis.
Nitrogen dioxide (NO2) is involved in the catalytic cycles accounting for almost half of the ozone removal by gas-phase reactions in the upper stratosphere, whereas in the lower stratosphere it moderates the ozone loss by converting active ozone-depleting species into inactive reservoir forms. In the planetary boundary layer, nitrogen oxides control the abundance of tropospheric ozone through the reactions leading to photochemical smog.
Based on its characteristic spectral absorption of solar radiation, the NO2 vertical column density (VCD) can be estimated by ground-based and spaceborne instruments. Remote sensing techniques are able to probe the whole atmospheric column (or specific portions of it), whereas surface in-situ instrumentation (e.g. air quality monitors) provides information on the NO2 burden at the surface and is sensitive to the mixing layer height. Estimates from satellite radiometers are usually associated with large uncertainties over complex terrain or very polluted areas, due to assumptions on the air mass factors based on climatologies, and they might underestimate local amounts owing to their generally low spatial and temporal resolution. For example, strong negative biases as low as −50 % were recently found in the NO2 total columns measured by Copernicus Sentinel-5P TROPOMI (TROPOspheric Monitoring Instrument) over Rome compared to ground-based Pandora spectrometers. Hence, improving observations from space requires continuous validation of satellite retrievals against column densities from ground-based instrumentation. However, whilst surface air quality monitors and satellites often benefit from long-term records, accurate multidecadal data sets from ground-based spectrometers are not yet widely available. Such series are also beneficial for climate studies and for evaluating the abatement effects on tropospheric loads resulting from environmental policies and emission controls, vehicle fleet renovation, and even worldwide crises such as the 2008 economic recession or the confinement due to the SARS-CoV-2 pandemic.
For these reasons, a novel algorithm to retrieve NO2 column densities from ground-based direct-sun measurements with the MkIV Brewer spectrophotometer is presented. Compared to the original Brewer algorithm, still implemented in the operating software, the new algorithm includes updated NO2 absorption cross sections and Rayleigh scattering coefficients, and accounts for additional atmospheric compounds and instrumental artefacts, such as the spectral transmittance of the filters, the alignment of the wavelength scale, and the internal temperature. Moreover, long-term changes in the Brewer radiometric sensitivity are tracked using statistical methods for in-field calibration.
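The core of such direct-sun retrievals can be sketched as a linear least-squares fit of the measured slant optical depth to the NO2 cross section plus a smooth background, followed by the air-mass-factor conversion; all spectra and numbers below are synthetic, and the Brewer algorithm itself includes many more terms:

    import numpy as np

    wl = np.linspace(425.0, 450.0, 60)                # wavelength grid [nm]
    sigma = 5e-19 * (1.0 + 0.3 * np.sin(0.8 * wl))    # synthetic NO2 cross section [cm2]

    scd_true = 1.0e16                                          # slant column [molec cm-2]
    tau = scd_true * sigma + 0.10 + 0.001 * (wl - wl.mean())   # slant optical depth + background

    # fit tau = SCD*sigma + smooth polynomial background (sigma column rescaled for conditioning)
    A = np.column_stack([sigma * 1e19, np.ones_like(wl), wl - wl.mean()])
    c, *_ = np.linalg.lstsq(A, tau, rcond=None)
    scd = c[0] * 1e19            # undo the column scaling

    amf = 2.3                    # direct-sun air mass factor, roughly sec(SZA)
    vcd = scd / amf              # vertical column density
    print(scd, vcd)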
The algorithm is applied to re-evaluate the NO2 data set collected in Rome by Brewer #067 between 1996 and 2017, and to estimate the NO2 long-term changes in this metropolitan environment. The Brewer retrievals are furthermore compared to independent estimates from a co-located Pandora spectrometer (#117) over a 1-year period (2016-2017), showing linear correlation indices above 0.96 between slant column densities, a slope of 0.97 and an offset of 0.02 DU. This, incidentally, represents the first intercomparison of NO2 retrievals between a MkIV Brewer and a Pandora instrument. Furthermore, a comparison of the NO2 VCD monthly averages determined from the ground (Brewer and Pandora) and from space (GOME, SCIAMACHY, OMI, GOME-2, and TROPOMI) is provided as an example of the potential of the new technique and of the challenges of satellite remote sensing over Rome.
The new retrieval technique can be replicated on the more than 80 MkIV spectrophotometers operating worldwide within the international Brewer network, and can be of interest to the international Brewer user community.
Arctic sea ice is an important climate indicator, because the effects of global climate change are amplified in the Arctic. The current ESA CCI and EUMETSAT sea ice climate data records (CDRs) documenting this change begin with data from the Scanning Multichannel Microwave Radiometer onboard NASA’s NIMBUS-7 satellite, launched in October 1978. However, there are satellite missions from the early and mid 1970s which can be used for mapping sea ice and for extending the current CDRs. One example is the data of the Electrically Scanning Microwave Radiometer (ESMR) on board the NIMBUS-5 satellite, which was operating between 1972 and 1977. The data are available online at the NASA Earth data archive (GES DISC), and a first reprocessing of the data using modern sea ice retrieval methods has been carried out as part of the ESA CCI+ programme. Results show that, while older satellite instruments have their limitations compared to modern multi-channel sensors, they still provide useful data for mapping sea ice extent and the distribution of sea ice type.
The work continues with a new PhD project building and running new methods to process historical satellite data in order to extend existing climate data records of sea ice extent into the past. The project is part of the Danish National Centre for Climate Research (NCKF) at DMI, and the research will provide insights into the historical sea ice development and serve as an important sea ice extent reference from the 1970s, which can be used as input to climate models and re-analyses.
The aforementioned ESMR measured the horizontally polarized brightness temperature (TB) at 19.35 GHz at 78 different incidence angles, from nadir to 63 degrees at the edges of the swath, from December 1972 until May 1977 with some interruptions. With a swath width of about 3,100 km, ESMR provided full coverage of the polar regions in half a day. The sea ice concentration is derived from the data using a single-channel algorithm and modern processing steps, including dynamical tie-points and regional noise reduction with a correction based on a radiative transfer model (RTM) and numerical weather prediction model data.
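A single-channel concentration algorithm with tie-points reduces, in essence, to a linear interpolation between open-water and consolidated-ice brightness temperatures. The sketch below uses made-up tie-point values; the actual processing derives them dynamically and adds RTM-based noise corrections:

    import numpy as np

    def sic_single_channel(tb, tb_water, tb_ice):
        """Sea ice concentration from one TB channel via linear tie-point interpolation."""
        sic = (tb - tb_water) / (tb_ice - tb_water)
        return np.clip(sic, 0.0, 1.0)

    tb = np.array([135.0, 180.0, 235.0, 248.0])     # observed 19 GHz H-pol TB [K]
    # dynamic tie-points would be derived, e.g., from percentiles of open-water
    # and near-100% ice samples for the given period and region
    print(sic_single_channel(tb, tb_water=131.0, tb_ice=251.0))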
The first results show interesting sea ice features in the years 1972-1977. One such feature is the Maud Rise polynya in Antarctica; another is the formation of new ice in the Odden, an ice tongue in the Greenland Sea which extends eastward from the East Greenland Current. Both features were much larger in extent in the mid-1970s than they are today, providing an important reference of the past.
Current work on the ESMR dataset includes re-calibration, further noise reduction, validation and inter-comparison against other sea ice datasets of this period, and further investigation of other microwave data from the 1970s for comparison and for filling gaps in the sea ice concentration (SIC) record. Other available historical optical and infrared sensors covering the 1960s and 1970s will also be investigated for sea ice surface temperature mapping.
The dataset derived from the (Advanced) Along-Track Scanning Radiometer ((A)ATSR: AATSR (Envisat), ATSR-2 (ERS-2), ATSR-1 (ERS-1)) instrument series, spanning just over two decades, represents a valuable resource that is important to a variety of atmosphere-, cryosphere-, ocean- and land-based applications in the climate domain.
The main objective of the current reprocessing activity, the 4th (A)ATSR Reprocessing, is to generate L1B products, using an instrument processing facility developed by Telespazio UK, in a product format similar to the successor Sentinel-3 Sea and Land Surface Temperature Radiometer (SLSTR) L1B products. This will importantly allow for continuity between the (A)ATSR and SLSTR datasets.
The remaining objectives are to provide dataset improvements and enhancements, building upon the quality of the 3rd (A)ATSR Reprocessing dataset. These include, but are not limited to, the following:
• Improved and Extended Datasets - the use of improved and extended consolidated (A)ATSR L0 and L1A datasets (resulting from the success of other ESA projects).
• Improved Surface Classification - the use of the Sentinel-3 Land Water Masks (Land/Water, Land/Ocean, Coastline and Tidal) will replace that of the Envisat Land Sea Mask.
• Include Uncertainty Estimates - the (A)ATSR L1B products will contain per-pixel uncertainty estimates, composed of random and systematic effects (e.g. blackbody calibration, spectral response).
• Improved Geolocation - the (A)ATSR L1B products will be orthogeolocated, for the first time, to a Digital Elevation Model.
• Meteorological Data - the (A)ATSR L1B products will contain meteorological data: numerical weather prediction model fields, notably from the ECMWF ERA-Interim dataset.
Prior to official release, the dataset is subjected to a number of systematic (e.g. manifest file checks) and spot (e.g. absolute geolocation, nadir-oblique registration and surface classification accuracy) quality checks by the IDEAS-QA4EO (A)ATSR team. The results of these quality checks will be presented in this poster.
The dataset of the 4th (A)ATSR Reprocessing is expected to be released to users in the first quarter of 2022.
The project FDR4ATMOS (Fundamental Data Records in the domain of satellite Atmospheric Composition) has been initiated by the European Space Agency (ESA). Task A of the project covers the improvement of the SCIAMACHY Level 1b degradation correction, with the aim of removing ozone trends from the SCIAMACHY Level 2 data set that were introduced during the development of baseline version 9 (neither data set has been released). We will also, for the first time, add calibrated lunar data to Level 1, covering the whole spectral range of SCIAMACHY and the full mission time.
The SCIAMACHY processing chain for better ozone total column data: after the full re-processing of the SCIAMACHY mission with the updated processor versions, the validation showed that the total ozone column drifted downward by nearly 2% over the mission lifetime. This drift is likely caused by changes in the degradation correction in the Level 1 processor, which led to subtle changes in the spectral structures that are misinterpreted as an atmospheric signature. We updated the Level 0-1 processor accordingly, and a full mission re-processing was done.
As a major improvement, we additionally incorporated calibrated lunar data into the SCIAMACHY Level 1b product. In the new Level 1b product we will provide the individual scans of the Moon as well as disk-integrated, calibrated lunar irradiance and reflectance. The instrument performed regular lunar observations, building up a unique 10-year data set of lunar spectra from the UV to the SWIR at moderately high spectral resolution. SCIAMACHY scanned the full lunar disk and, over the ten-year mission, made 1123 observations of the Moon. Most satellites can only observe the Moon under very specific geometries due to instrument-viewing and orbit restrictions. SCIAMACHY, however, with its two-mirror pointing system, was much less constrained and was able to observe the Moon under an extremely large variety of geometries (especially during dedicated lunar observation campaigns), thus potentially allowing different satellites and observation geometries to be tied together. During an individual lunar observation, SCIAMACHY only saw a small slice of the Moon and scanned across it to obtain data for the full disk. We combined the individual calibrated scans, correcting for scan speed and for the fact that the Moon does not fill the entire slit length. The calculation of distance-normalized lunar reflectances did not require an external solar spectrum, but used solar measurements from SCIAMACHY itself.
This version of Level 1 will also be the first to replace the ENVISAT byte-stream format with the netCDF format, aligned with the product format of other atmospheric sensors such as the Sentinels.
The paper will present the improvements of the Level 1 product and the results of the quality control and validation.
Analysis Ready Data (ARD) is defined as EO data that has been processed in such a way that users can make use of it readily, without any need for additional processing. ARD products allow interoperability not only through time but also alongside other datasets. Although the concept of ARD has existed for a while, in recent years there has been a greater push towards producing it, in order to enable greater computational capabilities and the utilisation of the huge wealth of historic satellite data alongside recent missions.
An important step towards usability and accessibility of ARD data by a wide variety of users is to have an internationally coordinated and standardised approach to the specification of data. With that goal in mind, the Committee on Earth Observation Satellites (CEOS) created a strategy that lays out a foundation for the creation of Product Family Specifications (PFS). Currently, a number of PFS for land applications, CEOS Analysis Ready Data for Land (CARD4L), have already been released, covering both optical and radar instruments (https://ceos.org/ard/index.html#slide3).
This abstract refers to the CARD4L specification for the SAR Normalised Radar Backscatter (NRB) PFS (https://ceos.org/ard/files/PFS/NRB/v5.5/CARD4L-PFS_NRB_v5.5.pdf) and the progress made towards the development of CARD4L NRB products for historic missions. ESA’s Long-Term Data Preservation (LTDP) programme has highlighted the need to generate a SAR product compliant with the CEOS requirements for ERS and ENVISAT. In the last 3 years, the SAR Quality Control (QC) team within the Instrument Data quality Evaluation and Analysis Service (IDEAS)-QA4EO (and previously IDEAS+) service to ESA’s Sensor Performance, Products and Algorithms (SPPA) department has been involved in the assessment of a CARD4L specification for historic ERS-1/2 SAR and ENVISAT ASAR data. Since the start of 2021, more dedicated progress has taken place, focussing on the development of a processor prototype that allows:
• Immediate analysis – by ensuring that CARD4L requirements related to radiometric terrain correction, projection of the DEM, etc. are implemented (a simplified sketch of the backscatter normalisation chain is given after this list)
• Interoperability – by ensuring that the same gridding and DEM are used as in the Sentinel-2 mission, thus expanding interoperability with Sentinel-1, Sentinel-1 NG, ROSE-L and BIOMASS missions
• Cloud computation capability – by developing the output product in the Cloud Optimised GeoTIFF (COG) format
• Open science compliance – by developing an open-source software for the processor.
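As referenced in the first bullet, the normalisation chain from radar brightness to gamma naught can be sketched as below; note that this angle-based version is a simplification, since CARD4L NRB requires terrain-flattened gamma naught based on the local illuminated area, and all values here are illustrative:

    import numpy as np

    beta0 = 0.045                     # radar brightness from the Level-1 product (linear power)
    theta_loc = np.deg2rad(33.0)      # local incidence angle derived from the DEM

    sigma0 = beta0 * np.sin(theta_loc)    # normalization to ground area
    gamma0 = sigma0 / np.cos(theta_loc)   # normalization to the plane perpendicular to the look direction
    print(10 * np.log10(gamma0))          # gamma naught in dB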
With much of the initial assessments already in place, the CARD4L NRB development for (A)SAR (ERS and ENVISAT) products is set to start in January 2022 and will last for 9 months. The development will be done in close coordination with the Sentinel-1 ARD prototype work, which will ensure that the CARD4L products for ESA’s historic missions are aligned with Sentinel-1 product.
A known limitation of (A)SAR products is that the geometric accuracy for these products does not currently meet the CARD4L NRB requirements. The CARD4L project will include a dedicated study focussing on improving the geometric accuracy of the ENVISAT ASAR products. The study will be performed in parallel with the NRB product development and will last for 7 months. At the end of the study, recommendations for geometric accuracy improvement to ENVISAT ASAR products, and by inference, the ERS-1/2 SAR products will be provided.
The poster will provide the latest updates on the above CARD4L project – processor development as well as the geometric accuracy improvement study. Details of the processor design will be included, as appropriate, as well as information on the latest NRB product tests. Similarly, the latest updates and achievements from the geometric accuracy improvement study will be made available.
ESA’s Soil Moisture and Ocean Salinity (SMOS) mission has been making global observations of soil moisture over land and salinity over oceans since November 2009. By consistently mapping these two important components of the water cycle, SMOS is improving our understanding of the exchange processes between Earth’s surface and atmosphere and is helping to improve weather and climate models. Furthermore, SMOS has also developed new products in recent years to provide information on the cryosphere and on oceanic winds in storms.
Against the common perception that operational ground segments are by nature static and conservative, the architecture of the SMOS PDGS has kept evolving to respond to new operational and science requirements and the incorporation of new science products, and to improve the robustness, maintainability and efficiency of the current system using the latest available techniques. The accumulated SMOS data volume, combined with the need for periodic full-mission data reprocessing, requires increasing SMOS IT processing power and storage. This means not only deploying new processing nodes and storage devices, but also redesigning the architecture to optimize reprocessing procedures and allow parallelization.
This paper will summarize the efforts of the SMOS PDGS team in the preparation, execution and dissemination of the third full-mission reprocessing campaign of SMOS data. The objective of this campaign was to regenerate all the data obtained from SMOS with the new processing baseline, from raw data to Level 2 products.
To achieve the required performance during the reprocessing campaign, a new architecture with three instances running in parallel was deployed, and a complex orchestration of L1 and L2 processing had to be addressed.
The SMOS third mission reprocessing campaign illustrates, in a paradigmatic manner, all the improvements and changes required to reprocess large datasets and generate homogeneous long time series in cases where the processing was not initially designed with scalability and parallelization in mind.
The lessons learned from this exercise can also be extremely useful for other heritage, live, and future missions.
To conclude, the dissemination strategy for the more than 1.8 million products generated, amounting to around 176 TB of data, will be addressed.
Since October 1959, when the first Earth radiation budget sensor was launched on the Explorer 7 satellite, historical and current Earth Observation (EO) data has provided information about environmental and climate change that is of great value to today’s scientists and to decision makers in companies, in governments and in non-governmental organisations (NGOs). These data are also a legacy of immense value to future generations. However, for this immediate and legacy value to be realised, it is important that EO data sets are interoperable and temporally stable.
We need to be able to combine data from different sensors, to form multidecadal records from series of compatible sensors and to understand the quality and uncertainty associated with data sets to assess their fitness for purpose. Historical sensors, in particular, were not designed with these long-term records in mind, and in many cases, there is also a lack of available information on the exact design and operational conditions for the sensors. Nevertheless, there is such scientific value in these long-term records that it is well worth the effort to develop so-called “fundamental data records” (FDRs) that provide detailed, uncertainty-quantified and traceable information on the origin and quality of long-term records.
An FDR is defined (in the EU H2020 FIDUCEO project) as “a long, stabilised record of uncertainty-quantified sensor observations that are calibrated to physical units and located in time and space, together with all ancillary and lower-level instrument data used to calibrate and locate the observations and to estimate uncertainty”. FDRs are the fundamental output of a satellite sensor. They are provided for two reasons: first to record all the information needed by contemporaneous use of the FDR to generate climate data records or thematic data products from the FDR, and second to provide for the long-term data preservation of the data set, including all the information that future scientists will need to know to understand how the data set was determined.
Metrology, the science of measurement, provides a strong basis for both these applications. Formal metrological principles have ensured the long-term stability, world-wide consistency and unit coherence of the International System of Units (the SI), and its predecessors, since the signing of the Metre Convention in 1875. These principles, based on metrological traceability, rigorous uncertainty analysis and frequent formal comparisons, can be applied, albeit with some modification, to satellite data records too.
In the Quality Assurance Framework for Earth Observation (QA4EO) project, we have developed a framework, with corresponding guidance, to act as a base set of guidelines for the creation of metrologically rigorous FDRs. The process utilises principles from the FIDUCEO project, whereby a measurement function is defined, which shows how the measurand (e.g. radiance) is calculated from its input quantities (e.g. digital counts). An ‘uncertainty tree diagram’ is then used to document traceability and sources of uncertainty in the measurement function, with each error source identified in the diagram being subsequently codified in an ‘effects table’. These tables document the uncertainty associated with the given effect, the sensitivity coefficient required to propagate that uncertainty, and the error correlation structure over spatial, temporal and spectral scales for errors from this effect. This information can then be used to produce the FDR and evaluate its uncertainties.
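For reference, the propagation underlying these effects tables follows the standard (GUM) law of propagation of uncertainty: for a measurement function y = f(x_1, ..., x_N),

    u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i)
             + 2 \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \frac{\partial f}{\partial x_i} \frac{\partial f}{\partial x_j}\, u(x_i, x_j)

where the partial derivatives are the sensitivity coefficients recorded in the tables and u(x_i, x_j) captures the error covariances implied by the documented correlation structures.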
A report is being produced to document this framework, which will be made available on the CEOS Cal/Val Portal.
Polar sea ice is both an indicator and a driver of global climate change, and almost 30 years of sea-ice thickness data from various space-borne sensors are available to us today. ESA's Climate Change Initiative (CCI) aims to combine these data into a single dataset, achieve an improved spatial resolution, and ensure the best possible consistency across satellite missions. In its previous versions, the CCI sea-ice thickness climate data record (CDR) laid the foundation for a comprehensive multi-altimeter data record of Arctic and Antarctic sea-ice freeboard from October 2002 to April 2017. Building on this foundation, the data set was temporally extended to include the ERS-1/2 satellite era, covering the winter months from October 1993/1995 until April 2022. In its current state, the CDR comprises various processing levels from orbit trajectories (L2p) to gridded (L3C) and gap-free (L4) data products.
In order to keep the data from the different sensors (ERS-1/2, ENVISAT, CryoSat-2) and sensor systems (delay-Doppler vs. pulse-limited) as consistent as possible, we analyse and utilise dual-mission orbit crossovers in the respective operational overlap periods between all sensor systems, e.g. the winter months between October 2010 and March 2012 for CryoSat-2 and ENVISAT. We only include measurements acquired within a 12.5 km radius and within two (CryoSat-2/ENVISAT) to four (ERS/ENVISAT) hours of each other. From the crossover data sets, optimal retracker parameters for the consistently used Threshold First Maximum Retracker Algorithm (TFMRA) are derived stepwise from the most recent to the older sensors, using CryoSat-2 freeboard estimates as a reference. Subsequently, training data sets for the overlap periods and sensor combinations are created and used to calibrate and validate different machine-learning models for the adaptive retracker threshold computation. Additional new features comprise the use of an updated data set and methods for snow on sea ice, improved uncertainty estimates, as well as the extension to interim sea-ice type data from C3S.
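The crossover selection criteria described above can be illustrated with a minimal sketch; the brute-force haversine search below pairs observations from two missions within the stated radius and time window, with placeholder arrays standing in for real orbit data.

```python
import numpy as np

def crossover_pairs(lat1, lon1, t1, lat2, lon2, t2,
                    max_dist_km=12.5, max_dt_hours=2.0):
    """Return index pairs of observations from two missions that fall
    within a distance radius and a time window (brute-force sketch)."""
    R = 6371.0  # mean Earth radius, km
    pairs = []
    for i in range(len(lat1)):
        # Haversine distance from point i of mission 1 to all of mission 2
        dlat = np.radians(lat2 - lat1[i])
        dlon = np.radians(lon2 - lon1[i])
        a = (np.sin(dlat / 2) ** 2 +
             np.cos(np.radians(lat1[i])) * np.cos(np.radians(lat2)) *
             np.sin(dlon / 2) ** 2)
        dist = 2 * R * np.arcsin(np.sqrt(a))
        dt = np.abs(t2 - t1[i])  # time difference in hours
        for j in np.where((dist <= max_dist_km) & (dt <= max_dt_hours))[0]:
            pairs.append((i, j))
    return pairs

# Tiny illustrative input: one CryoSat-2 point, two ENVISAT points
lat1, lon1, t1 = np.array([78.00]), np.array([10.0]), np.array([0.0])
lat2, lon2, t2 = np.array([78.05, 60.0]), np.array([10.1, 5.0]), np.array([1.0, 1.0])
print(crossover_pairs(lat1, lon1, t1, lat2, lon2, t2))  # -> [(0, 0)]
```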
Shortcomings still exist, in particular for all Antarctic sea-ice retrievals, due to large uncertainties resulting from the Antarctic sea-ice snow cover and knowledge gaps concerning the sea-ice density distribution, as well as in general for the marginal sea-ice zone (MIZ) in both the Arctic and the Antarctic. These limitations are planned to be addressed in future releases.
In the last 20 years, the body of literature on benthic mesophotic ecosystems has grown considerably, together with the awareness of both their biological richness and their ecological importance. Mesophotic ecosystems host coral, algae, and sponge assemblages constituting complex three-dimensional structures, mainly located between 30 m and the bottom of the photic zone. The 30-150 m bathymetric range is commonly adopted in the literature to constrain the mesophotic zone. The upper boundary of the mesophotic zone originates from the physiologically imposed depth limits of conventional SCUBA diving, whilst the lower boundary derives from the deepest occurrences of zooxanthellate corals documented in the late 1980s at tropical latitudes. However, the depth to which light penetrates varies depending on solar radiation incidence and water clarity. This is especially obvious in the Mediterranean Sea, which is characterized by strong climatic (e.g., rainfall, sunlight), oceanographic (e.g., water temperature and salinity) and bio-geochemical (e.g., nutrients) gradients that generate an alternation of temperate- and tropical-like situations within a relatively short distance (about 4,000 km from the Strait of Gibraltar to the Gulf of Iskenderun, on the southeastern coast of Turkey).
Defining proper management and conservation plans requires not only a sound scientific comprehension of the dynamics governing marine ecosystems, but also a greater knowledge of their spatial distribution and extension. Including the light regime in the definition of the mesophotic zone would make it possible not only to constrain its bathymetric range and estimate the portion of seafloor characterized by mesophotic conditions, but also to appreciate variations related to local factors and geographical location.
We present a first assessment of the spatial and vertical extension of the mesophotic zone in the Mediterranean Sea based upon an open-access, physical approach that integrates long-term series of satellite information on water clarity and Photosynthetically Active Radiation (PAR) at the sea surface to estimate the penetration of light along the water column and, thus, the bathymetric range of the mesophotic zone. In the framework of the European project RELIANCE, the entire process has been encapsulated in a Research Object connected with the European Open Science Cloud (EOSC) services, which ensures the full repeatability of the approach.
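A minimal sketch of the underlying light-attenuation step, assuming exponential (Beer-Lambert) decay of PAR with depth; the boundary fractions and the Kd(PAR) value below are illustrative placeholders, not the thresholds adopted in the study.

```python
import numpy as np

def light_depth(kd_par, fraction):
    """Depth (m) at which downwelling PAR falls to a given fraction of its
    surface value, assuming PAR(z) = PAR0 * exp(-Kd * z)."""
    return -np.log(fraction) / kd_par

# Hypothetical boundary criteria and a clear-water Kd(PAR) of 0.05 m^-1
z_upper = light_depth(0.05, 0.10)    # e.g. 10% of surface PAR
z_lower = light_depth(0.05, 0.001)   # e.g. 0.1% of surface PAR
print(f"mesophotic zone: {z_upper:.0f}-{z_lower:.0f} m")
```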
Our study aims at providing information to support management plans and conservation measures targeting the mesophotic natural resources and at representing a baseline to monitor future variations of the mesophotic zone in the Mediterranean Sea in relation to global changes.
Multispectral Electronic Self-Scanning Radiometer (MESSR) and Visible and Thermal Infrared Radiometer (VTIR) products from the National Space Development Agency of Japan (NASDA) Marine Observation Satellites (MOS-1 and -1b) were received by the European Space Agency (ESA) during the 1980s and 90s – the missions also carried a two-channel Microwave Scanning Radiometer (MSR), but that data was not the focus of this activity.
MESSR data is similar to that of the USGS Landsat missions, with a swath width of 100 km, 50 m spatial resolution and four multispectral bands in the blue, green, red and Near InfraRed (NIR). VTIR data is similar to that of the NOAA AVHRR missions, with a 1500 km swath width, a visible band at 900 m resolution and three NIR to thermal IR bands at 1.1 km resolution. The MOS satellites orbited at an altitude of around 900 km with a revisit period of 17 days; when both satellites were operating, the revisit period halved, as they shared the same orbit separated by 180 degrees.
ESA has developed, through Exprivia, a processor that is currently undergoing validation and testing. The Data Services Initiative (DSI) will generate Level-1 data within a bulk processing campaign, and the Quality Assurance for Earth Observation (QA4EO) service is responsible for the Quality Control (QC) of the dataset. The overarching aim is to allow the MESSR and VTIR data from these historical missions to be accessible by the scientific community in GeoTIFF format. The data will be held within a SAFE ZIP file containing KML and PNG files for visualisation plus XML metadata and CSV quality report files.
The data is geometrically, but not terrain, corrected using Ground Control Points (GCPs) when there is sufficient visibility for MESSR. If insufficient GCPs are matched, then GES (Geocoded Ellipsoid System Corrected) rather than GEC (Geocoded Ellipsoid GCP Corrected) data is generated for MESSR. The VTIR data is always GES, as GCPs are not used.
Both datasets have a limited radiometric correction, with Digital Numbers (DNs) generated. Therefore, the Earthnet Data Assessment Pilot (EDAP) has undertaken a preliminary cross-comparison of the MESSR data to Landsat-5 Thematic Mapper data to determine whether a conversion to radiometric units is possible. Previous processors, operated historically by NASDA and Canada, undertook such cross-mission comparisons to create MESSR and VTIR data in radiometric units. However, work is needed to understand whether the required ancillary data is available and can be extracted from the ESA archive.
This poster will present the work undertaken through DSI and QA4EO for the bulk processing and QC alongside the EDAP radiometric cross-comparison exercise.
Detecting dune movement rates in desert margins of Central Asia over five decades based on satellite imagery
Lukas Dörwald, Frank Lehmkuhl, Georg Stauch
The detection of dune movement rates has been carried out in the field since the 20th century and through remote sensing since the technical requirements were met in the 1970s (Hugenholtz et al. 2011). A wide variety of satellite images from the last four decades are freely available in the Sentinel-2 and Landsat-5 to -8 archives, with spatial resolutions ranging between 10 and 25 meters. Complementing these data sources, in this study we go even further back in time, using CORONA KH-4B images from the 1960s and 1970s. These satellites were originally used to record military intelligence images, which were declassified for scientific use in 1995. Despite its age, the KH-4B satellite delivered a remarkably high spatial resolution of up to 1.8 m, thus bridging a considerable time gap in high-resolution imagery and enabling the detection and mapping of features such as single dunes and dune fields. After georeferencing, these images are used to detect and quantify sand dune movement and the respective rates. First results of such measurements from the Gonghe Basin are presented, focusing on single dune movement rates and covering the full range of dunes within the study area. The movement directions already show a good fit to the prevailing wind patterns, validating the mapped results. Furthermore, the rates are analyzed over different time intervals to check for a trend towards faster or slower movement.
In the arid and semi-arid areas of Central Asia, the effects of climate and land-use change are already visible. In the last 50 years, the period covered by the remote sensing data used here, changes in the morphology and movement patterns of dunes and dune fields were observed in different dryland regions of northern China and in southern and western Mongolia (Qi et al. 2021). The behavior of the dunes, expressed mainly by movement rates and directions, will further be compared to reanalysis data such as ERA5 and to indices such as the NDVI. All further study areas are also located at the north-eastern edge of the Asian summer monsoon and the mid-latitude Westerlies, making them especially sensitive to climatic changes (Vimpere et al. 2020). The aim of the joint Chinese-German BMBF project DUNE is to utilize dunes as indicators of local to regional climatic changes during the last 50 years, combined with modelling approaches.
References:
Hugenholtz et al. 2011: Remote sensing and spatial analysis of Aeolian sand dunes: A review and outlook
Qi et al. 2021: Variations in aeolian landform patterns in the Gonghe Basin over the last 30 years
Vimpere et al. 2020: Continental interior parabolic dunes as a potential proxy for past climates
Riverside alluvium areas are prone to consolidation settlements that can induce serviceability and/or structural damage to constructions. Several techniques may be applied to monitor soil displacements, usually complemented by in-situ measurements; however, the use of satellite SAR images to monitor ground deformation has increased in recent decades due to the high number of available satellites with better spatial resolution and shorter revisit periods. For this purpose, several approaches (e.g., PSInSAR, SBAS) have been developed and proven on urban areas, providing monitoring over wide areas.
To analyse the applicability of SAR images to the monitoring of consolidation settlements, a case study of an industrial area close to the Tagus River, in the northern part of Lisbon (Portugal), was selected. The site was built in 2006 over alluvial soils and earthfill.
The goal of the work was to analyse the ground displacements for the case study area and its surroundings, using the long-term PSInSAR series acquired by ENVISAT and Sentinel-1 from 1996 to 2020. The InSAR time series of displacements were combined with in-situ data.
From 2006 to 2015, an overall vertical displacement of ~1 m was measured by in-situ techniques. The ENVISAT data allow displacement rates of 50 mm/yr to be estimated for the 1992-2003 period, while the displacement rates from the Sentinel-1 processing are 1 mm/yr in both the vertical and horizontal directions.
These large displacements need to be monitored and understood in order to prevent major damage and identify patterns.
Finally, the link between in-situ data and Sentinel-1 time series was made possible thanks to the heritage mission of ENVISAT. After combining the SAR datasets, the chronology of the displacements was identified from trigger to temporal evolution. However, monitoring of signals from satellites and in-situ techniques should be continued to constrain the long-term temporal evolution of motions and predict potential geohazards and risks.
The now 43-year-long time series of sea-ice extent (SIE) and area (SIA) are headline indicators of climate change (Trewin et al., 2021). The interested public follows their seasonal evolution and potential record low and high values on online trackers, and climate scientists benchmark their models against them. These climate indicators are based on a multi-mission data record of sea-ice concentration (SIC), itself derived from brightness temperature measurements by passive microwave missions since the 1970s.
Over the past few years, we conducted a coordinated R&D effort from the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF) and the ESA Climate Change Initiative (CCI) programme. It has resulted in a collection of state-of-the-art sea-ice concentration climate data records, and their operational extensions (Lavergne et al., 2019). Version 2 of these data, released in 2017, was successfully transferred to the Copernicus Marine (CMEMS) and Climate Change Service (C3S), informed the IPCC Assessment Report 6 Working Group 1 report, and were used in the C3S reanalyses.
In this contribution, we introduce the latest version of these SIC CDRs, Version 3. We present key elements of the algorithm baseline as well as characteristics of the products. The algorithm baseline was designed to ensure climate consistency across the satellite missions and to avoid potential artificial trends in the input and auxiliary data. The algorithms feature 1) dynamic tuning of the algorithms and their tie-points, 2) reduction of the retrieval uncertainties using radiative transfer models, and 3) per-pixel uncertainties. Specific R&D during Phase 1 of the CCI+ extension project led to an improved spatial resolution of the SIC data record, exploiting the near-90 GHz imagery channels available since the early 1990s. We have further explored the feasibility of adding Nimbus 5 ESMR data from 1972 to 1977 to the record for a future update. The product files are designed with several user communities in mind and allow, e.g., accessing more "raw" SIC data (before the last filters are applied) for data assimilation.
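For readers unfamiliar with tie-point based SIC retrieval, the following one-channel sketch shows the textbook linear form that dynamic tie-point tuning periodically recalibrates; the actual OSI SAF/CCI algorithms are multi-channel and more elaborate, and the tie-point values here are hypothetical.

```python
import numpy as np

def sic_linear(tb, tb_open_water, tb_ice):
    """One-channel linear SIC estimate between dynamically tuned tie-points
    (a textbook simplification of multi-channel operational algorithms)."""
    sic = (tb - tb_open_water) / (tb_ice - tb_open_water)
    return np.clip(sic, 0.0, 1.0)

# Hypothetical brightness temperatures (K) and tie-points for one channel
tb = np.array([170.0, 210.0, 250.0])
print(sic_linear(tb, tb_open_water=160.0, tb_ice=255.0))
```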
We also summarize results from an early evaluation against the Round Robin Data Package (Pedersen et al., 2019) and comparison to navigational ice charts. We particularly focus on the temporal consistency aspects across the different satellite missions from the 1970s to present.
References:
Lavergne, T., Sørensen, A. M., Kern, S., Tonboe, R., Notz, D., Aaboe, S., Bell, L., Dybkjær, G., Eastwood, S., Gabarro, C., Heygster, G., Killie, M. A., Brandt Kreiner, M., Lavelle, J., Saldo, R., Sandven, S., and Pedersen, L. T.: Version 2 of the EUMETSAT OSI SAF and ESA CCI sea-ice concentration climate data records, The Cryosphere, 13, 49–78, https://doi.org/10.5194/tc-13-49-2019, 2019.
Pedersen, Leif Toudal; Saldo, Roberto; Ivanova, Natalia; Kern, Stefan; Heygster, Georg; Tonboe, Rasmus; et al. (2019): Reference dataset for sea ice concentration. figshare. Dataset. https://doi.org/10.6084/m9.figshare.6626549.v7
Trewin, B., Cazenave, A., Howell, S., Huss, M., Isensee, K., Palmer, M. D., Tarasova, O., and Vermeulen, A.: Headline Indicators for Global Climate Monitoring, Bulletin of the American Meteorological Society, 102(1), E20-E37, https://journals.ametsoc.org/view/journals/bams/102/1/BAMS-D-19-0196.1.xml, 2021.
Measurements Of Pollution In The Troposphere (MOPITT) on the NASA Terra spacecraft has been measuring the global atmospheric abundance of carbon monoxide (CO) since March 2000. Direct emissions of CO are mainly produced by incomplete combustion from both natural fires and anthropogenic activities, and CO is also produced chemically from methane and volatile organic carbon (VOC) species. Although CO has a negligible contribution to greenhouse gas absorption, it does play an important role in atmospheric chemistry and climate because it is a dominant sink for the hydroxyl radical (OH) and thus affects the abundances of methane (CH4) and ozone (O3). Because of these interactions, the IPCC AR6 estimated that anthropogenic emissions of CO have a significant indirect radiative forcing of 0.23 W/m2. The MOPITT record is long enough to detect significant trends in atmospheric pollution and to assess changes in emissions due to regulations, agricultural burning, technology improvements to combustion efficiency and increasing wildfires due to a warming climate. We will present an overview of the MOPITT data record, describe how our algorithms have adapted to instrument changes on-orbit, and discuss the continuation of the MOPITT record with recent and planned satellite CO observations.
The Centre for Environmental Data Analysis (CEDA) holds a large, heterogeneous archive of environmental data (over 7000 datasets and in excess of 340 million individual files), dominated in volume by large datasets from the Sentinel missions and climate model simulations (CMIP5 and CMIP6).
In recent years, STAC (SpatioTemporal Asset Catalog) has gained traction in the Earth Observation community as a standard for cataloguing and searching satellite data products. We present our work to extend the application of STAC, firstly as a universal interface for all of CEDA's varied data holdings; secondly, we report on a large collaborative undertaking with US and European partners to apply STAC as the new standard interface for data discovery in the Earth System Grid Federation (ESGF). ESGF is a globally distributed data archive, including over 20 hosting sites (nodes), which holds key climate-related datasets including CMIP5, CMIP6, CORDEX and Obs4MIPs.
Collaboration efforts have included active engagement with the US Pangeo community, whose starting point has been a desire to develop a standards-based solution for static cataloguing of analysis-ready copies of CMIP data serialised to object storage on Google Cloud using Zarr. This has borne fruit in a broader effort with US partners, including GFDL (Geophysical Fluid Dynamics Laboratory), with the first public cloud-hosted ESGF node deployed on AWS.
The legacy ESGF search system benefited from a system of controlled vocabularies supporting faceted search: the Data Reference Syntax. Using STAC's Filter Extension, it has been possible to implement this in a fully-featured STAC API for ESGF. This filter feature enables users to effectively identify data along the dimensions dictated by each search facet. Together with the adoption of new object store technologies, we see potential for this to enable users to select arbitrary subsets of data for processing and analysis which were hitherto impracticable.
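A minimal sketch of how such a faceted query could look against a STAC API implementing the Filter Extension, using the pystac-client library; the endpoint URL, collection id and facet names below are illustrative stand-ins, not the actual ESGF deployment.

```python
from pystac_client import Client

# Hypothetical STAC endpoint; the real ESGF API location is not given here.
client = Client.open("https://api.stac.example.org/v1")

# CQL2-JSON filter expressing two Data Reference Syntax facets
# (facet names such as 'experiment_id' and 'variable_id' are illustrative).
search = client.search(
    collections=["cmip6"],
    filter={
        "op": "and",
        "args": [
            {"op": "=", "args": [{"property": "experiment_id"}, "historical"]},
            {"op": "=", "args": [{"property": "variable_id"}, "tas"]},
        ],
    },
    filter_lang="cql2-json",
)
for item in search.items():
    print(item.id)
```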
Looking beyond the ability to describe search facets for a given dataset, we have made advances with the use of linking vocabularies which will enable cross-walking of search facets across different data types. Alongside ESM data, Sentinel and airborne datasets (FAAM) have been integrated into the search index as part of broader system testing. We will be seeking to expand this work to obtain near full coverage of CEDA’s data archive. Our experience, therefore, is that there is the potential for greater convergence in this data discovery and cataloguing technology beyond Earth observation data to a broad coverage across the environmental sciences.
Besides the adoption of STAC as a frontend search API, we describe the underlying engineering that facilitates a modular and scalable system as part of a living data archive, able to index content from object stores on public cloud as well as from more traditional on-premise systems. This has involved the use of the Elasticsearch NoSQL database and scalable systems for data ingestion built on the RabbitMQ message-passing system. This infrastructure shares commonality with a parallel development effort whose goal is to migrate data between different storage media in an agile and seamless manner to better support user analyses and processing workflows. That solution utilises object store as a buffer interface to cold storage and will likely build on our work with STAC as part of a common data model.
Earth Observation satellite data is an integral part of current environmental activities; it represents one of the most valuable, costly and irreplaceable assets from space, and it is therefore essential to manage it meticulously. In 2012, ESRIN, ESA's centre responsible for the management of Earth observation data, commissioned the 'Data Service Initiative' (DSI) ITT as a flexible and powerful industrial tool to consolidate, bulk-process, maintain and manage large data sets of ESA and Third Party missions.
The X-PReSS consortium led by Serco has managed the DSI service since 2013, with three major lines of action. The first consists of collecting data from various sources, harmonizing and cleaning them, filling gaps where possible, and generating consolidated master datasets. The second major activity is to reprocess or bulk-process these datasets (by means of processors provided as CFI by ESA) and generate higher-level data with increased accuracy and quality. The third, overarching line of activity in DSI has been the thorough management and maintenance of all the data received, treated and repatriated by DSI through a solid Data Management System, essential for the preservation and exploitation of the data. DSI took a novel approach to the data management tasks, and here we will present: the story; the added value to these unique assets; and the innovative service approach, based on cost modelling, predictable cost, and thorough data configuration control and data management, used to reach these results.
Sentinel ARD and EO data and services at CEDA
Ed Williamson & Steve Donegan
The Centre for Environmental Data Analysis (CEDA) provides access to a very large archive of Earth Observation and Atmospheric Science data. The CEDA archive is currently over 20 petabytes in size and in the last year over 650 terabytes of data has been directly downloaded by users of the CEDA archive.
The Joint Nature Conservation Committee (JNCC) and Department for Environment, Food and Rural Affairs (Defra) are producing Analysis Ready Data (ARD) for most areas of the UK from Sentinel 1 and Sentinel 2 products. These are supplied to CEDA from the Defra Earth Observation Data Service (EODS) and the JNCC Simple ARD service and will ensure access to this data in the long term. Data from the JNCC Simple ARD service is processed using the JASMIN data-intensive supercomputer. These products are delivered in Cloud Optimised GeoTiff (COG) format for users to access.
The CEDA processing facilities provided by JASMIN allow the ARD data products to be easily accessed by users wishing to process this and any other data into derivative products. Users can download the data via a simple web (DAP) interface, or process it in a dedicated Group Workspace (GWS) located in the JASMIN storage component. The DAP interface also allows users to pull data directly into GIS applications without having to explicitly download the products. Data are currently searchable via the CEDA metadata catalogue, via our GUI satellite data finder, or via an OpenSearch endpoint. CEDA is dedicated to providing new interfaces and access to the data using the latest methods, such as SpatioTemporal Asset Catalogues (STAC), across the CEDA archive contents.
The Sentinel ARD data is located alongside an archive of conventional MODIS and Sentinel-1, -2, -3 and -5P data, with product coverage optimised and coordinated by the National Centre for Earth Observation (NCEO). CEDA also provides access to data from the ESA CCI and EUMETSAT, as well as data from the UK Airborne Research and Survey Facility (ARSF). Users are able to access all this, plus other products, from CEDA via the web and using JASMIN.
The exponential growth in the number of satellites in orbit and in the connected Earth Observation data streams, now reaching petabytes, requires archive operators to find ways to store data in a secure and cost-effective way. In addition, the user community demands to receive data within minutes or hours instead of days. Upgrading the data archives requires careful management of existing EO products together with handling of the live data feed. Transferring to cloud-based infrastructure, object-oriented databases, and faster and more dynamic data storage means that real-time data overwatch mechanisms, versioning and data provenance need to be applied during the upgrades and continued in the operational phase.
To tackle these challenges, there is an ongoing technology demonstration on the ESA Heritage missions' long-term data preservation archive. By applying blockchain technology to the data collection, storage and dissemination processes, verification capability and a chain of custody for the EO data processing chain can be delivered. The focus is to engage in the data collection process as a first step, followed by setting up the technology to ease the transformation to a new data archiving platform, creating the link between the existing data products and the new data structure and system setup, and improving the operational EO data archiving and processing platform. The expectation is that blockchain technology should enable a full end-to-end chain of custody for data through its lifecycle and guarantee the integrity of process for applications that collect data and make decisions based on actionable reports.
Blockchain is a particular type of data structure used in some distributed ledgers, which stores and transmits data in packages called 'blocks', connected to each other in a digital 'chain'. Blockchains employ cryptographic and algorithmic methods to record and synchronize data across a network in an immutable manner. These features can be applied to long-term data series in real time to provide visibility when vast amounts of data are transferred between the different entities responsible for EO data storage, processing, and dissemination. Loose mechanisms for data handover, versioning, and the naming of inputs and outputs can result in errors and the loss of valuable data.
Applying the data provenance and origin verification mechanisms to L0, L1 and L2 product generation will result in tokenizing each step, delivering evidence to the database operator that can be provided to all users who request the data and enabling visibility into the full processing chain.
Two key elements are the focus of the current technology solution. The first is to complement the transfer of long-term data series to an object-oriented database structure with provenance and backward-compatibility mechanisms. The second is to make use of object-oriented database properties for visibility, versioning and the reduction of complex data management.
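A minimal hash-chain sketch of the provenance idea in Python: each block's hash covers the previous block, so any alteration of an earlier processing record invalidates the chain. The product identifiers and payload fields are hypothetical, and a production system would use a distributed ledger rather than an in-memory list.

```python
import hashlib
import json
import time

def add_block(chain, payload):
    """Append a provenance record whose hash covers the previous block,
    so any later alteration of the history is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

chain = []
add_block(chain, {"step": "L0 ingestion", "product": "EXAMPLE_L0_0001"})
add_block(chain, {"step": "L1 processing", "product": "EXAMPLE_L1_0001",
                  "parent": "EXAMPLE_L0_0001"})

# Integrity check: every block must reference the hash of its predecessor
assert all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
           for i in range(1, len(chain)))
```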
The potential of Iceye's X-band SAR data for mapping soil and vegetation characteristics over an agricultural area in Spain is analyzed and compared to that of Sentinel-1 C-band SAR data for the second half of the 2021 irrigation season. Specifically, the capabilities of Iceye data in mapping soil roughness and crop type are investigated over areas where surface and volume backscatter, respectively, are predominant. As previous contributions have demonstrated, the X-band SAR signal is indeed sensitive to soil roughness and texture over bare soil (Baghdadi et al., 2008; Anguela et al., 2010) and to crop type over vegetated areas (Paloscia et al., 2014).
At first, a supervised classification is computed for every Iceye image to distinguish the areas characterized by the two main scattering mechanisms. A further unsupervised classification (K-Means) is then applied to areas characterized by the same dominant backscatter mechanism. The latter classification maps homogeneous areas ascribable to different soil roughness characteristics or to different crop types, depending on the prevailing backscattering mechanism.
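A minimal sketch of the unsupervised step using scikit-learn's K-Means on per-pixel backscatter features; the synthetic input and the number of clusters are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for calibrated backscatter (dB) of one scattering-mechanism class;
# rows are pixels, columns are features (e.g. sigma0 in two channels).
rng = np.random.default_rng(0)
sigma0 = np.vstack([rng.normal(-14, 1.0, (500, 2)),
                    rng.normal(-9, 1.0, (500, 2))])

# Unsupervised grouping into candidate roughness / crop-type classes;
# the cluster count would be tuned per scene in practice.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(sigma0)
print(np.bincount(labels))  # pixels per cluster
```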
The above-mentioned classifications are also applied to Sentinel-1 imagery (VV and VH bands) acquired on the same dates as the Iceye data. The classification maps obtained from the two missions are then analyzed and compared. Both Iceye and Sentinel-1 data are preprocessed with the SNAP and OrfeoToolBox open-source software.
Finally, the maps are tested in an algorithm for retrieving surface soil moisture variations from the Sentinel-1 VV band. They are used to identify homogeneous areas over which to estimate the parameters of the model used in the retrieval algorithm. A change detection method is applied to the VV band of Sentinel-1 to estimate the changes in the SAR signal ascribable to variations in soil moisture conditions. In-situ soil moisture data are available for the study area and are used for calibrating and validating the model. The measurement network is equipped with 24 measurement stations included in the International Soil Moisture Network.
REFERENCES
Anguela T.P., Zribi M., Baghdadi N. and Loumagne C. (2010), Analysis of Local Variation of Soil Surface Parameters With TerraSAR-X Radar Data Over Bare Agricultural Fields, IEEE Transactions on Geoscience and Remote Sensing, vol. 48 (2), DOI: 10.1109/TGRS.2009.2028019.
Baghdadi N., Zribi M., Loumagne C., Ansart P. and Anguela T.P. (2008), Analysis of TerraSAR-X data and their sensitivity to soil surface parameters over bare agricultural fields, Remote Sensing of Environment, vol. 112 (12), DOI: 10.1016/j.rse.2008.08.004.
Paloscia S., Santi E., Fontanelli G., Montomoli F., Brogioni M., Macelloni G., Pampaloni P. and Pettinato S. (2014), The Sensitivity of Cosmo-SkyMed Backscatter to Agricultural Crop Type and Vegetation Parameters, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7 (7), DOI: 10.1109/JSTARS.2014.2345475.
The Earthnet Data Assessment Pilot (EDAP) is an ongoing European Space Agency (ESA) project to assess the quality and suitability of data from third-party missions operated by other space agencies and commercial contractors.
As an organization founded on the principles of using space for peaceful purposes, it seems fitting that ESA remain a leading organization by planning to work with major players around the globe to gain access to and leverage existing technologies, thereby strategically positioning Europe for the future, enhancing local monitoring capabilities and driving innovation.
Private contractors are innovating at a rapid pace, driving the development of space technology. This paper will provide a use case for the advantages of using a European company such as HEAD Aerospace, based in Paris, France, as a data partner to supply satellite imagery from a wide and expanding range of Chinese sensors, which already includes advanced technologies such as night imaging and video, operational hyperspectral sensors, sub-metre multispectral imaging multiple times per day, and rapid collection capabilities using sensors with up to 150 km swath at 50 cm resolution.
Satellite imagery supplied through HEAD Aerospace from the following missions is currently being considered as part of the Very High Resolution (VHR)(a), High Resolution (HR) and Medium Resolution (MR) optical domain of EDAP, for which Quality Control assessments will be performed. Further missions are expected to be added throughout the project.
Jilin-1 Constellation
The Jilin-1 (JL) satellite constellation consists of commercial Chinese Earth Observation sensors manufactured and operated by Chang Guang Satellite Technology Limited (CGSTL) and commercialized globally by HEAD Aerospace as CGSTL's strategic master distributor. There are currently(b) 33 on-orbit JL satellites, of which 31 offer sub-metre resolution optical imagery. The JL constellation will be further expanded, with a confirmed launch schedule of 35 satellites planned for 2022. The full JL constellation is expected to comprise between 130 and 150 satellites by 2023, targeting a 15-minute revisit at global scale.
The wide choice of sensors in the JL constellation is indeed significant, highlighting the innovation in remote sensing technology. These sensors demonstrate the capability for very large-scale monitoring (e.g. 40,000 km² at once) at 0.5 m resolution thanks to the extra-wide 150 km swath of the EarthScanner satellite. New technology innovations include the NightVision constellation, with 9 on-orbit satellites capturing night imagery at 1 m resolution for illegal-activity and light-pollution surveillance; video from space at 1 m resolution, with use cases such as vehicle speed measurement; and optical and hyperspectral imagery for various applications including environmental monitoring, forest management, energy, mining and land planning. In addition, the two operational HyperScan satellites offer double the number of spectral bands at half the resolution compared to Sentinel, providing cost-effective operational data complementary to Sentinel-based applications and ideal for agriculture monitoring, deforestation mapping and water resource monitoring.
The EDAP optical team has been assessing the data provided for the following constellations and satellites:
NightVision & Video constellation (JL1-SP & JL-1GF3)
EarthScanner (JL1KF1A/B)
JL Stereo (JL1GF02A,D,F & JLGXA)
DailyVision (JL1GF03)
HyperScan (JL1GP1/2)
Please find additional constellation and sensor details attached.
References:
(a) https://earth.esa.int/eogateway/activities/edap/vhr-hr-mr-optical-missions
(b) As of end November 2021
Planet is developing its next-generation fleet of very high resolution satellites, called Pelican, that are scheduled to begin launching in 2023. This new constellation of satellites will complement the core Copernicus Sentinel missions by providing an extremely reactive, very high resolution acquisition capacity for anywhere on Earth, several times per day at different hours, with very low latency. Natural disaster and emergency management will benefit from Pelican's true high temporal resolution, while other Copernicus services such as Security, Climate Change, and Land will benefit from its reactivity, its imaging capacity, and its image quality and accuracy. As a potential Third Party Mission, it will open up a wide range of new applications which have not been considered to date, as intra-day revisit capabilities have not previously been available.
This presentation will provide a deeper look into the Pelican constellation, its characteristics, its development status, and its anticipated performance. We will share insight into the orbital choices, the resulting mean time to access (MTTA) and the selected approach to achieve high reactivity and low latency. We will also highlight key characteristics of the payload such as the spectral definition, the satellite design and performance, agility and geolocation accuracy. Examples of utilisation of the specific Pelican capabilities will be given in various domains.
When monitoring or responding to natural hazards or disasters, end users require data that is easy to access and understand. Synthetic Aperture Radar (SAR) sensors can image the Earth's surface regardless of cloud conditions or light availability, making SAR a valuable dataset for mapping surface processes during natural disasters. However, SAR data have traditionally been difficult to process, access, and interpret, limiting their uptake by a number of user communities.
The NASA Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) hosts and serves the entire Sentinel-1 SAR archive from the NASA Earthdata Cloud, which is built on Amazon Web Services (AWS) infrastructure. Analysis-ready Sentinel-1 products, such as Radiometric Terrain Corrected (RTC) datasets, can be processed on demand to user specifications using ASF’s Hybrid Pluggable Processing Pipeline (HyP3) platform. HyP3 leverages cloud computing, processing the data in the same cloud environment used for storage. While this service makes analysis-ready imagery available to users quickly and efficiently, most users still need to download the data before it can be explored or analyzed.
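For illustration, an on-demand RTC request might be submitted with the hyp3_sdk Python package along the following lines; the granule name is a placeholder, and Earthdata Login credentials are assumed to be available (e.g. via ~/.netrc).

```python
from hyp3_sdk import HyP3

# Authentication is assumed to be configured, e.g. through ~/.netrc
hyp3 = HyP3()

# Submit an on-demand RTC job for a Sentinel-1 granule (name is a placeholder)
job = hyp3.submit_rtc_job(
    granule="S1A_IW_GRDH_1SDV_20200101T000000_20200101T000025_030000_036FEF_ABCD",
    name="rtc-example",
)
job = hyp3.watch(job)   # poll until the cloud-side processing completes
job.download_files()    # only this final step pulls data out of the cloud
```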
As part of ASF's mission to make remote sensing data more accessible, we have been exploring ways to make it easier for users to interact with our on-demand products. By publishing these cloud-optimized analysis-ready SAR datasets to ArcGIS image services directly from cloud storage, ASF allows end users to access and explore large datasets with extensive spatial and temporal coverage without having to download any data. The data is accessed via a REST Endpoint, and the source rasters remain in cloud storage. Automated update scripts ensure that the services are always displaying the most recently processed data for end users to access.
In this workflow, the data remains in the cloud from start to finish. From the Sentinel-1 archive stored in the NASA Earthdata Cloud, to HyP3 processing using EC2, to analysis-ready output products stored in S3 buckets, to an Image Server hosted on an EC2 instance, the data never leaves the AWS cloud environment. Image services bridge the final gap, allowing users to interact with the data contained in the analysis-ready rasters directly in the cloud.
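As a sketch of how an end user could pull a rendered subset from such a service without downloading the source rasters, the standard ArcGIS REST exportImage operation can be called directly; the service URL and bounding box below are hypothetical.

```python
import requests

# Hypothetical ImageServer endpoint; ASF's actual service URLs are not given here.
SERVICE = "https://example.org/arcgis/rest/services/SAR_RTC/ImageServer"

# The ArcGIS REST 'exportImage' operation renders a subset of the mosaic
# server-side, so only the rendered image, not the source rasters, is moved.
params = {
    "bbox": "-150.5,61.0,-149.0,62.0",  # xmin, ymin, xmax, ymax (illustrative)
    "bboxSR": 4326,
    "size": "1024,1024",
    "format": "tiff",
    "f": "image",
}
resp = requests.get(f"{SERVICE}/exportImage", params=params, timeout=60)
resp.raise_for_status()
with open("rtc_subset.tif", "wb") as f:
    f.write(resp.content)
```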
This presentation will discuss the benefits of analysis-ready SAR data for disaster response, describe the processing workflow, and demonstrate the functionality of image services, which allow immediate access to analysis-ready SAR datasets over large spatial and temporal extents.
The provision of geospatial information and datasets to federal public authorities in Germany is a prime duty of the Federal Agency for Cartography and Geodesy (Bundesamt für Kartographie und Geodäsie; BKG). More recently, an increased demand for very high-resolution Earth observation (VHR-EO) data and for their availability in near real-time has been observed among German federal users. However, knowledge about the availability of VHR-EO data to national public authorities (e.g. via the Copernicus Contributing Missions) and funding for the procurement of such data are often limited. Dedicated services that allow the full potential and synergetic use of VHR-EO data to be exploited are required. This article explores BKG's current activities aimed at developing common access to commercial EO data for German federal users.
The needs and requirements of existing and potential federal users for satellite-based EO data (130 federal & federal research institutions) were examined by BKG in the form of a web-survey in 2020. One of the major results of the web-survey was an expressed need for a simple and readily usable access to VHR EO data and services that address different user levels. Accordingly, BKG set up a Federal Service Point of Remote Sensing (Servicestelle Fernerkundung; SF) with the aim to address many of these user needs. Its knowledge-based and data-based services are currently under preparation. SF is part of BKG’s Satellite-based Crisis and Spatial Information Services (Satellitengestützter Krisen- und Lagedienst; SKD) and further information on the SKD portfolio may be obtained from an additional abstract by Suresh et al., 2021.
Owing to their differing priorities and knowledge of EO data, the heterogeneity of needs among federal users necessitates a variety of data retrieval and provision mechanisms as well as different levels of assistance. Provision ranges from viewing-only to archive and tasking services for different product levels and sensors. Shared usage of EO data, as enabled by a federal governmental licence, together with the collection and provision of EO data accessible through a common infrastructure, is regarded as key to efficient procurement and a sustainable EO data service within the German federal administration.
With the concluded framework agreements for data acquisition and provision, the initial operations of our services will start from 2022 and will include exclusive access for BKG and German federal users to a variety of VHR-EO optical and radar missions as well as technical consultation. With this new type of infrastructure, BKG is enabling digitalization and new applications in the federal administration. At the same time, BKG is committed to continuously ensuring that federal users and political decision-makers obtain the objective and reliable data they need.
We will present the current status of services of the SF including preliminary results from the initial test and evaluations conducted with selected federal users. We will also highlight the upcoming operational services scheduled to be launched from the middle of 2022.
The scope of this presentation is to provide an update on the ESA radar altimetry services portfolio for the exploitation of CryoSat-2 (CS-2) and Sentinel-3 (S-3) data from L1A (FBR) data products up to SAR/SARin L2 geophysical data products. At present, the following on-line & on-demand services compose the portfolio:
- The ESA-ESRIN SARvatore (SAR Versatile Altimetric TOolkit for Research & Exploitation) for CS-2 and S-3 services. These processor prototypes allow the users to customize the processing at L1b & L2 by setting a list of configurable options, including those not available in the operational processing chains (e.g. SAMOSA+ and ALES+ SAR retrackers).
- The TUDaBo SAR-RDSAR (TU Darmstadt – U Bonn SAR-Reduced SAR) for CS-2 and S-3 service. It allows users to generate reduced SAR, unfocused SAR & LRMC data. Several configurable L1b & L2 processing options and retrackers (BMLE3, SINC2, TALES, SINCS, SINCS OV) are available.
- The TU München ALES+ SAR for CS-2 and S-3 service. It allows users to process official L1b data and produces L2 products by applying the empirical ALES+ SAR subwaveform retracker, including a dedicated SSB solution.
- The Aresys FF-SAR (Fully-Focused SAR) for CS-2 service. Currently under validation, it will provide the capability to produce L1b products with several configurable options and with the possibility of appending the ALES+ FFSAR output to the L1b products.
In the future, these services will be extended and the following new services will be made available: the Aresys FF-SAR services for S-3 & Sentinel-6, the CLS SMAP S-3 FF-SAR processor (s-3--smap) and the ESA-ESTEC/isardSAT L1 Sentinel-6 Ground Prototype Processor.
All output data products are generated in standard netCDF format, and are therefore also compatible with the multi-mission “Broadview Radar Altimetry Toolbox” (BRAT, http://www.altimetry.info).
The SARvatore Services are being migrated from the ESA G-POD (https://gpod.eo.esa.int/) to the Altimetry Virtual Lab, a community space for simplified services access and knowledge-sharing. It will be hosted on EarthConsole (https://earthconsole.eu), a powerful EO data processing platform now also on the ESA Network of Resources. This enables SARvatore Services to remain open for worldwide scientific applications (info at altimetry.info@esa.int).
Monitoring land degradation (LD) to improve the measurement of the sustainable development goal (SDG) 15.3.1 indicator (“proportion of land that is degraded over a total land area”) is key to ensuring a more sustainable future. Achieving land degradation neutrality (LDN) has been proposed as a way to stem the loss of land resources globally. To date, LDN operationalization at the country level has remained a challenge both from a policy and science perspective. Current frameworks rely on default medium-resolution remote sensing datasets available to assess LD and cannot identify degradation patterns at a higher level of detail.
Using an approach that combines cloud-based geospatial computing, high spatial resolution imagery (i.e., Landsat) and machine learning, national-level datasets of land cover, land productivity dynamics, and soil organic carbon stocks were developed. We tested this approach at the national level over Botswana and at the regional level in Tanzania.
Using the example of Botswana, LDN and the proportion of degraded land were assessed. Between 2000 and 2015, grassland lost approximately 17% of its original extent, the highest level of loss for any land category; land productivity decline was highest in artificial surface areas (11%), whereas 36% of croplands showed early signs of decline. With the use of national metrics, degraded areas were found to cover 32.6% of the total land area, compared to 51.4% when global default datasets were used. Field data further confirmed the validity of the results based on national metrics.
In Tanzania, we adapted local datasets in interplay with high-resolution imagery to monitor the extent of LD in the semiarid Kiteto and Kongwa (KK) districts from 2000 to 2019. According to the adopted high-resolution data and methodology (AM), 16% of the area in the KK districts was degraded during 2000-2015, whereas the default medium-resolution data and methodology (DM) indicated total LD on 70% of the area. Furthermore, based on the AM, overall, 27% of the land was degraded from 2000 to 2019. To achieve LD neutrality by 2030, spatial planning should focus on hotspot areas and implement sustainable land management practices based on these fine-resolution results.
Beyond demonstrating remote sensing viability for LDN assessment, the study developed procedures for generating and validating national-level datasets that are available for Google Earth Engine users. Using these procedures, LD monitoring will be enhanced in the study areas and elsewhere since these remote sensing datasets can be updated using freely available satellite datasets.
Land degradation poses many serious challenges to ecosystems and sustainable livelihoods in Mongolia. In the eastern Mongolian steppe in particular, increasing human pressure, the growth of extractive industries and oil exploitation, the expansion of dirt roads, and overgrazing combine with climate variability to produce land degradation. Consequently, the main purpose of this study was to map the area affected by land degradation in eastern Mongolian steppe ecosystems and to identify its severity. To map land degradation, we selected 18 potential variables, identified in the literature as potentially important driving factors of land degradation, comprising ecological, geomorphometric, environmental and anthropogenic explanatory variables. Most of the ecological, geomorphometric and environmental explanatory variables were derived from Sentinel-2, Landsat time series or other sources of remotely sensed data. In the literature, percent vegetation cover and aboveground biomass are two key proxies for land degradation. Consequently, both variables were predicted by training random forest regression models against 256 field vegetation samples and Sentinel-2 data, which obtained good accuracies (R2 = 0.76-0.81). For land degradation mapping, we trained a separate random forest classification model using all potential explanatory variables and field measurements of degradation rates as the response variable in the eastern Mongolian steppe (n = 200). The random forest classification model obtained a reasonable overall accuracy of 74.5%. Amongst the 18 potential explanatory variables, annual soil loss, aboveground biomass, soil erodibility, NDVI trends and distance to roads were the most important variables for the estimation of the current land degradation status. The land degradation map shows that most of the study area is affected by some degree of degradation, of which 2,472 km2 (2%) are severely degraded, 7,416 km2 (6%) heavily degraded, 35,843 km2 (29%) moderately degraded, and 71,687 km2 (58%) slightly degraded. Pristine, undegraded grassland sums to an area of 6,180 km2 (5%).
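A minimal sketch of the classification step with scikit-learn's random forest, using synthetic stand-ins for the 18 explanatory variables and the field-surveyed degradation classes; the hyperparameters are illustrative, not those tuned in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X: explanatory variables per field site (e.g. soil loss, biomass, NDVI
# trend, distance to roads); y: surveyed degradation class. Synthetic here.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 18))    # 200 field sites, 18 variables
y = rng.integers(0, 5, size=200)  # 5 degradation severity classes

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

# Out-of-bag score gives a built-in accuracy estimate; feature importances
# indicate which variables drive the classification, as reported above.
print(f"OOB accuracy: {rf.oob_score_:.2f}")
print("Most important variable indices:",
      np.argsort(rf.feature_importances_)[::-1][:5])
```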
The United Nations Decade on Ecosystem Restoration, running from 2021 through 2030, recognises land degradation as a major problem in natural and managed ecosystems across all terrestrial biomes and agro-ecologies. Environmental shifts and the dissolution of ecosystem stability as consequences of land degradation further decrease the ecosystems' ability to respond resiliently to climatic stressors and the impacts of human activities [1]. Land degradation leads to the inability of the soil to fulfil its ecosystem services and functioning. Degraded soils neither sustain the production of food, fibre, and biofuel at desired levels, nor function as carbon sinks. They are also unable to provide nutrient cycling, a habitat for organisms, and flood regulation. Degradation is largely induced by land-use change and by land management practices that impact elements of the environment such as soils and vegetation. Assessment and mapping must precede any implementation of policies aimed at preventing degradation and/or restoring degraded ecosystems.
With improved spatial and temporal resolution as well as big spatial data analytical tools, remotely sensed datasets are increasingly utilized in assessing land degradation. This study contributes an aspect that is lacking in many studies and for which there is an earnest need: the field verification of remote sensing-based land degradation products. Developing a database and maps using the Composite Land Degradation Index (CLDI), we characterised and mapped physical, biological, and chemical land degradation types and indicators [2]. Focusing on agro-pastoral landscapes in dryland contexts, maps of land degradation status and indicators were created with data from the field, physico-chemical and microbial soil properties, and satellite images.
Physiographic units (PUs) were delineated as mapping units based on the intersection of soil units, geological structure, LULC and elevation. Fifteen types of PUs were identified, each homogeneous in its biophysical properties, making the comparison of land degradation meaningful. Beyond assessing land degradation with only physical degradation indicators such as soil erosion, the extent and severity of 17 indicators were assessed. For example, the degree of salinization as a chemical degradation indicator was ascertained based on the level of sodicity derived from soil chemical properties. Palapye, an agro-pastoral region and biodiversity hotspot comprising about 25 settlements in eastern Botswana, was used as the case study region.
At the plot level, microbial properties were analysed in soils collected under different land-use regimes. Soil quality indicators, such as biological properties, and their relationship to specific recognizable ecosystem services are an accepted approach to assessing ecosystem integrity [3]. Focusing on the influence of land-use on soil microbial properties, we examined variations in bacterial diversity in terms of richness (total number of bacterial species) and abundance in three land-use regimes (bareland/saline-impacted, garden or cultivated, and sewage sludge-impacted). Bacterial diversity is a known driver of ecosystem services and is connected to agriculture, nutrient cycling, habitat quality, and carbon sequestration. We characterized abundance and diversity and investigated how soil physico-chemical properties under different land-uses affect bacterial communities. Soil samples were collected at 0 to 15 cm depth, and a 16S rRNA gene-based metagenomics approach [4] was used for high-resolution characterization of bacterial abundance and diversity, whereas routine laboratory procedures were used for analyzing soil physico-chemical properties, e.g., pH, electrical conductivity (a measure of soil salinity), organic matter content (OC), total phosphorus (P), and cation exchange capacity (CEC). The Simpson Index of Dominance was used to assess bacterial diversity, and Sorensen's coefficient was used to assess the similarity of bacterial communities. We evaluated the null hypothesis that there is no difference in the selected soil physico-chemical properties under different land-use types (the independent variable) and established interactions among soil properties. Analysis was conducted at various levels, including phylum-level and class-level abundance, diversity, and distribution of bacterial populations by land use.
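The two diversity measures are straightforward to compute; a minimal sketch with hypothetical OTU counts and phylum lists follows.

```python
import numpy as np

def simpson_dominance(counts):
    """Simpson's Index of Dominance: D = sum(p_i^2) over taxon proportions;
    higher D indicates a community dominated by fewer taxa."""
    p = np.asarray(counts, dtype=float)
    p /= p.sum()
    return np.sum(p ** 2)

def sorensen(a_taxa, b_taxa):
    """Sorensen's coefficient: 2C / (A + B), where C is the number of taxa
    shared between two communities of A and B taxa respectively."""
    a, b = set(a_taxa), set(b_taxa)
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical OTU counts and phylum lists for two land-use plots
print(simpson_dominance([120, 40, 15, 5]))
print(sorensen({"Proteobacteria", "Actinobacteria", "Firmicutes"},
               {"Proteobacteria", "Acidobacteria"}))
```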
That soil bacterial communities are impacted by land-use is evident, with five phyla dominating in the order Proteobacteria > Actinobacteria > Firmicutes > Bacteroidetes > Acidobacteria. With the highest OC, CEC, and clay contents, soils under sludge use showed greater bacterial diversity and abundance, whereas some bacterial communities were absent under the other land-uses (agriculture and saline-impacted). Differences in land-use and management practices are likely responsible for the varying soil properties and bacterial diversity. It is concluded that bacterial diversity was influenced by the degree of habitat disturbance caused by variation in land-use and management practices. This study contributes to the knowledge of soil bacteria and the influence of land-use in urbanizing regions, particularly in drylands.
The database and maps are available as a story map of land degradation: http://www.arcgis.com/apps/MapSeries/index.html?appid=47b31ecd5930432fb4efa182b30608a0. For the mapping of dominant degradation types, we created additional new symbols to extend the list of internationally recognized symbols of land degradation indicators. The creation of these reference datasets facilitated the validation of the Land Degradation Neutrality (LDN) baseline and of Sustainable Development Goal (SDG) indicator 15.3.1 (proportion of degraded land). The integrative and spatially explicit methods developed can be adapted for use in operationalizing Land Degradation Neutrality in all countries.
REFERENCES
[1] Webb NP, Marshall NA, Stringer LC, Reed MS, Chappell A, Herrick JE. 2017. Land degradation and climate change: building climate resilience in agriculture. Front Ecol Environ. 15(8):450–459.
[2] Brabant P. 2010. A global land degradation assessment and mapping method: a standard guideline proposal. Issue 8. France: CSFD.
[3] Yang T, Kadambot H, Liu S. 2020. Cropping systems in agriculture and their impact on soil health: a review. doi:10.1016/j.gecco.2020.e01118
[4] Mhete et al. 2020. Soil properties influence bacterial abundance and diversity under different land-use regimes in semi-arid environments. doi:10.1016/j.sciaf.2019.e00246
In this study we present a novel approach to identify and extract the limits of loess micro-depressions using the Copernicus Digital Elevation Model (GLO-30). The processing was carried out using tools from Free and Open Source Software (FOSS) developed under the umbrella of the Open Source Geospatial (OSGeo) Foundation. The area of interest was the Bărăgan Plain, situated in the south-eastern part of Romania and of prime importance for Romania's agricultural production. The lithology of the study area consists of a thick loess deposit that varies from 15-30 m in the vicinity of the Danube River (southern limit), thinning towards the northern limit. Above the loess deposits, fertile soils of the chernozem class have developed, currently supporting very good agricultural productivity. Due to the loess deposits, a large number of loess micro-depressions have emerged, especially in the southern part of the study area, with dimensions that vary from a few square meters to thousands of square meters.
From the point of view of land degradation processes, loess micro-depressions play an important role by storing water for long periods, which can lead to settlement and salinization. Processes of drainage and gullying can also appear on the ridges of the micro-depressions, leading to degradation of the land adjacent to them. Taking into account the climatic conditions of the region, characterized by intense winds from the north-east especially in winter, and moderate to low annual precipitation but with increasingly frequent extreme weather events in recent years, especially in summer (torrential rains, thunderstorms, and so on), identifying these landforms is increasingly important for combating land degradation.
In the past, identification of loess micro-depressions was done manually using high-resolution satellite colour composite imagery or aerial imagery. With this method, the presence of agricultural crops can make it difficult to identify the landforms correctly. Our approach eliminates some of these disadvantages while automating much of the workflow.
The analysis was realized using the r.param.scale processing tool from GRASS (Geographic Resources Analysis Support System) GIS, a FOSS package under development since 1982. GRASS integrates a large number of processing tools for extracting geomorphometric parameters, along with numerous other algorithms. For this study, we used the GRASS tools integrated within the QGIS Processing Toolbox. The processing was performed on the publicly available Copernicus DEM (GLO-30) at a resolution of 30 m. The result was a raster in which six landform types were identified; it was then converted to vector format, and only the relief forms delimiting the micro-depression rims were preserved.
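To make the workflow concrete, the following is a minimal Python sketch of how such a landform extraction could be scripted with the GRASS GIS API; the layer names, window size, and ridge category number are illustrative assumptions and would need to be checked locally (e.g. with r.category), so this is not the exact processing chain used in the study.

    import grass.script as gs

    # Morphometric feature classification of the imported GLO-30 DEM
    # (method="feature" labels each cell as planar/pit/channel/pass/ridge/peak).
    gs.run_command("r.param.scale",
                   input="glo30_dem",      # hypothetical raster name
                   output="landforms",
                   method="feature",
                   size=9)                 # processing window in cells; tune to landform scale

    # Vectorize the classification and keep only the polygons that
    # delimit the micro-depression rims.
    gs.run_command("r.to.vect", input="landforms", output="landforms_v", type="area")
    gs.run_command("v.extract", input="landforms_v", output="depression_rims",
                   where="value = 5")      # assumed ridge category; verify with r.category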
Drylands cover approximately 40% of the global terrestrial surface and currently support nearly one third of the world's population. Drylands offer important ecosystem services, such as providing pasture to nearly 50% of the world's livestock, and store modest amounts of carbon. However, global drylands are considered fragile ecosystems, mainly due to limited rainfall, and are therefore more likely to be impacted by climate change. East African drylands are prone to recurrent droughts due to the region's high climate variability, and the frequency and intensity of droughts in this region have increased in the past two decades due to climate change. For example, since the year 2000 the region has been experiencing droughts every two to three years, whereas before this time droughts occurred once every five to six years. This increase in the number and intensity of droughts is likely to have a growing impact on the ecosystem health and functioning of these drylands. This study uses Earth Observation data to evaluate the impacts of the frequent droughts in East Africa on dryland ecosystem health and functioning. Furthermore, the study evaluates whether these droughts are leading to land degradation. Specifically, the study investigates (i) how these droughts are impacting dryland ecosystem health and functions (e.g., LAI, albedo, gross primary productivity/carbon sequestration potential, water use efficiency, and evapotranspiration); (ii) how these droughts are impacting dryland vegetation structure and composition; (iii) whether the impacts of droughts are differentiated by plant functional type; and (iv) whether the droughts are accelerating land degradation. Results from this study will provide critical information needed to devise strategies to manage the impacts of droughts, both on the ecosystem and on those who depend on drylands for their livelihoods.
Soil organic matter (SOM) content is an effective indicator of desertification; thus, monitoring its spatial-temporal changes on a large scale is important for combating desertification. However, mapping SOM content in desertified land is challenging owing to the heterogeneous landscape and the relatively low SOM content and vegetation coverage. Here, we modeled the SOM content in the topsoil (0–20 cm) of desertified land in northern China by employing a high spatial resolution dataset and machine learning methods, with an emphasis on quarterly green and non-photosynthetic vegetation information, based on the Google Earth Engine (GEE). The results show that: 1) the machine learning models performed better than the traditional multiple linear regression model (MLR) for SOM content estimation, and the Random Forest (RF) model was more accurate than the Support Vector Machine (SVM) model; 2) the quarterly information on green and non-photosynthetic vegetation was identified as a key covariate for estimating the SOM content of desertified land, and an obvious improvement was observed after combining the Dead Fuel Index (DFI) and the Normalized Difference Vegetation Index (NDVI) of all four quarters (R2 increased by 0.06, the root mean square error decreased by 0.05, the ratio of prediction deviation increased by 0.2, and the ratio of performance to interquartile distance increased by 0.5); in particular, the DFI in Q1 (the first quarter) and Q2 (the second quarter) proved effective for estimating low SOM content (< 1%); 3) finally, a timely (2019), high spatial resolution (30 m) SOM content map of the desertified land in northern China was produced, which shows obvious advantages over existing SOM products, thus providing key data support for monitoring and combating desertification.
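As a rough illustration of the GEE-based modelling step, the sketch below trains a Random Forest in regression mode on quarterly NDVI/DFI covariates; the asset names, band layout, and sample collection are hypothetical placeholders, not the study's actual assets.

    import ee
    ee.Initialize()

    # Hypothetical inputs: field samples with an 'SOM' property, and an image
    # whose 8 bands are the quarterly NDVI and DFI composites (Q1-Q4).
    samples = ee.FeatureCollection("users/example/som_samples")
    covariates = ee.Image("users/example/ndvi_dfi_quarterly")

    training = covariates.sampleRegions(collection=samples, scale=30)

    rf = (ee.Classifier.smileRandomForest(numberOfTrees=500)
            .setOutputMode("REGRESSION")
            .train(features=training,
                   classProperty="SOM",
                   inputProperties=covariates.bandNames()))

    som_map = covariates.classify(rf)  # 30 m SOM-content prediction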
Soils are a key natural resource for realising several UN Sustainable Development Goals. Consistent global soil information is required to underpin a wide range of assessments, such as soil and land degradation, climate change mitigation and adaptation, food security, sustainable land management, and environmental conservation. Recent Earth Observation products have become available for integration into soil and land monitoring. With the development of digital soil mapping, the use of quantitative information on soil properties has become more relevant, and regional, national and global maps of most basic soil properties now exist. Yet most applications of soil data require information on soil functions. We developed a framework for mapping soil functions at the global scale, using erodibility and soil carbon sequestration potential as examples. The soil information was provided by SoilGrids, global soil property maps at six standard depths and 250 m resolution. We used simplified models, applicable in different pedo-climatic regions, to derive soil functions from basic soil properties. To support sustainable land management, we provide an indication of areas at low/high risk of soil degradation. The use of Earth Observation products is key to this assessment, both to produce soil property maps and to monitor land use. The modelling framework offers great flexibility and may be applied to a diverse set of models to generate soil information products tailored to specific applications in support of soil security. We highlight some of the challenges of assessing soil functions at the global scale. Finally, we discuss the pros and cons of using diagnostic properties and horizons, together with quantitative soil properties, in assessments of ecosystem services.
Soil sealing – the process of covering ground with impermeable material – is one of the main causes of land degradation in the European Union. Reducing ongoing human-induced (artificial) soil sealing and achieving land degradation neutrality are therefore of high community interest, since soil sealing threatens the availability of fertile soils and groundwater reservoirs for future generations. In Austria, for example, 11.5 hectares are sealed every day (three-year average; analysis based on cadastral data and including methodological time lags). Based on an Austrian political decision, this massive land consumption must be reduced to 2.5 hectares per day by the year 2030.
To meet the goals for a reduction in land degradation, repeated and spatially explicit monitoring of soil sealing is crucial for reporting, decision support, and spatial management and planning. However, a standardized approach for a systematic, up-to-date, area-wide assessment of soil sealing and of related indicators for reporting is still not available. Although there is a clear international mandate for reliable and repeatable national-level identification and monitoring of land degradation (UN SDG 15.3), existing methods do not serve national requirements, or are not mature enough to inform decision-makers or shape policy.
Earth observation (EO) satellite data are a valuable source for monitoring approaches, since large areas can be covered regularly with a ground sampling distance (GSD) down to decimetres (cf. very high resolution (VHR) satellite imagery). Compared to VHR imagery, Copernicus Sentinel-2 data offer a lower spatial resolution (up to 10 m), but they are cost-free and captured regularly, with an acquisition frequency of 3-5 days over Austria. Dense time series of observations, such as those provided by the Sentinel-2 mission, are a key requirement for producing reliable, up-to-date information products that enable continuous monitoring of land surface changes.
The recently started project “Soil sealing identification and monitoring system” (SIMS; funded by the Austrian Research Promotion Agency; sims.sen2cube.at) aims to evaluate the suitability of Sentinel-2-based soil sealing detection using the worldwide unique semantic EO data cube (Sen2Cube.at), and how it can be improved for use in reporting, decision support, and spatial management and planning at national and provincial level. The goals of the project are as follows. 1) Enable Austrian public authorities to use free and open multi-temporal Copernicus Sentinel-2 imagery as analysis-ready data and to integrate it into their daily workflows. 2) Detect and monitor sealed surfaces on demand by conducting fully transparent, reproducible, transferable and scalable ad-hoc semantic data-cube queries, without the need for training samples, within a web-browser-based graphical interface. 3) Enable non-EO-experts to develop and conduct custom multi-temporal (inter- and intra-annual) EO-based analyses using semantically enriched EO data (e.g. categories of vegetation, sealed surfaces, water). 4) Integrate dynamic Sentinel-2 multi-temporal derivatives and information products to enrich existing VHR monitoring approaches, thus addressing the need for very high spatial resolution information for reporting, decision support, and spatial management and planning.
Initial promising products have been developed or are on the roadmap for future implementation, involving public authorities so as to include their specific needs. The products are: 1) static products showing the current status, including time-series-informed land cover; 2) hot-spot layers indicating areas of permanent change (sealing or de-sealing between years or over shorter periods) that can be queried, analysed or requested on the fly by the authorities themselves using a tailored interface; 3) integration of different kinds of landscape partitions (e.g. agricultural field parcels, cadastral data, zoning plans) for location-centred change / no-change queries, thus 4) highlighting areas where the thematic data sets need updating. However, the detection sensitivity of the developed semantic models with respect to the spatial resolution of Sentinel-2 and the quality of the time-series data still needs to be assessed and fine-tuned, and the resulting limitations communicated to the users.
Soil continues to be sealed, and advances in data acquisition (e.g. Copernicus) and infrastructure technology (e.g. cloud-based systems, EO data cubes) have changed what can be regularly identified and monitored; however, the analytical and daily operational workflows for assessing soil sealing have not yet adapted to these advances. The SIMS project, with its aim of a prototypical implementation, serves as an initial stepping-stone addressing the issues discussed and bringing national and provincial authorities closer to up-to-date EO-based solutions for monitoring soil sealing tailored to their needs.
With the growing availability of Earth observations and data sets, computing resources, open-access tools and user-friendly interfaces that provide unique opportunities to improve the assessment and monitoring of land degradation and Sustainable Land Management (SLM), there is also a growing need to better understand how these various methods and tools can be linked, verified, improved and applied. Countries are challenged to report and monitor progress towards Land Degradation Neutrality (LDN) at the national level using, at minimum, the three LDN (sub-)indicators: land cover change, land productivity dynamics and soil organic carbon change. However, to achieve LDN, interventions are implemented at the field level, where planners and decision makers need to base land management decisions on locally validated indicators, building on previous experience of good practices. Improved insight into which indicators to choose at different scales, and how to link them to remote sensing analyses, remains a challenge for ensuring evidence-based decisions. In this work we present the results of implementing an integrated workflow to verify land degradation trends from field to national scales in Colombia. The country presents a great diversity of ecosystems, soil types and bioclimatic zones, including arid and semi-arid zones as well as mountain ecosystems and rainforests, providing an ideal, globally applicable framework for field verification of tools and their results, including Trends.Earth, the WOCAT SLM database, LandPKS, and assessment results from national partners. The framework was tested at the field scale in different agro-ecological zones in Colombia, with national and international experts, providing case studies for scaling the tools and workflow to other countries. The integration of national protocols for assessing different types of land degradation (e.g. soil erosion and salinization), as well as the collection of land condition and management information on the ground, were fundamental to achieving coherence among assessments across scales. The results obtained in a wide range of agro-ecological zones and land management practices provide key insights for other countries into the assessment and reporting of LDN.
Cocoa growing is a dominant activity in humid West Africa, supporting the economic and social life of millions of smallholder farmers. Since cocoa is mainly grown in pure stands, it is also the main driver of deforestation and of encroachment into protected areas, with huge environmental impacts (carbon emissions, biodiversity loss and soil degradation). Cocoa agroforestry systems have therefore been promoted to mitigate these impacts, which requires their clear and accurate delineation in the landscape to support valid monitoring. The aim of this research is thus to model the spatial distribution of uncertainty in the classification of agroforestry systems with remote sensing data. The study was carried out in the south of Côte d’Ivoire, in a region close to the Taï National Park. The analysis was conducted in three steps: (i) image classification was carried out by means of texture parameters and vegetation indices from Sentinel-1 and -2 data, training a random forest algorithm to generate a classified map with the associated class-probability maps; (ii) Shannon entropy was calculated from the probability maps, followed by the generation of error maps using thresholds of 0.2, 0.3, 0.4 and 0.5 as well as a threshold value derived from field plots; the error maps were used to remove pixels associated with high levels of error from the classified map; (iii) the generated error maps were analysed using Geographically Weighted Regression (GWR) to check for non-stationarity. An overall accuracy of 0.94 and a kappa of 0.92 were obtained for the classification map, with the cocoa class showing the lowest user's (0.91) and producer's (0.88) accuracies compared to the other classes. The entropy analysis showed that a small threshold value detects more error on the map (58.3% for thr = 0.2), corresponding to type I error, whereas a larger threshold results in less detected error (38.5% for thr = 0.5), corresponding to type II error; the optimal value was derived from the field plots. The analysis showed no evidence of spatial autocorrelation except for the smallest entropy threshold value (thr = 0.2), where the relationships between features varied across the map, with some global and some local features. The approach was able to remove misclassified pixels in the agroforestry systems under investigation, especially in cocoa plantations. However, the outcomes depend on the classification results, since this is mainly a post-classification methodology.
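The entropy step can be illustrated with a short numpy sketch that computes per-pixel Shannon entropy from the random-forest class-probability stack and masks pixels above a chosen threshold; the normalisation and the example threshold are assumptions for illustration, not the study's exact computation.

    import numpy as np

    def shannon_entropy(probs):
        """probs: array (n_classes, rows, cols) of class probabilities summing to 1."""
        p = np.clip(probs, 1e-12, 1.0)       # avoid log(0)
        h = -np.sum(p * np.log(p), axis=0)   # entropy in nats
        return h / np.log(probs.shape[0])    # normalised to [0, 1]

    # Example: flag pixels whose (normalised) entropy exceeds a threshold,
    # analogous to the 0.2-0.5 thresholds tested in the study.
    # probs = ...                            # stack of RF probability rasters
    # uncertain = shannon_entropy(probs) > 0.3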
The analysis of vegetation structure in savanna ecosystems is an important task for assessing the amount of above-ground biomass (AGB) and its role in the global carbon cycle (Naidoo 2015). Further, the use of digital elevation models (DEMs) has proven crucial in numerous studies related to savanna ecosystem research. Height-related parameters are a valuable source of information, as they are fundamental variables for a large variety of fields such as ecology, hydrology, agriculture, geology, pedology, and geomorphology (Pulighe & Fava 2013). Within the Kruger National Park (KNP), South Africa, freely available reference information is scarce, especially reliable height data at (very) high spatial resolution. However, such data sets are of exceptional value for studies conducted in this area, and insufficient spatial resolution of the chosen input data is often a limiting factor in local- to regional-scale ecosystem analysis.
The elevation models and orthorectified imagery created in this study represent the first wall-to-wall digital elevation data sets produced for the Kruger National Park at very high spatial resolution. Using colour-infrared (CIR) aerial imagery from the archives of the Chief Directorate: National Geo-spatial Information (CDNGI) aerial acquisition programme of the Department of Agriculture, Land Reform and Rural Development (DALRRD), we created digital surface models (DSMs), digital terrain models (DTMs) and CIR orthomosaics covering the entire KNP with a nominal ground sampling distance of 0.25 m. Elevation information was derived using state-of-the-art stereo matching that applied semi-global matching (SGM) as the cost aggregation function throughout the image pairing, using the Enterprise software from CATALYST.
To validate the digital terrain models, we utilized differential GNSS measurements (Baade & Schmullius 2016). The accuracy of the digital surface model was evaluated against very high-resolution elevation models calculated from drone imagery acquired across the KNP in 2020 by a team from Harvard University led by Prof. Andrew Davies. Validation against these reference data indicates very good correspondence, with R² values of 0.99. Further, the validation of the DTM and DSM revealed an absolute vertical height error (LE90) across all sites of 1.02 m and 2.58 m, respectively. The orthomosaics were validated with in situ ground control points (GCPs), exhibiting a horizontal Circular Probable Error (CPE) of 1.37 m.
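For readers unfamiliar with the reported metrics, the sketch below shows one conventional way to compute them from check-point residuals (LE90 as the 90th percentile of absolute vertical errors, CPE as the radius containing 50% of the horizontal errors); the exact formulation used in the study may differ.

    import numpy as np

    def le90(dz):
        """Vertical accuracy: 90th percentile of absolute height errors (m)."""
        return np.percentile(np.abs(dz), 90)

    def cpe(dx, dy):
        """Circular Probable Error: radius containing 50% of horizontal errors (m)."""
        return np.percentile(np.hypot(dx, dy), 50)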
The presented data sets represent KNP's first wall-to-wall sub-meter resolution height models. They may serve as a unique reference for studies carried out in this complex ecosystem and will be made freely available, at a pixel posting of 25 cm, to everyone interested, fostering further scientific studies in the African science community and beyond.
In a globalized context increasingly impacted by climate change, demographic studies would gain from taking environmental data into account and from being carried out at the transnational level. However, this is not always possible in Sub-Saharan Africa, as matching harmonized demographic and environmental data are seldom available. The large amount of data regularly acquired since 2015 (in 2019 alone, the European Space Agency's Sentinel satellites produced 7.54 PiB of open-access data) is an opportunity to produce relevant standardized indicators at the global scale. Several indicators have been developed to help understand geographical realities in a consistent (i.e., not location-dependent) manner. Among them, local climate zones (LCZ) have been proposed by WUDAPT (World Urban Database and Access Portal Tools) to systematically label urban areas [2]. The goal is to provide an open-access map of the world following this legend that can later be used by researchers for a wide range of studies. Such data have been used to study energy usage [1], climate [3], geoscience modeling [10] and land consumption [5]. A substantial amount of work has been dedicated in recent years to the automatic generation of such data from sensors such as Landsat 8 or Sentinel-2. In a research competition organized by the IEEE IADF, several methods were proposed to map LCZ from Landsat, Sentinel-2 and OpenStreetMap data [11]. Another recent study focused on the use of Convolutional Neural Networks (CNNs) to automatically map LCZ using deep learning [7], and a large-scale benchmark dataset was proposed in [12], with an attention-based CNN as a baseline. However, these works mostly focused on developed urban areas. For instance, the challenge of [11] targeted Berlin, Hong Kong, Paris, Rome, São Paulo, Amsterdam, Chicago, Madrid, and Xi’An. This is problematic, as developed cities are generally well mapped through governmental censuses, and spatial generalization of machine-learning-based methods is a challenge [6]. It is therefore necessary to develop methods adapted to developing areas [9]. In this work, we explore different methods to predict LCZ from Sentinel-2 data. The originality of the approach is to train a convolutional-network-based model (ResNet34 [4]) on clusters of data representing morphological features similar to our target city (Ouagadougou, Burkina Faso). To select relevant cities as training data, we use the classification proposed in [8] and intersect the relevant cities with those represented in the large-scale LCZ dataset [12]. As such, our dataset is composed of areas covering Karachi and Islamabad (Pakistan), Cairo (Egypt) and Hong Kong (China).
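A minimal PyTorch sketch of this kind of model setup is given below: a ResNet34 [4] with its input stem adapted to 10-band Sentinel-2 patches and its head adapted to the 17 LCZ classes (following the So2Sat LCZ42 convention [12]); the patch size, band count and training details are assumptions, not the exact configuration of the experiments.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet34

    model = resnet34(weights=None)  # torchvision >= 0.13 API
    # Replace the 3-channel RGB stem with a 10-channel one for Sentinel-2 input
    model.conv1 = nn.Conv2d(10, 64, kernel_size=7, stride=2, padding=3, bias=False)
    # 17 local climate zone classes (10 built types + 7 natural types)
    model.fc = nn.Linear(model.fc.in_features, 17)

    x = torch.randn(8, 10, 32, 32)  # a batch of hypothetical 32x32-pixel S2 patches
    logits = model(x)               # shape: (8, 17)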
Preliminary results show that ResNet34 [4] achieves good performance when trained on images representing similar morphological features (overall accuracy: 94%). We performed LCZ classification of Ouagadougou with this model, which yields two main findings:
• ResNet34 [4] generalizes to an unseen area that has morphological features similar to the training areas.
• Some LCZ classes are sensitive to seasonal variation. In particular, we observed that some classes that do not contain vegetation (i.e., expected to be invariant across seasons) were not predicted consistently across seasons. We attribute this phenomenon to the lack of seasonal change within the training dataset, which does not account for weather fluctuations.
Figure 1 shows LCZ classifications of Ouagadougou according to season. Results are globally consistent, and seasonal misclassifications are more frequent on the outskirts of the city. Red classes are buildings and are therefore expected to be invariant across seasons. Beyond the seasonal effects, this result highlights the need to generate training data for rural areas, as training on urban areas does not appear to generalize well when inferring on rural areas.
To study the correlation between population data and LCZ, we cross-referenced Ouagadougou's population density data with our classification results (Figure 2) and investigated the population density per LCZ class. As expected, the compact mid-rise class is associated with high population density, compact low-rise classes with lower population density, and natural areas are predicted as unpopulated, which validates the results of our model.
References:
[1] Paul John Alexander, Gerald Mills, and Rowan Fealy. Using lcz data to run an urban energy balance model. Urban Climate, 13:14–37, 2015.
[2] Benjamin Bechtel, Paul J Alexander, Jürgen Böhner, Jason Ching, Olaf Conrad, Johannes Feddema, Gerald Mills, Linda See, and Iain Stewart. Mapping local climate zones for a worldwide database of the form and function of cities. ISPRS International Journal of Geo-Information, 4(1):199–219, 2015.
[3] Jan Geletič, Michal Lehnert, Petr Dobrovolný, and Maja Žuvela-Aloise. Spatial modelling of summer climate indices based on local climate zones: expected changes in the future climate of Brno, Czech Republic. Climatic Change, 152(3-4):487–502, 2019.
[4] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
[5] Jingliang Hu, Yuanyuan Wang, Hannes Taubenböck, and Xiao Xiang Zhu. Land consumption in cities: A comparative study across the globe. Cities, 113:103163, 2021.
[6] Emmanuel Maggiori, Yuliya Tarabalka, Guillaume Charpiat, and Pierre Alliez. Can semantic labeling methods generalize to any city? The Inria aerial image labeling benchmark. In 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 3226–3229. IEEE, 2017.
[7] Chunping Qiu, Michael Schmitt, Lichao Mou, Pedram Ghamisi, and Xiao Xiang Zhu. Feature importance analysis for local climate zone classification using a residual convolutional neural network with multi-source datasets. Remote Sensing, 10(10):1572, 2018.
[8] Hannes Taubenböck, Henri Debray, Chunping Qiu, Michael Schmitt, Yuanyuan Wang, and Xiao Xiang Zhu. Seven city types representing morphologic configurations of cities across the globe. Cities, 105:102814, 2020.
[9] John E Vargas-Muñoz, Sylvain Lobry, Alexandre X Falcão, and Devis Tuia. Correcting rural building annotations in openstreetmap using convolutional neural networks. ISPRS journal of photogrammetry and remote sensing, 147:283–293, 2019.
[10] Hendrik Wouters, Matthias Demuzere, Ulrich Blahak, Krzysztof Fortuniak, Bino Maiheu, Johan Camps, Daniël Tielemans, and Nicole Van Lipzig. The efficient urban canopy dependency parametrization (SURY) v1.0 for atmospheric modelling: description and application with the COSMO-CLM model for a Belgian summer. Geoscientific Model Development, 9(9):3027–3054, 2016.
[11] Naoto Yokoya, Pedram Ghamisi, Junshi Xia, Sergey Sukhanov, Roel Heremans, Ivan Tankoyeu, Benjamin Bechtel, Bertrand Le Saux, Gabriele Moser, and Devis Tuia. Open data for global multimodal land use classification: Outcome of the 2017 IEEE GRSS Data Fusion Contest. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(5):1363–1377, 2018.
[12] Xiao Xiang Zhu, Jingliang Hu, Chunping Qiu, Yilei Shi, Jian Kang, Lichao Mou, Hossein Bagheri, Matthias Häberle, Yuansheng Hua, Rong Huang, et al. So2Sat LCZ42: A benchmark dataset for global local climate zones classification. arXiv preprint arXiv:1912.12171, 2019.
Each year, the total number of forcibly displaced persons (refugees, asylum seekers and internally displaced persons) continues to rise. Both long-term humanitarian planning and sustainable camp management require reliable and comprehensive information, provided continuously during a crisis. Earth Observation (EO) solutions, including multi-mission approaches, provide essential support to operational services, responding to stakeholder needs.
Two EO-based projects form the basis of the work presented here: (I) ARICA investigates the interaction between the environment and camp inhabitants using a time series of HR/VHR satellite imagery together with in-depth interviews with inhabitants and with government and NGO representatives living/working in camps located in Africa (Tanzania, Kenya, South Sudan), the Middle East (Iraq) and Asia (Bangladesh); (II) EOTiST focuses on the evaluation of ecosystem services in selected areas using satellite data.
The main focus of this presentation is the investigation of land cover changes that occurred in the vicinity of the Mtendeli refugee camp. The region of interest is located in north-western Tanzania, in the Kakonko district of the Kigoma region (30°53'15"E, 3°25'36"S). To analyse the changes, a set of yearly land cover classifications was performed for the period 2015-2021. The classification methodology is based on the S2GLC approach (Malinowski et al., 2020), which uses up to twenty of the least cloudy Sentinel-2 images for each year. A random forest classifier is applied to the individual images, and the results are then combined in a procedure called aggregation. Additionally, to further improve the results, synthetic aperture radar data from Sentinel-1 are incorporated as four quarterly averaged backscatter layers resolving the seasonal variability of the vegetation. The training dataset for the classification is created from the Copernicus Global Land Cover database, spectral indices, and manually selected training samples. Evaluation of the results will be supported by ground truth data collected during a field trip to the site.
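As an illustration of the Sentinel-1 component, the following GEE sketch derives four quarterly mean VV backscatter layers for one year around the camp; the collection and band names are the standard GEE ones, while the exact compositing (polarisations, statistic) used in the study may differ.

    import ee
    ee.Initialize()

    # Approximate camp location taken from the coordinates given above
    aoi = ee.Geometry.Point(30.8875, -3.4267).buffer(20000)

    s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
            .filterBounds(aoi)
            .filter(ee.Filter.eq("instrumentMode", "IW"))
            .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "VV"))
            .select("VV"))

    def quarterly_mean(year, quarter):
        start = ee.Date.fromYMD(year, 1 + 3 * (quarter - 1), 1)
        return (s1.filterDate(start, start.advance(3, "month"))
                  .mean()
                  .rename(f"VV_Q{quarter}"))

    # Four quarterly backscatter layers to stack with the Sentinel-2 features
    stack = ee.Image.cat([quarterly_mean(2020, q) for q in [1, 2, 3, 4]])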
According to the obtained results, the most significant change is deforestation and the transformation of wild, unmanaged areas into a more organised landscape and agricultural areas. With the joint effort of socio-geographical and geospatial analysis, it should be possible to establish whether the environmental changes were caused mostly by the presence of refugees or by other factors as well, especially since the analysed period includes the time before the establishment of the camp.
Our research, complemented by further studies, i.e., the aforementioned high-resolution data-driven analyses, interdisciplinary analyses incorporating in-depth interviews, and a sensitivity analysis of the classification model, will help to develop an approach that provides comprehensive and reliable results relevant to stakeholders' needs.
The obtained results will be published on an online geoplatform and made available to students, researchers and stakeholders as a reference point for any activity related to interactions with refugee/IDP camp inhabitants. This will hopefully result in recommendations for managing camps in the future.
Research performed under ARICA project: NOR/POLNOR/ARICA/0022/2019-00, Norway Grants, POLNOR2019, co-financed from the State budget, Applied Research.
This work was also supported by the European Union's Horizon 2020 research and innovation programme under the EOTiST project, grant agreement No 952111.
References:
Malinowski, R., Lewiński, S., Rybicki, M., Gromny, E., Jenerowicz, M., Krupiński, M., Nowakowski, A., Wojtkowski, C., Krupiński, M., Krätzschmar, E., & Schauer, P. (2020). Automated production of a land cover/use map of Europe based on Sentinel-2 imagery. Remote Sensing, 12(21), 1–27. https://doi.org/10.3390/rs12213523
Current advances in Earth observation facilitate operational mapping of cropland extent, crop types, field sizes, and other land management parameters in the consolidated agricultural systems of many world regions. In contrast, such mapping approaches in smallholder agricultural systems are still challenged by three key system characteristics. First, the spatially fragmented nature of smallholder systems, often featuring field sizes below one hectare, irregular shapes, and a lack of clear field boundaries, requires very high spatial detail. Second, a diverse set of cultivation practices and a large gradient of land management opportunities create a heterogeneous picture of cropland and land use intensity, ranging from extensive shifting cultivation in agricultural frontiers, to labor-intensive household farming on constrained land with no access to inputs, to medium-size farms with potential access to mechanization, inputs, or irrigation infrastructure. Third, the swift dynamics of croplands in smallholder systems, where cultivated lands can be left fallow after only a few years of cultivation and regrowth of herbaceous and woody vegetation occurs rapidly, require timely mapping approaches.
Recent developments in the Earth observation domain open novel opportunities to overcome these challenges and thus to provide spatially precise, thematically detailed, and timely information on smallholder agricultural systems, which can facilitate decision-making across the globe. First, Norway's International Climate And Forests Initiative (NICFI) Imagery Program recently granted public access to bi-annual (December 2015 – August 2020) and monthly (September 2020 onwards) mosaics of PlanetScope surface reflectance data at ~5 m spatial resolution covering the world's tropical belt. This unprecedented initiative provides a dataset with the potential to overcome the key challenges in mapping smallholder agricultural systems. Second, cloud-computing platforms such as Google Earth Engine offer the processing infrastructure to analyze such large datasets at no cost for research applications. The provision of the NICFI PlanetScope data in Google Earth Engine opens huge opportunities for generating timely and spatially detailed maps of agricultural systems across large areas at low cost.
This work explores these emerging opportunities by investigating the use of a NICFI PlanetScope mosaic time series for mapping agricultural systems across four provinces in Northern Mozambique (Niassa, Zambezia, Nampula, and Cabo Delgado). The key objectives are 1) to accurately map active cropland extent for the 2020/2021 growing season across the entire study region (~400,000 km²); 2) to compare the resulting cropland extent map with a set of currently available land cover products for the region; 3) to discuss opportunities, strengths, and caveats of the available NICFI mosaics for large-area cropland mapping in smallholder systems; and 4) to locally assess a set of land-use intensity proxies for mechanization and irrigation, integrating field-based reference data to further disentangle the complex characteristics of smallholder agricultural systems in Sub-Saharan Africa.
Understanding forest disturbance patterns as a response to precipitation seasonality assists the study of forest disturbance under a changing climate. This is particularly important for the African rainforest, where various climate models have projected increasing drought duration and frequency and changes in dry seasons. Most studies assessing the relationship between precipitation and forest disturbance depend on annual forest disturbance data derived from optical remote sensing imagery, or were conducted on data averaged across a large region, neglecting important spatial variation in precipitation and forest disturbance. Relying on annual information does not allow a detailed assessment of forest disturbance seasonality or of how this seasonality is affected by precipitation seasonality. Additionally, persistent cloud cover in the African tropics often reduces the availability of optical satellite imagery, resulting in omission errors or strongly delayed detection of forest disturbances. The missing temporal detail and potential detection delays are a research gap that can be tackled with radar-based forest disturbance maps. Temporally dense and spatially detailed forest disturbance information can be derived from the cloud-penetrating Copernicus Sentinel-1 radar satellites, which for the first time provide the level of temporal detail needed to investigate the inter-annual relationships between precipitation and forest disturbance in the African rainforests.
In this study, we combine monthly precipitation and monthly forest disturbance time series to assess how forest disturbances respond to precipitation seasonality in the African rainforest, and to what extent accessibility affects this relationship. We applied cross-correlation functions to monthly precipitation and forest disturbance time series for 2019 and 2020 at a 0.5° grid cell level. We used the magnitude of the correlation and the time lag to assess the inter-annual relationship between precipitation and forest disturbance, and introduced the forest edge-interior ratio and travel time to cities as accessibility proxies to explain the spatial variation of the relationship.
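The per-grid-cell analysis can be illustrated with a small numpy sketch that finds the lag (in months) at which the correlation between the two standardised series peaks in magnitude; the treatment of significance testing and edge months in the actual study may differ.

    import numpy as np

    def ccf_peak(precip, disturb, max_lag=6):
        """Return (lag, r) with the strongest |r| within +/- max_lag months."""
        p = (precip - precip.mean()) / precip.std()
        d = (disturb - disturb.mean()) / disturb.std()
        best = (0, 0.0)
        for lag in range(-max_lag, max_lag + 1):
            if lag < 0:
                r = np.corrcoef(p[:lag], d[-lag:])[0, 1]
            elif lag > 0:
                r = np.corrcoef(p[lag:], d[:-lag])[0, 1]
            else:
                r = np.corrcoef(p, d)[0, 1]
            if abs(r) > abs(best[1]):
                best = (lag, r)
        return best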
Results revealed that a significant negative correlation between forest disturbance and precipitation dominates (79%) the study region. The majority of the significantly negatively correlated grid cells (78%) have a time shift within one month, meaning that forest disturbance peaks one month before or after the driest month(s) of the year. We found that significantly negatively correlated grid cells lie on average closer to cities, with overall smaller variation in travel time to cities (400 ± 274 min), compared to non-significantly (556 ± 446 min) and significantly positively (797 ± 634 min) correlated grid cells. Stronger negative relationships generally occurred further from the edges of the intact rainforest and from the equator, in forests that were more remote and less fragmented than in regions with a weaker negative relationship. A strong negative relationship implies that forest disturbance seasonality is more closely tied to precipitation seasonality, with disturbance activities more likely to be carried out exclusively in the drier months. The findings suggest that less accessible areas are likely to be more influenced by precipitation change. Few areas (2%) showed a significant positive correlation, mainly resulting from disturbances caused by natural phenomena such as flooding. Areas with a non-significant correlation occupied 19% of the land, mainly near the equator or at the forest edge, and are explained by complex driving forces of forest disturbance or by weak seasonality in precipitation or disturbance. Moreover, our results suggest a clear spatial pattern in the time lag between the monthly precipitation and forest disturbance time series, which might be linked to particular forest disturbance processes in the African rainforest.
The analysis of the inter-annual relationship between precipitation and forest disturbance paves the way towards understanding the complex mechanisms that underlie forest loss and climate change. This information will provide insights for forest management in the African rainforest, helping to predict when and where hotspot or high-risk areas will occur under the changing climate.
Keywords: Precipitation seasonality, forest disturbance seasonality, Sentinel-1, Sub-Saharan Africa, rainforest
Reference:
Gou et al., Inter-annual relationship between precipitation and forest disturbance in the African rainforest. (in prep).
Since 2009, the Democratic Republic of the Congo (DRC) has been engaged in the Reducing Emissions from Deforestation and Forest Degradation (REDD+) process. The implementation of its national Forest Reference Emission Level (FREL) and National Forest Monitoring System (SNSF) revealed the challenges of estimating activity data and related emissions on a regular basis. Developing robust, low-latency methods for national and sub-national jurisdictional assessments of forest carbon is needed to make financial compensation from CO2 emissions reduction programs possible.
The quantification and spatio-temporal monitoring of national and sub-national forest-related dynamics largely rely on remotely sensed data. For more than a decade, the Landsat image archive has been fully and freely available to anyone for developing and evaluating forest monitoring methods, including the first global high-resolution map of annual tree cover change by Hansen et al. (2013). Such data can facilitate the quantification of activity data given their long historical record. The DRC national and sub-national REDD+ strategy has been constructed around pre-defined intervals for baseline, intermediate and performance periods that may change over time, resulting in the need for inputs such as Landsat to facilitate emissions estimation.
Here, we present a 20-year baseline in support of activity data and related emissions estimation for the DRC, prototyped at the sub-national jurisdictional level in the provinces of Mai-Ndombe (DRC) and Likouala-Sangha (Republic of Congo). The use of dense time series over a stratified random sample to quantify forest-related dynamics takes advantage of the fully available Landsat record and provides more robust, and definitive, estimates with the flexibility for adjustment. The methodology uses an unbiased estimator of known uncertainty and rigorously applies SOPs and QA/QC procedures, with a team of image interpretation experts assessing spectrally consistent reference data to estimate activity data and related emissions for the defined baseline, intermediate and performance periods. A stratified random sample is drawn based on existing forest type, extent, and change maps, and the per-stratum sample pixels are interpreted on an annual basis using detailed reference data to estimate the areas of the forest change classes of interest.
To ensure an adequate transfer of capacity and guarantee national ownership of the methods and results, experts from the DRC are fully engaged in the process. They contributed to the protocol for the interpretation and collection of reference data as well as to the QA/QC procedure. All have been trained in the preparation of the data and maps for stratification, the construction of the stratification framework, and the generation of the statistical estimates, in collaboration with the USFS (US Forest Service) and OSFAC (Observatoire Satellital des Forêts d'Afrique Centrale) under the USAID-CARPE program.
Keywords: Democratic Republic of the Congo, REDD+, time series, Landsat, activity data, emissions, stratified random sample, forest.
Digital Earth Africa (DE Africa) operates a digital infrastructure for Africa that aims to provide free and open access to Earth observation (EO) data and services for all, build capacity across Africa to use EO-based insights to address sustainable development challenges, and empower country-level climate action.
DE Africa uses cloud-optimized, analysis-ready datasets, including Sentinel-2, Sentinel-1 and Landsat, to generate operational services efficiently for the entire continent. By early 2022, DE Africa will have four continental services providing decision-ready information: dynamic water extent (Water Observations from Space), cropland extent, Fractional Cover, and monthly NDVI Anomaly. In addition, DE Africa's GeoMAD service provides annual or semiannual cloud-free surface reflectance and variability measures that can be used for visualization, land cover mapping and change detection.
DE Africa is driven by the needs of our users. With a Program Management Office based in South Africa, we work with six implementing Partners that reach more than 40 countries across Africa. All DE Africa services are co-developed with partners to ensure they are fit for purpose and can be used to support decision making.
Capacity building is a critical component of our program, and we are building an active community across Africa through training and ongoing support for our users. Our free, cloud-based analysis environment allows anyone to explore, learn and develop prototype applications using EO data. As of November 2021, more than 1,300 users have registered to use our platform and more than 200 users have been trained and certified. With the launch of our new bilingual training platform and Help Desk, we expect to further grow our diverse and engaged user community. In 2022, more thematically focused training modules will be built with our partners.
By providing easy access to data and insights, DE Africa aims to drive innovation across sectors. We have consulted over 40 companies across Africa to understand the barriers, opportunities and needs of the private sector. Two companies have won the DE Africa Innovation Challenge, an opportunity to take part in a 3-month incubator program that concludes in early 2022. DE Africa is also supporting the 2021 Africa EO Challenge.
By measuring and monitoring changes to the natural environment, including coastal erosion and inundation, degradation of water quality in rivers and lakes, monitoring of grasslands, croplands and forest cover, DE Africa enables insight-driven action on a number of fronts.
In this presentation, we will provide an overview of the program, the broad range of activities we are undertaking and the opportunities we would like to explore with more partners across Africa and globally. In particular, we will highlight DE Africa’s unique proposition as a tool to empower decision makers across Africa to take action against climate change.
The African continent is often seen as the continent of potential. This unique and special region is known for skipping steps in development and leapfrogging ahead of some of its counterparts.
Technology is one such area. While Africa still has minimal fixed-line telephone infrastructure, space-based technology is emerging to deliver socio-economic benefits to African countries. Many African countries have had space ambitions for decades, and capacity building in the space sector is one of the main motivations of their national space programs.
Countries across Africa have been using satellite imagery for many years, but now, with the rapid development of technology and specifically the Internet of Things (IoT), some countries are positioning themselves for this next stage. Without the financial outlay of buying and launching satellites, data from existing constellations can be accessed directly. Earth observation satellites collect optical/SAR imagery, while IoT satellites collect ground terminal data. IoT data from ground terminals can be updated as often as every 30 minutes, while EO imagery can be updated daily. The combination of EO and IoT data can be used in many different scenarios.
A case in point is the Ethiopian Space Science and Technology Institute (ESSTI), which procured a Ground Receiving Station (GRS) in 2019 through a consortium led by HEAD and its partner, the China Centre for Resources Satellite Data and Applications (CRESDA), as part of their drive to Empower AFRICA. ESSTI's main objectives in procuring this GRS were to fulfill national needs for remote sensing imagery and to build capacity in the national space sector. Through this three-year project, started in 2020, ESSTI has received training in remote sensing data processing, space-based applications, and GRS operations.
The inauguration of this ground station in May 2021 is seen by ESSTI and the Ethiopian government as a significant milestone, establishing ESSTI as the national geospatial hub for meeting national demands for satellite imagery in applications such as agriculture, national security, urban planning, water management and cadastral mapping. This GRS is also the first commercial GRS installed in Sub-Saharan Africa with the capability to receive both optical and radar Very High-Resolution (VHR) imagery collected by Chinese commercial and civilian satellites.
This abstract will show how Ethiopia took advantage of its new GRS for national capacity building, and how the ability to download imagery on demand can empower Ethiopia in quick decision-making for emergency situations, such as the ongoing war in the region, while creating a basis for future growth in the geospatial sector. A strong geospatial basis will also help address other challenges faced by Ethiopia: droughts, floods and food shortages can be tackled through the monitoring of agricultural lands, which provides data on which to act when putting plans in place to combat these phenomena. We live in a time of great challenges, with climate change, the COVID pandemic and instability in many regions of the world; access to information that can help countries and organizations prepare, plan and monitor is crucial.
The Grand Saloum extends from the Saloum estuary (in Senegal) and its borders to the northern limits of the Gambia River. It is located between 13°23'20'' and 14°14'10'' North latitude and 16°00'00'' and 15°51'40'' West longitude. It is characterized by a vast coastal plain cut by a dense hydrographic network and populated by mangrove formations. The region is dominated by the Saloum estuary, which shelters particular littoral mangrove formations drained by a multitude of bolongs, with almost no freshwater input due to rainfall variability.
The objective of this study is to analyze the spatio-temporal dynamics of its mangrove ecosystems from 1986 to 2020. The methodology is based on the exploitation of Landsat satellite images using machine learning techniques on the Google Earth Engine platform.
To better understand the temporal dynamics of land use in the area, the Average Annual Spatial Expansion Rate (AASER) was calculated, using one of the formulas applied by [22], cited by [23], where the variable considered is the surface area (S). The index is calculated as:

T = [ln(S2) - ln(S1)] / (t * ln(e)) * 100    (1)

where S1 and S2 correspond respectively to the area of a land-use category at dates t1 and t2, t is the number of years between t1 and t2, and e is the base of natural (Napierian) logarithms (e = 2.71828).
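Since ln(e) = 1, Eq. (1) reduces to T = 100 * [ln(S2) - ln(S1)] / t, which the following worked example implements (the mangrove figures are taken from the results reported below):

    import math

    def aaser(s1, s2, t):
        """Average Annual Spatial Expansion Rate, % per year (Eq. 1)."""
        return 100.0 * (math.log(s2) - math.log(s1)) / t

    # Gambian mangrove share rising from 7.16% (1988) to 8.9% (2020):
    print(round(aaser(7.16, 8.9, 32), 2))  # about 0.68 % per year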
To characterize the state of the mangrove, two vegetation indices (NDVI and VCI) are used. For each date, the NDVI is calculated to evaluate the spatial distribution of mangrove density. This index provides information on the density of the vegetation and on its ability to absorb sunlight, and therefore on its condition. It varies from 0 to 1 for vegetated surfaces such as mangrove: an NDVI close to 1 corresponds to dense vegetation in good condition, while the closer the index is to 0, the sparser the vegetation or the more degraded its condition.
The VCI, calculated from the dry-season images (January-June) of the entire series from 1988 to 2020, is an index that allows the analysis of the temporal dynamics of the mangrove [24].
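For clarity, the two indices can be written as short functions, assuming the standard definitions (NDVI from red/near-infrared reflectance; VCI as the position of the current NDVI between its long-term extremes, here on a 0-1 scale so that 0.5 corresponds to 50%):

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red + 1e-12)

    def vci(ndvi_now, ndvi_min, ndvi_max):
        """Vegetation Condition Index; min/max are per-pixel 1988-2020 extremes."""
        return (ndvi_now - ndvi_min) / (ndvi_max - ndvi_min + 1e-12)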
CHIRPS (Climate Hazards group InfraRed Precipitation with Stations) rainfall data are derived from satellite data at 0.05° x 0.05° spatial resolution, validated with in situ station data, which has allowed the creation of a global precipitation time series spanning more than 40 years [26].
The results showed an extension of the mangrove areas in the Gambian region, where the surface area increased from 7.16% in 1988 to 8.9% in 2020. In the Senegalese zone, this evolution was manifested by an increase of 1.8% between 1988 and 2000 and stability between 2000 and 2020. Change detection showed an important development of the mangrove along the Saloum during the first decade and strong growth in the Gambian part from the 2000s. The vegetation index showed a regeneration of the mangrove between 2000 and 2020.
The VCI shows the current level of degradation/regeneration compared to the average level over the study period (1988-2020). On the one hand, the peripheries of the reserves are the areas where the mangrove level is low, with a VCI below 0.5 (50%). On the other hand, there are pockets where the mangrove remains weak, notably south of Bétenti up to the PNDS at the Gambian border, and in the Kumadi estuary in Gambia. The mangrove along the banks of the channels of the Saloum Delta is in a stable state, even if weaknesses are visible in some places.
We note that the overall average level of the mangrove in 2020 is good compared to its average level between 1988 and 2020, with, however, parts where degradation is very marked. This analysis has made it possible to delimit a number of buffer sites around the conservation areas for restoration actions.
The temporal dynamics of the mangrove are strongly correlated with the evolution of rainfall.
The magnitude, distribution, and dynamics of forest above-ground biomass (AGB) are still poorly constrained (mostly in the tropics) and have been shown to have a significant impact on the land-use change component of the global carbon budget. Developing countries are currently engaging with climate change mitigation activities (e.g., REDD+), but most have strong limitations when reporting carbon emissions in the land-use change sector and would benefit greatly from robust, spatially explicit estimates of forest AGB and its dynamics. L-band Synthetic Aperture Radar (SAR) data are sensitive to forest structure because of the ability of L-band microwaves to penetrate the canopy, especially in savannas characterised by lower biomass densities. Furthermore, the recent availability of spaceborne lidar observations dedicated to measuring forest structure is likely to further improve our understanding of the spatial distribution of forest AGB and its dynamics.
In this study we assess the ability of combined lidar, radar and optical data to improve the estimation of forest AGB in Mozambique, a country in Southern Africa mostly covered by savannas. Satellite data were acquired in 2018-2020 covering the entire Mozambican territory: Global Ecosystem Dynamics Investigation (GEDI) lidar, Advanced Land Observing Satellite-2 (ALOS-2) Phased Array type L-band SAR-2 (PALSAR-2), and Landsat-8 Operational Land Imager (OLI). Ground reference measurements of AGB were made between 2015 and 2018 in the context of Mozambique's National Forest Inventory. The methodology relied on (i) establishing a relationship between AGB and Lorey's height from ground observations (n=659); (ii) applying (i) to GEDI canopy height observations to obtain an extended, spatially distributed set of AGB estimates; and (iii) modelling the AGB estimates from (ii) as a function of radar (ALOS-2 PALSAR-2) and optical (Landsat-8 OLI) data. We used a data-driven algorithm (Random Forests) to retrieve AGB from predictors obtained from the satellite observations.
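A minimal scikit-learn sketch of step (iii) is shown below; the predictor and file names are hypothetical, and the actual model tuning and validation design of the study may differ.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # X: per-location predictors (e.g. PALSAR-2 HH/HV backscatter, Landsat-8 bands)
    # y: AGB (t/ha) estimated from GEDI canopy heights via the Lorey's-height model
    X, y = np.load("predictors.npy"), np.load("agb.npy")  # hypothetical files

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

    pred = rf.predict(X_te)
    rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))  # cf. the RMSE reported below
    bias = float(np.mean(pred - y_te))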
On an independent test subset, the best model had a root mean square error (RMSE), bias, and standard deviation of 21.1 t/ha (26.7%), -0.4 t/ha and 21.1 t/ha, respectively. However, a slight overestimation below 100 t/ha and an underestimation above this value were observed. Improvements are likely to come from establishing better relationships between reference AGB and Lorey's height.
Anthropogenic activities such as urbanization, intense land use and unsustainable land management practices, together with an increase in weather and climate extremes, including longer and more severe droughts and an intensification of precipitation events, lead to a strong increase in soil erosion risk. The Lake Kivu and Ruzizi River basin, located in the trans-boundary region of the Democratic Republic of the Congo, the Republic of Rwanda and the Republic of Burundi, is characterized by mountainous topography and high population density and is therefore particularly threatened by soil erosion.
We present an analysis to assess and monitor soil erosion risk parameters in the Lake Kivu and Ruzizi region utilizing Earth observation technologies. The remote sensing analysis is performed to enrich an existing comprehensive Revised Universal Soil Loss Equation (RUSLE) study carried out by our local partners for the same region. To investigate the complex setting of the study region, we focus on vegetation dynamics and uncovered soil in combination with climate data.
Low vegetation cover, for example caused by inappropriate land management or droughts, can strongly destabilize the topsoil and thereby increase its vulnerability to erosion. The vegetation dynamics in the region are analyzed using the Normalized Difference Vegetation Index (NDVI). For this, the more than 20-year-long MODIS NDVI time series is used, combined with Sentinel-2 NDVI data for the past 5 years, calculated from more than 1000 granules and aggregated to monthly medians. With the long period covered by MODIS and the high resolution of Sentinel-2, areas and time periods with low vegetation cover and increased erosion risk are detected. The ESA-CCI S2 prototype land cover (LC) map at 20 m resolution is used to relate the vegetation dynamics to land-use and land-cover classes.
In a second step, the local precipitation pattern is analyzed using the daily CHIRPS product (rainfall estimates from rain gauge and satellite observations). With these data, we are able to identify periods of drought as well as short-term heavy precipitation events. When strong precipitation and low vegetation cover affect the same region in our study area, we consider it a high-risk region.
As an additional indicator for soil erosion and its intensity, turbidity information for Lake Kivu and Lake Tanganyika is analyzed. The turbidity analysis is based on the Lake Water Quality (LWQ) product of the Copernicus Land Monitoring Service (CLMS). For exemplary cases, we will show whether an increased erosion risk, determined from precipitation and vegetation patterns, leads to an increase in lake turbidity in the following days, which can be considered evidence of soil erosion.
The analysis shows the capabilities and constraints of using remote sensing data to assess soil erosion risk parameters on a spatio-temporal scale without considering further relevant parameters such as topography or soil types. Furthermore, the comparison with land use and land cover information can support the identification of land use practices that lead to an increase in soil erosion and, hence, to a loss of fertile topsoil. The presented approach contributes to a better understanding of soil erosion risk parameters and is conducted with freely and globally available remote sensing data, which makes it scalable and transferable to other regions of the world.
Mozambique has undergone constant changes in its land use and land cover since the end of the civil war in 1992. These changes are a consequence of the drastic increase in population, agricultural expansion, and several periods of economic development and crises (Temudo and Silva, 2012). Agricultural production in this country is largely characterized by family farming (CGAP, 2016). However, the growing number of investments in large-scale commercial agriculture has also contributed to cropland expansion and land consolidation (Zoomers, 2013), often causing smallholders' displacement, marginalization, and fragmentation (Meyfroidt, 2017). The MIDLAND project aims to map field size and improve understanding of the land use dynamics between smallholders and large-scale farmers, with a particular focus on the four northern provinces of Mozambique (an area of ~400,000 km2). An initial study by Bey et al. (2020) used Landsat images to accurately map small-scale and large-scale cropland in Gurué district (5600 km2) in three different periods: 2005-2007, 2011-2013, and 2015-2017. Their results showed that smallholder crop field expansion was still much more extensive than large-scale cropland expansion (1213.8 and 57.6 km2 of expansion between 2006 and 2016, respectively). However, mapping field size over the larger region of Northern Mozambique has thus far proved challenging, as 34% of agricultural areas in Mozambique are smaller than 1 ha (CGAP, 2016) and very high-resolution data are required to map them. In addition, there is broad heterogeneity in the crop types and management practices used by smallholders. Our recent work therefore builds on newly available remote sensing datasets with higher spatial resolution, such as Sentinel-2 and PlanetScope, to improve the understanding of field size, land use, and land cover in Mozambique. Our preliminary results using PlanetScope imagery with 3.7 m spatial resolution for 2019 have proved effective in mapping smallholder field size in four study areas in Northern Mozambique (Cabo Delgado, Niassa, Nampula, and Zambezia). These results could be extrapolated to the entire northern region, generating detailed maps that could improve understanding of land use dynamics, issues related to leakage, and the trade-offs between large- and small-scale producers (Meyfroidt, 2017).
Bey, A., Jetimane, J., Lisboa, S.N., Ribeiro, N., Sitoe, A. and Meyfroidt, P., 2020. Mapping smallholder and large-scale cropland dynamics with a flexible classification system and pixel-based composites in an emerging frontier of Mozambique. Remote Sensing of Environment, 239, p.111611.
CGAP. National Survey and Segmentation of Smallholder Households in Mozambique. The Consultative Group to Assist the Poor. 2016.
Meyfroidt, P., 2017. Mapping farm size globally: benchmarking the smallholders debate. Environmental Research Letters, 12(3), p.031002.
Temudo, M.P. and Silva, J.M., 2012. Agriculture and forest cover changes in post-war Mozambique. Journal of Land Use Science, 7(4), p.425-442.
Zoomers, A., 2013. Lidar com a corrida global à terra: uma análise crítica das políticas rurais sobre a terra, desde os anos 50 [Dealing with the global land rush: a critical analysis of rural land policies since the 1950s]. In: Serra, C.M. and Carrilho, J. (Eds.), Dinâmicas da ocupação e do uso da terra em Moçambique. Maputo: Escola Editora, pp. 13-50.
Numerical Weather Prediction Models (NWPMs) can provide very accurate forecasts only when fed with a large quantity of input data. Such data can be very heterogeneous in temporal and spatial resolution, availability, coverage, and quality. For example, Zenith Total Delay (ZTD) estimates provided by GNSS stations are point-wise measurements with fine temporal resolution. The same holds for ground-based radar measurements, which are very finely sampled in time but very local in space.
Space-borne platforms are complementary since they can provide wide and dense area coverage but at a limited time resolution.
One recent example is Synthetic Aperture Radar (SAR) meteorology. A SAR is an active remote sensing system: it transmits an electromagnetic (EM) wave toward the ground and receives the backscattered echo, precisely measuring the two-way travel time of the EM wave. Because this travel time depends on the pressure, temperature, and humidity of the medium traversed, SAR images are a desirable input for NWPMs.
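For context, the textbook relation between the atmospheric state and the tropospheric delay sensed by the radar (standard refractivity formalism, not a result specific to this work) is:

$$\Delta L = 10^{-6}\int_{\mathrm{path}} N(s)\,\mathrm{d}s, \qquad N = k_1\,\frac{P}{T} + k_2\,\frac{e}{T} + k_3\,\frac{e}{T^2},$$

where $P$ is total pressure, $T$ temperature, $e$ water vapour partial pressure, and $k_1$, $k_2$, $k_3$ are empirically determined constants; the Zenith Total Delay is this slant delay mapped to the vertical direction.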
SAR-based ZTD maps are even more valuable where other measurements are scarce in time or space. This is the case for the African continent, where GNSS stations are very sparse relative to the size of the continent, the installation of ground-based radar can be complicated, and the electricity supply is often too unreliable to operate in-situ instrumentation.
The European project TWIGA (“Transforming Weather Water data into value-added Information services for sustainable Growth in Africa” – grant agreement No.776691) aims at providing currently unavailable information on weather, water, and climate for sub-Saharan Africa.
One of the main goals of TWIGA is to provide in-situ and space-borne measurements of tropospheric delays to improve weather forecasts.
Several GNSS stations have been installed in Kenya, Uganda, Ghana, and South Africa. A novel technique has been developed in the framework of the project to extract dense and accurate SAR-based ZTD maps. The final objective of TWIGA, in fact, is to provide products at Technology Readiness Level 7 (i.e., a system prototype demonstration in an operational environment).
In this project, ground-based measurements that are finely sampled in time but coarsely in space are used as a calibration set for space-borne measurements that are finely sampled in space but coarsely in time.
The developed workflow is particularly suited to short-revisit C-band SAR missions such as the European (Copernicus) Sentinel-1. With its 6/12-day revisit period and 5.4 GHz radar frequency, Sentinel-1 is well suited to producing highly accurate ZTD maps. Moreover, such maps are wide (image sizes exceeding 250 km) and dense.
This presentation summarizes the activities carried out by Politecnico di Milano and GReD within the TWIGA project.
The presentation will start from the theoretical work behind the generation of GNSS-based and SAR-based ZTD maps, and then present a few case studies: first, the initial experiments in Italy (northern and central Italy), with the validation of the atmospheric maps against a network of GNSS stations.
The work then moved to Africa, and the presentation will describe the installed GNSS stations, the technique, and the software used to process their data. Results and validation of the GNSS and SAR processing are reported for case studies carried out in three pilot sites (South Africa, Uganda, and Kenya). These three case studies aim to prove the maturity of the procedure and its capability to work in heterogeneous scenarios.
Introduction
Mangrove Watch Africa (MWA) is a component of Wetlands International’s “Mangrove Capital Africa” programme, which aims to safeguard and restore African mangrove ecosystems for the benefit of people and nature. MWA is closely linked to Global Mangrove Watch (GMW) and as such also builds capacity and promotes the use of Earth Observation in relevant policy and corporate sectors. By taking an end-user-centred approach, MWA produces and provides state-of-the-art data and actionable information to different stakeholders.
Mangrove Disturbance Alerts
At the request of on-the-ground stakeholders, a new dataset to detect changes in mangrove cover has been developed in MWA and made available on the GMW platform (https://globalmangrovewatch.org): the mangrove disturbance alerts. Using a multi-sensor approach, this dataset covers African mangroves as identified in GMW v2.0 (Bunting et al., 2018). Changes in mangrove coverage are identified using a combination of USGS Landsat 8 (LS8) and ESA Sentinel-2 (S2) data and are made available on the GMW platform on a monthly basis.
Potential change features were identified within pixels masked by the 2016 GMW mangrove extent layer where NDVI values were < 0.2. To combine the scene-based potential change features and to filter false positives, a scoring system was used in which pixels were scored based on the number of times they were identified as a change. Changes identified by the LS8 and S2 sensors were considered more reliable but less frequent (due to cloud cover), so if LS8 or S2 identifies a change, 2 is added to the score. For a change to be confirmed (i.e., a score of >= 5), it needs to be observed at least three times. If no change is identified for a pixel that was previously identified as a change and has a score > 0, then 1 is subtracted from the score. The score cannot go below 0 or above 5; once it reaches 5, the pixel is deemed a ‘true’ change. Processing was undertaken on a 20 m pixel grid, and then resampled to 60 m for presentation.
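A minimal sketch of this per-pixel scoring logic; the update and confirmation rules follow the description above, while the function names and the handling of any non-optical observations are our assumptions:

```python
# Hedged sketch of the alert scoring described above: optical (LS8/S2)
# detections add 2, a no-change observation subtracts 1, and scores are
# clamped to [0, 5]. Names are illustrative, not the GMW codebase.
def update_score(score: int, change_detected: bool, increment: int = 2) -> int:
    """Update a pixel's alert score for one new LS8/S2 observation."""
    if change_detected:
        score += increment   # LS8/S2 detections are weighted by 2
    elif score > 0:
        score -= 1           # decay when a previous change is not re-observed
    return max(0, min(score, 5))  # score is bounded between 0 and 5

def is_confirmed(score: int) -> bool:
    """A pixel is deemed a 'true' change once its score reaches 5."""
    return score >= 5
```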
The system was implemented using the EODataDown system (https://github.com/remotesensinginfo/eodatadown), which automates the download and analysis ready data (ARD) processing of ESA Sentinel-1 and Sentinel-2 and USGS Landsat imagery, allowing an Earth observation based monitoring system to be implemented with minimal effort and run on a regular basis (e.g., daily or weekly) as the user requires. EODataDown supports a plugin architecture that allows custom processing following the production of an ARD product. The mangrove change alerts presented here are calculated using two plugins: the first identifies potential changes within each scene, and the second merges those changes onto a single pixel grid for Africa, on which the scoring is defined.
Active promotion of the platform and capacity building, for example by providing training on the use of drones to monitor mangroves, are now starting to pay off, with increasing anecdotal evidence of this data being used on the ground, as well as examples of on-the-ground data contributing back and improving GMW products.
A real-life example is the use of the mangrove disturbance alerts by the Institute for Biodiversity and Protected Areas, a government agency in Guinea-Bissau.
Global Mangrove Watch Platform: https://globalmangrovewatch.org
Mangrove Disturbance Alerts Scripts: https://github.com/globalmangrovewatch/gmw_monitoring_demo
Satellite data is a particularly valuable resource in migration analysis, as it enables systematic, consistent and accurate monitoring of areas affected by conflicts or environmental hazards, no matter how remote or inaccessible. Satellite-based technologies are particularly important when it comes to assessing the impacts of climate change events and the deterioration of productive sectors, such as agriculture, which in turn can force (climate-led) migration.
Drought and flood are common extreme events that have significantly affected Somalia within the last decade. For example, during a drought between 2015 and early 2017, more than 800,000 Somalis fled their homes. In another case, more than 30,000 people were affected by flood events that occurred in 2016 in the Hiiraan region.
ESA’s Sentinel-1 (S1) and Sentinel-2 (S2) satellites provide a new generation of high-resolution, high-revisit-frequency, free-of-cost EO imagery, which is driving advances in deep learning algorithms and enabling more robust monitoring and mapping of extreme climate events at large geographical scales.
The work was carried out as part of the HumMingBird project funded by the European Commission Horizon 2020 Research and Innovation Programme under grant agreement No 870661 (https://hummingbird-h2020.eu/). The aim is to develop a better understanding of flood and drought events in Somalia and their impact on agriculture and population, which led to high rates of internal displacement between January 2016 and December 2019. To this aim, a series of EO products were generated, including Agricultural Drought Indicator (ADI), flood extent, Land Cover (LC), and Land Cover Change (LCC) maps.
Firstly, the ADI was developed based on a cause-effect relationship for agricultural drought, whereby a precipitation shortage leads to a soil moisture deficit, resulting in a reduction of vegetation productivity. In total, 48 monthly ADI time-series maps were produced by combining Normalised Difference Vegetation Index (NDVI) anomalies (derived from Sentinel-2), Soil Moisture Index (SMI) anomalies (derived from the ESA Climate Change Initiative), and SPEI-1 and SPEI-3 anomalies (acquired from the Spanish National Research Council), resampled at 20 m spatial resolution. Fifteen Sentinel-2 tiles were selected to cover the study area, resulting in a total of 240 multi-temporal Sentinel-2 images per tile needed for full temporal coverage between January 2016 and December 2019. Sixteen districts in Central and Southern Somalia were considered, covering an area of 111,578 km2. In total, 4169 Sentinel-2 Level-1C tiles were downloaded from Google Cloud and pre-processed to Level-2A. The ADI maps were validated by visual comparison against drought reports from the United Nations Office for the Coordination of Humanitarian Affairs and successfully identified the reported drought events.
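As an illustration of the anomaly inputs to the ADI, a short xarray sketch of a standardised monthly NDVI anomaly; the file name and the final combination step are assumptions, not the project's exact recipe:

```python
# Minimal sketch of a standardised monthly anomaly, as might feed the ADI.
import xarray as xr

ndvi = xr.open_dataarray("ndvi_monthly.nc")   # hypothetical monthly NDVI stack

# Standardised anomaly per calendar month: (value - monthly mean) / monthly std
clim_mean = ndvi.groupby("time.month").mean("time")
clim_std = ndvi.groupby("time.month").std("time")
ndvi_anom = (ndvi.groupby("time.month") - clim_mean).groupby("time.month") / clim_std

# The same procedure would be applied to the SMI, SPEI-1 and SPEI-3 inputs
# before combining the anomalies (e.g., by averaging) into a monthly ADI map.
```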
The second set of EO-based products was generated for over thirty-one districts in the south of Somalia. Flood extent was derived using a pixel-based change detection and thresholding approach applied to the intensity ratio of a pre- and a post-flood S1 scene (20 m resolution, VV polarisation). Slope and Height Above Nearest Drainage (HAND) masks derived from the NASADEM were used to mask out areas considered unlikely to flood. Finally, post-processing of the resulting S1 flood maps was performed by applying morphological binary closing. The output flood extent maps were compared both visually and quantitatively to other available products derived from S2, reaching, in some cases, an overall accuracy of 85%. The resulting S1-derived flood extent maps captured both major and minor flood events along the River Shebelle and River Jubba, including frequent flood events in several districts which caused severe damage and led to the displacement of thousands of people.
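A hedged sketch of the flood mapping chain described above (intensity-ratio change detection, terrain masking, morphological closing); the thresholds and file names are illustrative, not the study's calibrated values:

```python
# Sketch of pixel-based S1 flood change detection under stated assumptions.
import numpy as np
from scipy.ndimage import binary_closing

pre = np.load("s1_vv_pre.npy")    # hypothetical pre-flood VV intensity
post = np.load("s1_vv_post.npy")  # hypothetical post-flood VV intensity
slope = np.load("slope.npy")      # derived from NASADEM
hand = np.load("hand.npy")        # Height Above Nearest Drainage, from NASADEM

# A drop in backscatter after the event indicates open water
log_ratio = 10.0 * np.log10(post / pre)
flood = log_ratio < -3.0          # assumed threshold in dB

# Mask out areas unlikely to flood (steep slopes, high above drainage)
flood &= (slope < 5.0) & (hand < 15.0)   # assumed mask thresholds

# Morphological binary closing removes small holes/speckle in the flood mask
flood = binary_closing(flood, structure=np.ones((3, 3)))
```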
LC and LCC maps were the third group of EO products, generated to assess the impact of flood and drought on agricultural fields and human settlements. A state-of-the-art deep learning architecture, U-Net, was used to produce the LC maps: the convolutional nature of U-Nets allows them to incorporate information from neighbouring pixels into the prediction and to detect large-scale structure more effectively. Sentinel-2 data were the only input used to generate the LC maps. Seventeen LC maps were produced according to the flood events, while eight aggregate LC maps were produced, providing an overall representation of the land cover for each agricultural growing season. LC change maps were then produced to highlight the changes in agricultural fields and human settlements due to flood and drought events. LCC maps covering the areas affected by flood events were produced by comparing the LC maps generated after flood events to the corresponding aggregate LC maps. For drought, twenty-four LCC maps were produced by comparing each month’s aggregate land cover to the previous year’s aggregate map. The internal assessment shows that the model's overall accuracy on an unseen test set was 92.9%. Validation using ground truth data estimated an overall accuracy over a single Sentinel-2 tile (38NNL, with an area of 12,604 km2) and the entire study area (180,756 km2) of 77% and 65%, respectively. Regarding the LCC map over the extent of the S1-derived flood maps, our results show a clear impact of floods on agricultural land and settlements close to the affected areas. Meanwhile, analysis of the LCC map generated over areas affected by drought events showed that a significant fraction of agricultural fields was destroyed or damaged.
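To illustrate the architecture choice, a compact U-Net-style sketch in Keras; the depth, filter counts, band and class numbers are placeholders, not the project's trained model:

```python
# Minimal U-Net-style network showing how skip connections propagate
# neighbourhood context from encoder to decoder. Illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(n_bands=10, n_classes=8, size=256):
    inputs = layers.Input((size, size, n_bands))
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                               # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)     # skip connection
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)     # skip connection
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```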
In conclusion, this study demonstrated the contribution of Copernicus Programme optical and radar data towards efficiently and accurately mapping and monitoring the impact of extreme climate events like drought and flood on agriculture and population.
The African Framework for Research, Innovation, Communities and Applications in Earth Observation (EO AFRICA) is an ESA initiative in collaboration with the African Union Commission that aims to foster an African-European R&D partnership facilitating the sustainable adoption of Earth Observation and related space technologies in Africa. The EO AFRICA R&D Facility is the flagship of EO AFRICA, with the overarching goals of enabling an active research community and promoting creative and collaborative innovation processes by providing funding, advanced training, and computing resources. The Innovation Lab is a state-of-the-art cloud computing infrastructure provided by the Facility to 30 research projects of African-European research tandems and to participants of the capacity development activities of the Space Academy. The Innovation Lab creates new opportunities for innovative research to develop EO algorithms and applications adapted to African challenges and needs, through interactive Virtual Research Environments (VREs) with ready-to-use research and EO analysis software, and facilitated access to a wide range of analysis-ready EO datasets by leveraging the host DIAS infrastructure.
The Innovation Lab is a cloud-based, user-friendly, and versatile Platform as a Service (PaaS) that allows users to develop, test, run, and optimize their research code, making full use of the Copernicus DIAS infrastructure and a tailor-made interactive computing environment for geospatial analysis. Co-located data and computing services enable fast data exploitation and analysis, which in turn facilitates the utilization of multi-spectral spatiotemporal big data and machine learning methods. Each user has direct access to all online EO data available on the host DIAS (CREODIAS), especially for Africa, and, if required, can also request archived data, which is automatically retrieved and made available within a short delay. The Innovation Lab also supports user-provided in-situ data and allows access to EO data on the cloud (e.g., other DIASes, CNES PEPS, Copernicus Hub, etc.) through a unified, easy-to-use, and open-source data access API (EODAG). Because all data access and analysis are performed on the server side, the platform does not require a fast Internet connection and is adapted for low-bandwidth access to enable active collaboration of African-European research tandems. As a minimum configuration, each user has access to computing units with four virtual CPUs, 32 GB RAM, 100 GB local SSD storage, and 1 TB network storage. To a limited extent and for specific needs (e.g., AI applications like deep learning), GPU-enabled computing units are also provided.
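For illustration, a minimal EODAG query as it might be issued from the Innovation Lab; the product type, extent and dates are placeholders, and the exact search signature varies slightly between eodag versions:

```python
# Hedged sketch of unified data discovery through EODAG.
from eodag import EODataAccessGateway

dag = EODataAccessGateway()

# In eodag 2.x, search() returns a (SearchResult, estimated_count) tuple;
# newer versions return only the SearchResult.
search_results, total_count = dag.search(
    productType="S2_MSI_L2A",                        # placeholder product type
    geom={"lonmin": 29.0, "latmin": -3.5,            # placeholder extent
          "lonmax": 30.0, "latmax": -2.0},
    start="2021-06-01",
    end="2021-06-30",
)

# Download the first matching product to the local workspace
product_path = dag.download(search_results[0])
```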
The user interface of the Innovation Lab allows the use of interactive Jupyter notebooks through the JupyterLab environment, which is served by a JupyterHub deployment with improved security and scalability features. For advanced research code development purposes, the Innovation Lab features a web-based VS Code integrated development environment, which provides specialized tools for programming in different languages, such as Python and R. Code analytics tools are also available for benchmarking, code profiling, and memory/performance monitoring. For specific EO workflows that require exploiting desktop applications (e.g., ESA SNAP, QGIS) for pre-processing, analysis, or visualization purposes, the Innovation Lab provides a web-based remote desktop with ready-to-use EO desktop applications. The users can also customize their working environment by using standard package managers.
As endorsed by the European Commission Open Science approach, data and code sharing and versioning are crucial to allow reuse and reproduction of algorithms, workflows, and results. In this context, the Innovation Lab has tools integrated into its interactive development environment that provide direct access to code repositories and allow easy version control. Although public code repositories (e.g., GitHub) are advised for better visibility, the Innovation Lab also includes a dedicated code repository to support users' particular needs (e.g., storage of sensitive information). The assets (e.g., files, folders) stored on the platform can be easily accessed and shared externally through the FileBrowser tool.
Besides providing a state-of-the-art computing infrastructure, the Innovation Lab also includes other necessary services to ensure a comfortable virtual research experience. All research projects granted by the EO AFRICA R&D Facility receive dedicated technical support for the Innovation Lab facilities. Scientific support and advice from senior researchers and experts for developing geospatial computing workflows are also provided. Users are able to request support by contacting a helpdesk via a dedicated ticketing and chat system.
After a 6-month development and testing period, the Innovation Lab became operational in September 2021. The first field testing of the platform took place in November 2021 during a 3-day hackathon jointly organized by the EO AFRICA R&D Facility, GMES & Africa, and CURAT as part of the AfricaGIS 2021 conference. Forty participants utilized the platform to develop innovative solutions to food security and water resources challenges, such as the impact of the COVID-19 pandemic on agricultural production or linking decreases in agricultural production to armed conflicts. The activity was successful, and similar ones are expected to be organized during major GIS and EO conferences in Africa during the lifetime of the project. Thirty research projects of African-European research tandems granted by the Facility will utilize the Innovation Lab to develop innovative, open-source EO algorithms and applications, preferably as interactive notebooks, providing African solutions to African challenges in food security and water scarcity by leveraging cutting-edge cloud-based data access and computing infrastructure. The call for the first 15 research projects was published in November 2021, and the projects are expected to start using the Innovation Lab in February 2022.
In parallel, the Innovation Lab provides the computing environment for the capacity development activities of the EO AFRICA R&D Facility, which are organized under the umbrella of EO AFRICA Space Academy. These capacity development activities include several MOOCs, webinars, online and face-to-face courses designed and tailored to improve the knowledge and skills of African researchers in the utilization of Cloud Computing technology to work with EO data. Selected participants of the capacity development activities will use the Innovation Lab during their training. Moreover, the instructors in the Facility use the Innovation Lab to develop the training materials for the Space Academy. Access to the Innovation Lab will also be granted to individual researchers and EO experts depending on the use case and resource availability. Application for access can be made easily through the EO AFRICA R&D web portal after becoming a member of the EO AFRICA Community.
The presentation will provide further details of the features and capabilities of the EO AFRICA R&D Innovation Lab. Lessons learnt from the first training and R&D activities will also be presented.
The final level-1b version of MIPAS spectral data, version 8, is the basis of the improved level-2 IMK/IAA research data product that covers all observation modes (NOM, UTLS, MA, UA and NLC) and consists of: temperature, O3, H2O, CH4, N2O, CO, HNO3, NO2, NO, HNO4, BrONO2, N2O5, PAN, ClONO2, ClO, HOCl, CCl4, CH3Cl, COCl2, COF2, CFC-11, CFC-12, HCFC-22, CFC-113, CF4, C2H6, C2H2, HCN, HCOOH, H2CO, CH3OH, acetone, HDO, H2O2, heavy ozone isotopologues, SO2, OCS, SF6, NH3, sulphate aerosol, ammonium nitrate aerosol, PSCs, PMCs, and CO2 (above 65 km).
Improvements with respect to previous versions include, among others: actual and more reliable CO2 profiles for T-LOS retrieval from a bias-corrected WACCM-SD version; inclusion of a full 3D-field of temperatures for subsequent trace gas retrievals; inclusion of horizontal gradients of trace gases in the retrieval; improved spectroscopic data; improved Non-LTE modelling for several trace gases; improved treatment of the background continuum emission.
The data come on a constant, fine vertical altitude grid; for each geolocation, pressure, temperature, trace gas volume mixing ratios (vmrs), vertical resolution and the diagonal of the averaging kernel matrix are provided. As a further improvement, an elaborate error estimation for each single profile, compliant with the SPARC TUNER initiative and described in von Clarmann et al. (TUNER-compliant error estimation for MIPAS, AMT 2021, to be submitted), is provided. In addition, as an easy-to-use data product, the abundances of most of the retrieved trace gases are also provided on a coarser vertical grid (constant over location and time for an individual trace gas) for which the values along the vertical profiles are independent of each other and thus the averaging kernel matrix reduces to an identity matrix. The vertical grid points are a subset of the pressure levels used by models participating in the CCMI initiative, and the abundances are presented as a step function with constant vmr in each vertical grid cell. This presentation allows for a fast and direct comparison to model simulations.
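For context, the standard retrieval formalism behind the averaging kernel statement (textbook notation, not specific to this data product) is:

$$\hat{\mathbf{x}} = \mathbf{x}_a + \mathbf{A}\left(\mathbf{x} - \mathbf{x}_a\right) + \boldsymbol{\varepsilon},$$

where $\hat{\mathbf{x}}$ is the retrieved profile, $\mathbf{x}_a$ the a priori, $\mathbf{x}$ the true profile, $\mathbf{A}$ the averaging kernel matrix, and $\boldsymbol{\varepsilon}$ the retrieval noise. On the coarse grid, $\mathbf{A}$ reduces to the identity matrix, so model profiles can be compared directly without applying averaging kernels.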
We present examples for temperature and several trace species and demonstrate, partly by validation against independent reference data, the improvements against earlier data versions.
The environmental and socio-economic setup of the African continent poses special challenges to Earth Observation (EO) research. These challenges can only be met through broad scientific collaboration, using the most advanced technologies, knowledge and research facilities. This is the focus of the ESA EO AFRICA (African Framework for Research, Innovation, Communities and Applications in Earth Observation) R&D Facility, which aims at building an African-European research and development (R&D) partnership to facilitate the sustainable adoption of EO and related space technologies in Africa, in harmony with the “Agenda 2063 – The Africa We Want” of the African Union Commission (AUC). The Facility is an overarching long-term (>10 years) programme, of which the first three years are implemented by a consortium of six partners. Their activity builds on ESA's rich experience from the TIGER Initiative, which provided capacity building through research in Africa between 2006 and 2018. The Facility’s niche is the support of capacity development through and for research. This approach complements other related programmes, such as GMES & Africa and Digital Earth Africa, and contributes to an active community of EO experts on the continent.
To facilitate research-based capacity development, the following activities take place:
- Providing technical and financial support for 30 research studies, carried out in collaboration between African and European research partners, to address the African EO research challenges related to water scarcity and food security.
- Strengthening research capacities with the latest know-how through a digital Space Academy, which provides capacity development actions in the form of face-to-face courses, online training events, webinars and a massive open online course (MOOC).
- Developing a cloud-based research facility hosted by one of the European Data and Information Access Services (DIAS). This Innovation Lab offers tools for interactive EO data analysis for the projects and the corresponding R&D activities, covering access to and utilisation of EO data, high-level products, in-situ data, as well as open-source code (algorithms, models, tools).
In response to the first call announced by the AUC and ESA, fifteen projects will already be active at the time of the Living Planet Symposium 2022. Furthermore, experiences from several capacity development events will be available. The presentation will give an overview of the research challenges, the applied approaches and the first lessons learned.
MULTI-TEMPORAL SENTINEL-1 SAR AND SENTINEL-2 FOR IMPROVING FLOOD MAPPING ACCURACY AND DAMAGE ASSESSMENT IN MOZAMBIQUE
M. Nhangumbe 1, 2, A. Nascetti 1, Y. Ban 1
1 Division of GeoInformatics, Department of Urban Planning and Environment, KTH-Royal Institute of Technology, Stockholm, Sweden - (manue, nascetti, yifang)@kth.se
2 Dept. of Mathematics and Computer Science, Faculty of Science, Eduardo Mondlane University, Maputo, Mozambique
KEY WORDS: Sentinel-1, Sentinel-2, Automated flood mapping, classification, Damage Assessment.
ABSTRACT
Floods have increased in frequency and intensity in recent years as a result of climate change. Moreover, they occur in an increasingly diverse set of locations and bring massive destruction of infrastructure and agricultural fields, human loss and displacement. These events cause food insecurity, create or deepen poverty in the affected populations, and in some cases contribute largely to the emergence of water-borne diseases. Mozambique is considered the country most prone to flooding in Southern Africa. In recent years, the frequency of tropical cyclones bringing heavy rains has increased (Cyclone Idai on 15th March 2019, Cyclone Kenneth on 25th April 2019, Cyclone Chalane on 30th December 2020, Cyclone Eloise on 23rd January 2021, Cyclone Guambe on 19th February 2021), creating considerable damage to infrastructure, human displacement and devastation of agricultural fields. Cyclone Idai, for example, destroyed 90% of the city of Beira (the 3rd largest city in Mozambique), the capital of Sofala province in the central part of the country [1]. Covering large areas at regular revisits, satellite remote sensing plays an important role in the management of disasters such as floods, fires, cyclones and earthquakes, especially for preparedness, warnings and emergency response.
In Mozambique, several studies have been conducted in collaboration with the National Institute for Disaster Management (locally called INGD) to analyze flood risks and mitigate impacts, including developing algorithms for mapping flooded areas using drones. However, cost-effective large-scale mapping and damage assessment methods are a high priority [2]. With the launches of Sentinel-1 and Sentinel-2, free and open data with global coverage, large swaths and high temporal resolution became routinely available. In this research we investigate multi-temporal Sentinel-1 SAR (10 m resolution) and Sentinel-2 MSI (10 m resolution) data. We used a freely available benchmark dataset released in 2021 for a code challenge competition. The dataset is based on Sentinel-1 imagery from 11 flood events identified from a global database of flood event areas in the Dartmouth Flood Observatory collection and provides highly detailed reference maps of several flood events. The winning solution achieved a score of 0.809 in terms of the IoU (Intersection over Union) metric.
In this work we compare supervised and unsupervised methods, extending the dataset from uni-temporal to bi-temporal Sentinel-1 imagery (before and after the flood event) and splitting the reference maps into training and validation sets. We performed the unsupervised classification exploiting the capabilities of the Google Earth Engine (GEE) platform. The data are ingested into GEE and, since they comprise stacks of VH and VV Sentinel-1 flood images, we compute a pixel-based classification combining the VH and VV backscattering information. After the classification, we apply a Gaussian filter to reduce the noise in the flood mask, improving the overall accuracy. We then repeat the procedure adding Sentinel-1 flood data from cyclones Idai and Kenneth. To assess accuracy, we compare the flood extents detected by our method against the validation data. Moreover, we investigated the use of Sentinel-2 MSI to produce a land cover map of the two study areas and estimate the percentage of flooded area in each land cover class. The preliminary results show that the combination of Sentinel-1 SAR and Sentinel-2 MSI data is promising for near-real-time flood mapping and damage assessment. Using the images acquired from 19th to 22nd March 2019 by Sentinel-1 SAR, it was possible to automatically map flooded areas with an overall accuracy of about 87% at different moments, highlighting the evolution of the flood over time. We plan to share the method and the developed code as free and open-source software to support future research on this topic and to promote the use of these data during emergencies as supporting cartographic information.
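A sketch of the unsupervised GEE step under stated assumptions; the AOI, dates and cluster labelling are illustrative, not the exact processing chain:

```python
# Illustrative Google Earth Engine sketch: k-means clustering on a
# bi-temporal VV/VH Sentinel-1 stack. Parameters are placeholders.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([34.3, -20.0, 35.0, -19.4])  # hypothetical AOI
s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
      .filterBounds(aoi)
      .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "VV"))
      .filter(ee.Filter.eq("instrumentMode", "IW")))

pre = s1.filterDate("2019-03-01", "2019-03-13").select(["VV", "VH"]).median()
post = s1.filterDate("2019-03-19", "2019-03-22").select(["VV", "VH"]).median()
stack = (pre.rename(["VV_pre", "VH_pre"])
         .addBands(post.rename(["VV_post", "VH_post"])))

# Unsupervised k-means on a pixel sample, then classify the whole stack
training = stack.sample(region=aoi, scale=10, numPixels=5000)
clusterer = ee.Clusterer.wekaKMeans(2).train(training)
clusters = stack.cluster(clusterer)  # clusters 0/1 still need labelling
                                     # (e.g., water vs. non-water)
```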
References
[1] World atlas. 2017 Mozambique census. (https://www.worldatlas.com/articles/biggest-cities-in-mozambique.html). Accessed: 09 December 2021.
[2] Tralli, David M., et al. "Satellite remote sensing of earthquake, volcano, flood, landslide and coastal inundation hazards." ISPRS Journal of Photogrammetry and Remote Sensing 59.4 (2005): 185-198.
Sustainable intensification in sub-Saharan Africa (SSA) is needed to feed the growing population while tackling challenges around climate change and ecosystem degradation. Cropland expansion is a common strategy for boosting agricultural production in SSA, which often leads to economic, environmental, and social trade-offs. Meanwhile, the current low level of agricultural intensification results in low yields, and there is a need to improve crop yield in order to close the yield gap. Best agricultural management practices still need to be recognized and promoted to achieve sustainable food production. To guide policies on sustainable agriculture management, this study firstly aims to evaluate cropland degradation and explore its association with cropland expansion, and secondly to investigate the impacts of crop diversification on yield and soil quality. To achieve these aims, the study used multi-source satellite data, socio-economic data, and intensive field data (field size, crop yield, crop types), combined with satellite image classification and trend analysis. Drawing on a case study in Malawi, the preliminary results found rapid cropland expansion between 2010 and 2019 (an increase of 8.5% of total land area) and reduced cropland productivity. Half of the maize fields across the country were intercropped, showing a high level of crop diversification with spatially mixed legume crops such as cowpeas. Intercropping leads to a prolonged growing season and greater overall yield, but a lower yield of maize, the main staple food. Despite the mixed pixels resulting from intercropping, our results found that a model based on Sentinel-2 red-edge vegetation indices (VIs) could estimate maize yield with moderate accuracy (R2 = 0.51, RMSE = 1.47 t/ha). Our findings underscore the need for measures to promote sustainable intensification. Meanwhile, our study provides evidence of the sustainability of intercropping practices in SSA and highlights the importance of monitoring intercropping to better guide and promote sustainable intensification.
Complex migration processes massively influence the African continent and are intrinsically linked to ongoing demographic, social, economic, and ecological changes (Steinbrink and Niedenführ, 2020). To better understand the complexity of these processes, there is a need for knowledge and data on the flows, the drivers, and the effects of migration. Due to the large scale and dynamism of migration in Africa, such data must be available and comparable both at large scale and over time.
Traditional data sources do not meet these criteria. For instance, the frequency of national censuses is not sufficient to track migration in the African context. Surveys provide deeper insight into local or individual migrations but are prohibitively expensive to perform at larger scales. Besides, administrative records only include registered individuals and are available in only a few countries (Kirchberger, 2021). Thus, new data sources are necessary.
Earth Observation is increasingly able to accurately map drivers and effects of migration that have a visible impact on the Earth’s surface. Environmental drivers, such as floods or desertification (Neumann et al., 2015), as well as land cover changes, such as urban growth resulting from rural-urban migration (Abass et al., 2018), can be monitored from multi-sensor remote sensing data. However, there are limitations in mapping socioeconomic and political drivers, and the social changes that result from migration; these can only be mapped with EO data through proxies. What is more, individual migration decisions might not be taken based on the physical reality of the push and pull factors that we observe with remote sensing, but rather on their subjective perception by the individual, which does not necessarily match reality (Hoffmann et al., 2021).
These gaps in Earth Observation can be reduced by additional data sources. Social media provides a platform that allows individuals to communicate ideas, concerns, and opinions. The content produced by users has the potential to bring new insights that complement those of satellite-based earth observation and thus capture the picture of migration in Africa in a more comprehensive manner. Further, geolocated social media posts allow for the reconstruction of users’ movement patterns (Zagheni et al., 2014).
In this context, the social media platform Twitter provides free data from the public conversation for research. For the example of Nigeria, we demonstrate how tweets can contribute to the understanding of dynamic migration processes. We present three ways in which information on migration may be derived from Tweets.
Firstly, the geolocation of tweets can be used to detect changes in the residence of users; thus, migration flows can be mapped and spatial migration patterns at national and international scales can be analyzed (Chi et al., 2020). Secondly, aggregated metadata on language, device use, activity, and network metrics can be used to better understand the demographics of migrants and non-migrants and their integration (Lamanna et al., 2018). Thirdly, the content of the tweets can be analyzed using quantitative and qualitative methods: the combination of statistics, machine learning, natural language processing, and linguistic methods to assess the text content brings insights about which migration factors are prominent in the social space (Havas et al., 2021). These three approaches are complementary, and we demonstrate how they can be jointly used to provide a multifaceted view of Nigerian migration over a five-year timespan between 2015 and 2019.
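As a toy illustration of the first approach, a pandas sketch that assigns each user a modal region per quarter and flags changes; the table layout and the quarterly rule are our assumptions, not the study's detection method:

```python
# Toy sketch: inferring residence changes from geolocated tweets.
import pandas as pd

tweets = pd.read_csv("geolocated_tweets.csv",
                     parse_dates=["timestamp"])  # hypothetical columns:
                                                 # user_id, timestamp, region

tweets["quarter"] = tweets["timestamp"].dt.to_period("Q")

# Modal region per user and quarter as the estimated place of residence
residence = (tweets.groupby(["user_id", "quarter"])["region"]
             .agg(lambda s: s.mode().iloc[0])
             .reset_index()
             .sort_values(["user_id", "quarter"]))

# A candidate migration event: the modal region differs from the
# previous quarter's (the first quarter per user is not flagged)
prev = residence.groupby("user_id")["region"].shift()
residence["moved"] = residence["region"].ne(prev) & prev.notna()
```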
Our results include statistical differences between migrant and non-migrant populations and maps of Twitter users' migration flows. Additionally, we are conducting a preliminary analysis of topic modelling and sentiment analysis. In doing so, we demonstrate that social media data provide multifaceted information on migration at a large scale that exceeds traditional sources and thus become a plausible complement to remotely sensed Earth observation data. We also acknowledge and outline the limitations and pitfalls of the data and methods, as well as necessary ethical considerations (Cesare et al., 2018). While some are inherent, others can be mitigated through study design and technical solutions.
Future work will focus on the integration of remotely sensed information and social media derived information in a joint Earth Observation approach. We expect that the integration of heterogeneous sources will be a challenge, but only by combining information from different perspectives can we get a full overview over the African migration processes that does justice to their complexity.
References:
Abass, Kabila, Selase Kofi Adanu, and Seth Agyemang. 2018. “Peri-Urbanisation and Loss of Arable Land in Kumasi Metropolis in Three Decades: Evidence from Remote Sensing Image Analysis.” Land Use Policy 72 (March): 470–79. https://doi.org/10.1016/j.landusepol.2018.01.013.
Cesare, Nina, Hedwig Lee, Tyler McCormick, Emma Spiro, and Emilio Zagheni. 2018. “Promises and Pitfalls of Using Digital Traces for Demographic Research.” Demography 55 (5): 1979–99. https://doi.org/10.1007/s13524-018-0715-2.
Chi, Guanghua, Fengyang Lin, Guangqing Chi, and Joshua Blumenstock. 2020. “A General Approach to Detecting Migration Events in Digital Trace Data.” Edited by Song Gao. PLOS ONE 15 (10): e0239408. https://doi.org/10.1371/journal.pone.0239408.
Havas, Clemens, Lorenz Wendlinger, Julian Stier, Sahib Julka, Veronika Krieger, Cornelia Ferner, Andreas Petutschnig, Michael Granitzer, Stefan Wegenkittl, and Bernd Resch. 2021. “Spatio-Temporal Machine Learning Analysis of Social Media Data and Refugee Movement Statistics.” ISPRS International Journal of Geo-Information 10 (8): 498. https://doi.org/10.3390/ijgi10080498.
Hoffmann, Roman, Barbora Šedová, and Kira Vinke. 2021. “Improving the Evidence Base: A Methodological Review of the Quantitative Climate Migration Literature.” Global Environmental Change 71 (November): 102367. https://doi.org/10.1016/j.gloenvcha.2021.102367.
Kirchberger, Martina. 2021. “Measuring Internal Migration.” Regional Science and Urban Economics, July, 103714. https://doi.org/10.1016/j.regsciurbeco.2021.103714.
Lamanna, Fabio, Maxime Lenormand, María Henar Salas-Olmedo, Gustavo Romanillos, Bruno Gonçalves, and José J. Ramasco. 2018. “Immigrant Community Integration in World Cities.” Edited by Renaud Lambiotte. PLOS ONE 13 (3): e0191612. https://doi.org/10.1371/journal.pone.0191612.
Neumann, Kathleen, Diana Sietz, Henk Hilderink, Peter Janssen, Marcel Kok, and Han van Dijk. 2015. “Environmental Drivers of Human Migration in Drylands – A Spatial Picture.” Applied Geography 56 (January): 116–26. https://doi.org/10.1016/j.apgeog.2014.11.021.
Steinbrink, Malte, and Hannah Niedenführ. 2020. Africa on the Move: Migration, Translocal Livelihoods and Rural Development in Sub-Saharan Africa. Springer Geography. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-22841-5.
Zagheni, Emilio, Venkata Rama Kiran Garimella, Ingmar Weber, and Bogdan State. 2014. “Inferring International and Internal Migration Patterns from Twitter Data.” In Proceedings of the 23rd International Conference on World Wide Web, 439–44. Seoul Korea: ACM. https://doi.org/10.1145/2567948.2576930.
West Nile Virus (WNV), a flavivirus first isolated in Uganda in 1937, is one of the most recently emerging mosquito-borne pathogens in Europe. While its main enzootic cycle occurs between mosquitoes and birds, humans can act as incidental hosts. About 25% of human infections develop symptoms such as fever and headache, and fewer than 1% develop more severe neurological disease, which can eventually be fatal. WNV is now endemic in many European countries, causing hundreds of human cases every year, usually towards the end of summer, with high spatial and temporal heterogeneity.
WNV transmission is largely affected by temperature, which shapes mosquito population dynamics by influencing survival and development times. Temperature is also paramount in shaping viral circulation: warmer conditions can increase the mosquito biting rate and shorten the incubation period of the virus, thus accelerating WNV transmission.
Previous studies, carried out in northern Italy, have suggested that spring temperature might play a key role in shaping WNV transmission. Specifically, warmer temperatures in April-May might amplify WNV circulation, thus increasing the risk of human transmission later in the year. To test this hypothesis, we collated publicly available data, collected by the European Centre for Disease Prevention and Control, on the number of human infections recorded in Europe between 2011 and 2019. For each region of interest and each year between 2003 and 2019, daily average temperature data were obtained from the gap-free Moderate Resolution Imaging Spectroradiometer (MODIS) Land Surface Temperature (LST) maps.
We quantified the relationship between human cases and spring temperature, considering both average conditions (over years 2003-2010) and deviations from the average for subsequent years (2011-2019), by applying generalized linear models. We found a significant positive association both spatially (average conditions) and temporally (deviations). The former indicates that WNV circulation is higher in usually warmer regions while the latter implies that an increase in spring temperature is positively associated with an increase of WNV transmission and could therefore be considered as an early warning to enhance surveillance and vector control. We also found a positive association between human cases and WNV detection during the previous year, which can be interpreted as an indication of the reliability of the surveillance system but also of WNV overwintering capacity.
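A minimal sketch of such a model with statsmodels; the column names and the Poisson family are our assumptions for illustration, and the study's exact specification may differ:

```python
# Sketch of a GLM linking yearly human WNV cases to spring LST covariates,
# assuming a tidy per-region/per-year table. Inputs are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("wnv_cases_lst.csv")  # hypothetical columns: region, year,
                                       # cases, t_spring_mean (2003-2010 avg),
                                       # t_spring_anom (deviation),
                                       # wnv_prev_year (0/1 detection)

model = smf.glm(
    "cases ~ t_spring_mean + t_spring_anom + wnv_prev_year",
    data=df,
    family=sm.families.Poisson(),   # count data, log link
).fit()
print(model.summary())
```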
Our findings highlight that weather anomalies at the beginning of the mosquito breeding season might act as an early warning signal for public health authorities, enabling them to strengthen in advance ongoing surveillance and prevention strategies.
The H2020 MOOD project focuses on using ‘big data’ to provide risk assessments to public health professionals for a range of diseases, including many vector-borne diseases such as Dengue, Tick-Borne Encephalitis, West Nile Fever, and Crimean-Congo Hemorrhagic Fever. Producing these risk assessments requires substantial support to provide the covariate datasets that drive the disease models. These datasets span a wide range of types of information, from demographic, socio-economic and agricultural information to Earth Observation (EO) imagery.
This last set of environmental and climatic parameters is derived from a variety of EO and terrestrial sources, and comprises both near-real-time data and long-term synoptic summaries produced by data reduction techniques such as Temporal Fourier Analysis of extensive time series of dekadal (10-day) imagery. The distributions of both disease vectors and hosts are also produced as additional drivers, either using species aggregation protocols, or estimated by spatial distribution modelling employing machine learning techniques and ensembling the outputs of several methods.
The project includes dedicated work packages to identify, source, acquire, process, and supply the covariate datasets that drive the disease models, both to the modellers within the project and to external users who have requested access to such data. The presentation provides an overview of these data streams and driver modelling techniques, taking examples from a number of diseases. These procedures include not only the driver modelling, but also parameter identification and selection, variable data reduction, static and dynamic suitability definition and masking, and data dissemination.
The presentation also discusses the particular requirements of providing such information both to the modellers and to external users in the public health arena, who often need the products produced by academic research teams as publishable outputs to be adapted and adjusted for more practically oriented use, as well as made accessible in formats more suited to risk assessments than raster images.
African trypanosomiasis, which is mainly transmitted by tsetse flies (Glossina spp.), is a threat to public health and a significant hindrance to animal production. Tools that can reduce tsetse densities and interrupt disease transmission exist, but their large-scale deployment is limited by high implementation costs. This is partly due to the absence of knowledge about breeding sites and dispersal behaviour, and the lack of tools that can predict these in the absence of ground-truthing.
In Kenya, tsetse collections were carried out at 261 randomized points within the Shimba Hills National Reserve (SHNR) and in villages up to 5 km from the reserve boundary between 2017 and 2019. Considering their limited dispersal rate, we used in situ observations of newly emerged flies that had not yet had a blood meal (teneral) as a proxy for active breeding locations. We fitted commonly used species distribution models linking teneral and non-teneral tsetse presence with satellite-derived vegetation cover type fractions, greenness, temperature, and soil texture and moisture indices, separately for the wet and dry seasons. Model performance was assessed with area under the curve (AUC) statistics, while the maximum sum of sensitivity and specificity was used as the threshold for classifying suitable breeding or foraging sites.
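For illustration, the evaluation and thresholding step might look as follows; the inputs are hypothetical arrays, and maximising sensitivity + specificity is equivalent to maximising Youden's J:

```python
# Sketch of AUC assessment and max(sensitivity + specificity) thresholding.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_obs = np.load("teneral_presence.npy")        # 0/1 presence at trap sites
p_pred = np.load("predicted_probability.npy")  # model occurrence probabilities

auc = roc_auc_score(y_obs, p_pred)
fpr, tpr, thresholds = roc_curve(y_obs, p_pred)

# Sensitivity + specificity = TPR + (1 - FPR); maximising it is the same
# as maximising TPR - FPR (Youden's J)
best = thresholds[np.argmax(tpr - fpr)]
suitable = p_pred >= best   # classify suitable breeding/foraging sites
print(f"AUC = {auc:.2f}, threshold = {best:.2f}")
```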
Glossina pallidipes flies were caught in 47% of the 261 traps, with teneral flies accounting for 37% of these traps. Fitted models were more accurate for the teneral flies (AUC = 0.83) as compared to the non-teneral (AUC = 0.73). The probability of teneral fly occurrence increased with woodland fractions but decreased with cropland fractions. During the wet season, the likelihood of teneral flies occurring decreased as silt content increased. Adult tsetse flies were less likely to be trapped in areas with average land surface temperatures below 24 °C. The models predicted that 63% of the potential tsetse breeding area was within the SHNR, but also indicated potential breeding pockets outside the reserve.
Modelling tsetse occurrence data disaggregated by life stages with time series of satellite-derived variables enabled the spatial characterization of potential breeding and foraging sites for G. pallidipes. Our models provide insight into tsetse bionomics and aid in characterising tsetse infestations and thus prioritizing control areas.
In the last decades, Europe has been invaded and colonized by alien Asian Aedes mosquito species. These not only cause severe nuisance problems due to their aggressive daytime outdoor biting behaviour but can also transmit exotic tropical viruses - such as dengue, Zika, chikungunya and yellow fever - which show an increasing trend worldwide and risk becoming major health emergencies in temperate regions as well. Moreover, in the Sub-Saharan region malaria continues to have a devastating impact on human health, largely due to the vectorial capacity of the major vector Anopheles gambiae and its adaptation to human-made environmental change.
Identification of suitable environmental conditions for potential vectors requires a spatially explicit analysis of eco-biogeographic, socio-economic and climatic drivers.
Successful case studies of mosquito-borne disease applications in Italian and African territories, using different Earth Observation and numerical modelling datasets to describe the spatial and temporal dimensions, are presented.
High resolution land cover thematic maps are essential to determine the habitat suitability of mosquito vector species. High and very high resolution optical multispectral remote sensing data are an alternative source from which to derive land cover data when such thematic information is not available with adequate spatial detail. In addition, Earth Observation derived vegetation and water indices provide key information to evaluate mosquito habitat suitability. Climate variables are significant drivers of vector population distribution. Land Surface Temperature (LST) estimated from satellite measurements provides a fine-scale perspective of temperature variability in space. The Copernicus Climate Change Service assimilates Earth Observation data into numerical models to provide comprehensive climate information covering a wide range of components of the Earth system, with timescales spanning decades, which was used to assess the spatial suitability and temporal variability of mosquito populations.
Results confirm that in situ mosquito samplings and thematic information, complemented with citizen science, monitoring data and statistical modelling (frequentist or Bayesian), can be used to identify the environmental conditions that determine mosquito distribution and abundance, understand mosquito population dynamics, assess the outbreak risk of different arboviruses, and provide useful indications to prioritize public mosquito control measures and evaluate the effectiveness of control interventions.
The experience gained demonstrates the importance of developing analytical methodologies to capitalize on spatially explicit datasets from Earth Observation and Copernicus Services, in order to provide public institutions and administrations involved in mosquito and mosquito-borne pathogen surveillance with relevant information to achieve challenging goals in vector-borne disease control.
The escalation of natural and man-made disasters, exacerbated by climate change, has caused unprecedented social and economic losses for governments, businesses, and households around the world. Every year, natural disasters generate an average of US$150 billion in economic losses, directly impacting 200 million people. In the absence of appropriate Disaster Risk Financing (DRF) and management solutions, fragile and vulnerable countries face significant financial losses through their contingent liabilities and emergency response efforts, as well as long-term recovery and reconstruction costs.
DRF policies and instruments offer governments a potent tool to mitigate the large range of risks associated with natural and man-made disasters by improving financial preparedness and enabling resilient recoveries through ex-ante financial planning. DRF-powered analytics enable governments and policymakers to make risk-informed decisions to quantify, plan for, and predict risks with more certainty, and to design adequate financial response mechanisms accordingly. New technologies play a significant role in enhancing DRF analytics. Big Data (BD) and Earth Observation (EO) technologies offer information with unprecedented resolution and comprehensive coverage. Machine Learning (ML) and Artificial Intelligence (AI) algorithms can help process these vast amounts of data and provide near-real-time risk information and more accurate assessments. This information can ultimately improve the timeliness and efficiency of better-targeted financial responses to affected communities, businesses, and industries.
The World Bank Crisis and Disaster Risk Finance (CDRF) team supports governments through advisory services and risk financing solutions to strengthen their resilience to climate, crisis, and natural disaster shocks. The Crisis Risk Finance Analytics program (CRFA), founded in 2019, supports new and existing World Bank engagements related to innovative technology for improved risk finance applications. In particular, the program has integrated innovative data sources, including EO and BD technologies, as well as analytical methods (e.g., parametric product design, image-processing, AI) to improve risk financing and management.
To that end, the World Bank CDRF team has established a partnership with the European Space Agency's Centre for Earth Observation (ESA/ESRIN) to leverage such technological advances for improved pre-arranged financing. Established in 2019, the partnership focuses on leveraging remote sensing, online/social media/big data, and predictive analytics to support global-level identification of risks, national/sectoral diagnostics, and project-specific activities to enable better-informed and earlier financial responses to crises. A core objective is the development of timely, reliable risk metrics and triggers, as well as innovative approaches to assess overlapping risks in complex situations. As such, the joint partnership with ESA provides the missing link between technology and operations by delivering project-specific risk finance analytics. It focuses on supporting the CRFA program, whose ultimate objective lies in the scaling-up of risk financing operations in a sustainable, robust, and transparent manner.
This paper presents a selection of projects from the CRFA program as part of the World Bank work to apply innovative technology for risk finance applications. It describes the successes and challenges faced by the program to date, highlighting the potential of satellite imagery, Big Data, and advanced analytics techniques for DRF, as well as the implications of various governance and partnership models with the private and public sectors to bring some of these applications to scale.
The emergence of the concept of a “humanitarian-development-peace” nexus in the last decade has demonstrated the overlapping impact of what were once considered disparate issues. Climate-related shocks, protracted conflict, political instability, and structural inequality are all contributing to modern humanitarian crises, with the picture further complicated by COVID-19.
At the same time, rapidly developing technology means we have access to more data than ever before, allowing us to unravel this complexity. Satellites have the potential to inform and enhance humanitarian decision making and risk assessment through the provision of unique geospatial datasets and communications infrastructure. Currently, however, the satellite tools, equipment, and data are not always deployed by or distributed to those who need them the most.
Caribou Space, with the support of the UK Foreign, Commonwealth and Development Office (FCDO)-backed Humanitarian Innovation Hub, is leading a research initiative that explores the current state of play in the use of satellite applications in humanitarian emergencies, with a view to identifying catalytic initiatives that could accelerate the uptake and use of these technologies.
The objectives of the research are threefold:
1. Raise awareness of the potential use cases for satellites in humanitarian programming for a diverse global humanitarian community
2. Learn lessons from past experiences of developing and utilising satellite applications to anticipate, respond to and recover from humanitarian emergency events
3. Generate actionable recommendations on opportunities that could ensure that satellite applications are accessible to and appropriate for the humanitarian community.
Raise awareness
Our report sets out an analytical framework which identifies the main use cases for satellite technology within five domain areas: disasters, health emergencies, food insecurity, conflict and security, and population displacement.
We have profiled 500 satellite applications deployed across these domains to generate insights about the current state of play, e.g., the composition of the supply chain for satellite applications, the business models used to sustain existing applications, and the number of applications for different humanitarian use cases.
Learn Lessons
The research initiative has engaged a diverse community of humanitarian stakeholders in order to gather insights from the data providers and user community who have been involved in deploying and using satellite applications, and who can provide lessons on what has worked well and areas for improvement. This data gathering exercise has highlighted several barriers that may inhibit, hinder or delay increased use of satellite applications in the humanitarian sector.
Generate actionable recommendations
Caribou Space will be exploring a number of areas in which there could be an opportunity to use public and private financing, expertise and capacity to accelerate uptake of and investment into satellite applications for humanitarian assistance. The research will also give concrete recommendations on the key partnerships and collaboration models that offer the greatest potential for transformational impact.
This session will share the findings of the Beyond Borders research, which are expected to be finalised by April 2022. The session will also offer an early insight into potential future initiatives and will invite feedback and discussion on how the community can best collaborate to achieve positive humanitarian outcomes.
We believe this abstract aligns well with the LPS review criteria:
1. Degree of innovation: It is the most extensive profiling of satellite applications for humanitarian domains with over 500 entries.
2. Technical correctness and validation: The research process to profile the 500 satellite applications included extensive quality assurance checks from Caribou Space team members. Each profile had 15 information fields with specific definitions to ensure correctness and consistency.
3. Caribou Space sourced extensive external review to ensure the research and recommendations were valid. This included an Advisory Group of eight domain experts including ESA, World Bank, UK FCDO, UK Space Agency, UN OCHA, Columbia University, Humanitarian Open Street Map, and Global Partnership for Sustainable Development Data. It also included consultation and feedback workshops with the wider humanitarian and satellite communities.
4. Relevance: The research shows how satellite applications using EO, SatComms and GNSS apply to the key humanitarian domains of disasters, health emergencies, food insecurity, conflict and security, and population displacement. To ensure the research is relevant for both the space sector and humanitarian communities, it includes actionable recommendations for industry, policymakers, development organisations, and governments.
Since 2008, the European Space Agency (ESA) has been cooperating with International Financial Institutions (IFIs) to support International Development through the use of Earth Observation. The first partnership campaign, Eoworld (from 2008 to 2015), mainly focused on raising awareness on the potential and capabilities of EO services for development projects. Since then, the Earth Observation for Sustainable Development (EO4SD) initiative has demonstrated the potential of Earth Observation (EO) services through strategic cooperation activities including regional demonstrations and capacity building efforts. The Global Development Assistance (GDA) programme, formally set up by ESA member states in late 2019, is building on the success and lessons learnt from EO4SD to advance towards an operational mainstreaming of satellite EO in development.
Though Earth Observation is starting to be widely recognised as being able to support many aspects and sectors of Development Assistance, substantial evidence is still needed to understand to what extent such programmes are effective in mainstreaming and transferring EO into operational working processes. In order to shape future programmatic activities that will contribute to the acceptance and adoption of EO in the Development sector, Monitoring and Evaluation along with impact assessments have proven valuable to ensure the delivery of tangible results tailored to users’ needs, and to determine the inherent benefits and limitations of EO use for Development operations.
In this context, this presentation will outline the evolution of IFIs’ interest in EO products and services through the introduction of key metrics about IFI projects integrating EO technologies, in order to identify selected trends in the use of EO data in development activities. This study is based on the scanning of IFIs’ publicly available procurement notices and calls for tenders which explicitly mentioned EO components, starting from 2015. Both quantitative and qualitative data, such as the number of procurements and their allocated budgets, the region of implementation, and the main domain addressed (e.g., agriculture, climate, …), were extracted, compiled and analysed in order to create statistics. This study will be further complemented by the main conclusions of an impact analysis conducted for the EO4SD programme in order to identify potential positive and negative factors influencing IFIs’ demand for EO services and products. It is believed that analysing the IFIs’ demand for EO will help in better identifying key IFI priorities as well as the main strengths, challenges and opportunities for the uptake of EO products and services in the development sector. It will thus contribute to a higher agility and a stronger programmatic ownership of IFIs in advancing joint implementation of the GDA programme and, most importantly, in accelerating the systematic adoption of EO in the development sector.
Water is key to sustainable development, being critical for socio-economic development, energy and food production, and healthy ecosystems. Today water scarcity affects more than 40 percent of the world’s population and is projected to rise further, exacerbated by climate change. As the global population grows, there is an increasing need to balance the competing demands for water resources and to find more efficient ways to manage water supply. As the demand for freshwater increases, so does the importance of Integrated Water Resource Management (IWRM). A requirement for effective IWRM is access to reliable data and information on water-related issues. There is a growing awareness that Earth Observation (EO) data has the potential to serve these data needs, especially in the context of the Sustainable Development Goals (SDGs) and of International Financing Institutions (IFIs), whose mandate is to provide and catalyze investments fostering sustainable development.
This presentation will provide examples on how Earth Observation is supporting development organizations and national agencies with better and more timely information to report in response to the global water agenda, to support more evidence-based water management decisions and to improve water governance with compliance monitoring.
More specifically, the presentation will review how UNEP is using global Earth Observation data and open information sharing for SDG 6.6.1 progress reporting and for improving the understanding of threats and solutions to drive actions for the protection and restoration of freshwater ecosystems.
At the local/national level the presentation will focus on how EO data can be used to support water management decisions and water governance in Zimbabwe and Malawi.
In Zimbabwe, small water bodies and reservoirs play a vital role in the food and energy security of the region, and yet existing national inventories are incomplete, with the surface area (and water levels) being available for only a few hundred out of an estimated 10,000 small water bodies and reservoirs. The presentation will show how Sentinel data can help the authorities get a complete picture of the surface water area and the water storage changes of water bodies in Zimbabwe, most of which fall within mixed pixels at the Landsat resolution and hence go undetected by current global surface water products such as the European Commission Joint Research Centre’s Global Surface Water Explorer (JRC-GSWE).
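To illustrate the kind of processing involved, the following minimal Python sketch derives a water mask from Sentinel-2 green (B03) and NIR (B08) reflectance using the McFeeters NDWI and reports the detected surface area. The file names and the zero threshold are illustrative assumptions, not the operational implementation.

```python
# Minimal sketch: mapping a small water body's surface area from Sentinel-2.
# Band file names and the NDWI threshold are illustrative assumptions.
import numpy as np
import rasterio

with rasterio.open("S2_B03_10m.tif") as green_src, \
     rasterio.open("S2_B08_10m.tif") as nir_src:
    green = green_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")
    # Pixel area from the affine transform (10 m x 10 m for these bands).
    pixel_area_m2 = abs(green_src.transform.a * green_src.transform.e)

# McFeeters NDWI: positive values generally indicate open water.
ndwi = (green - nir) / np.maximum(green + nir, 1e-6)
water_mask = ndwi > 0.0

surface_area_ha = water_mask.sum() * pixel_area_m2 / 10_000
print(f"Detected water surface: {surface_area_ha:.1f} ha")
```

At 10 m resolution, even a one-hectare pond spans about 100 pixels, which is why such small reservoirs become detectable where 30 m Landsat-based products see only mixed pixels.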
In Malawi, water resources are under increasing strain, which is why the national water resource authority recently developed and implemented a new water licensing system. Yet, not having a clear overview of the large water users, especially irrigators, is a challenge the authorities face in properly enforcing water licensing. This is where Earth Observation can help, and the final example in this presentation will show how free and open satellite data can be used to map and monitor the extent of irrigation at national scale, to compare it with the actual licensed area and to identify non-licensed water usage.
As a closing remark, the presentation will also review some of the key challenges and bottlenecks which impede the wider use of EO in international development assistance, especially at national scale with clients who are financially constrained and short of ICT resources and technical expertise.
Remote Sensing is frequently used in agriculture. It is a valid instrument for farmers, both for monitoring large areas and for specific crop analysis. Precise and detailed information is the key to carrying out targeted cultivation interventions, identifying the best harvest times, and improving food quality while maximizing profitability. Based on requirements collected from different user types, ranging from farmers and producers’ associations to insurance companies, a new branch dedicated to agriculture has been added to the Rheticus brand.
Rheticus® Agriculture is a satellite-based service designed to help farmers in daily crop production tasks. This tool makes it possible to reduce parcel variability. By generating a land cover map, it allows farm management to be optimized at the local, regional and national levels. Further, by using biomass health indicators (vegetation index, reflectance index, moisture index, leaf area index, water index), the service can also work as a diagnostic tool and serve as an early warning system, allowing the agricultural community to detect and counter potential problems before they spread widely and negatively impact crop productivity. Through an intuitive and easy-to-use dashboard, users get access to dynamic maps and reports that make it easy to assess the field’s vegetative vigour and, therefore, the food ripening level.
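As an illustration of one such biomass health indicator, the short Python sketch below computes NDVI from red and near-infrared reflectance; the arrays and the drop-alert idea are hypothetical examples, not the Rheticus implementation.

```python
# Illustrative NDVI computation; input reflectance arrays are hypothetical.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); higher values indicate denser,
    healthier vegetation, which dashboards of this kind track over time."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Toy 2x2 parcel: top row vigorous vegetation, bottom row stressed/bare.
red = np.array([[0.08, 0.10], [0.20, 0.25]])
nir = np.array([[0.45, 0.50], [0.25, 0.28]])
print(ndvi(red, nir))

# An early-warning rule could flag pixels whose NDVI drops sharply between
# two acquisitions, e.g. ndvi_t2 - ndvi_t1 < -0.15 (threshold is illustrative).
```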
Rheticus Agriculture has recently been used for the “Support to Water and Food Security Planning and Investments in Indonesia through Earth Observation Services” project funded by the Asian Development Bank (ADB). The ADB is supporting the government in the preparation of a series of investments for flood risk management, agriculture, and aquaculture, promoting the use of Remote Sensing for detailed and efficient geospatial analysis and planning.
The project, led by Terradue, involves experts in different domains: Terradue for landslides, LIST (Luxembourg Institute of Science and Technology) for floods, NOC (UK National Oceanography Centre) for aquaculture, and Planetek for agriculture, building stability, and water quality. The project aims to fulfil the request for support for disaster resilience planning, environmental change, and capacity building in the following thematic areas: subsidence, flooding, stability of buildings and infrastructures, crop and water use, and inland and marine aquaculture.
Concerning agriculture, products for the detection of changes in vegetation, crops, and rural infrastructure, including land cover and land use maps, geo-analytics, and biomass health indicators over time, were implemented with innovative algorithms and delivered through the Rheticus Agriculture portal. Products cover the areas of North Sumatra, Java, and East Indonesia.
Rheticus® Agriculture has been used as well for another project funded by ESA, named CRITE (Coffee Rehabilitation In Timor-Leste). The project aims to provide supporting information and tools to the Timor-Leste authorities, who are implementing an initiative, assisted by the Asian Development Bank, for rehabilitating and improving coffee cultivation in the country. CRITE provides health status information on the tree component of the coffee plantation systems, i.e. the shade trees, as their status has an important impact on understorey coffee health and productivity. Good shade-tree health indeed allows more efficient resource acquisition thanks to complementarity and facilitation effects. The Rheticus® web platform allows easy consultation of the vegetation health condition analytics for each administrative district by period of observation. It helps identify potential critical areas of poor productivity and set priority actions.
Finally, another important case of crop and forest monitoring carried out by Planetek is a project within the ESA EO CLINIC initiative, named “Ecosystem-Based Management in River Basins in the Philippines”. The objective was to demonstrate the feasibility of frequent mapping of forest cover in the Philippines based on EO data and to support the analysis of the dynamics of forest losses.
The presentation describes the Geoinformational Support for Integrated River Basins Management (Geo4IRBM) project, financed by the European Space Agency and implemented by a Polish consortium of Geosystems Polska, Topologic Consulting and the Institute of Geodesy and Cartography. The main goal of the Geo4IRBM project was to rapidly develop a number of geoinformation products supporting Asian Development Bank experts in defining the scope of undertakings in the field of water resource management and flood protection in Indonesia.
The service area covered two catchments in the central part of Java with a total area of about 20,000 km². A number of products and services based on satellite monitoring data have been developed for this area, with the European Copernicus satellite monitoring programme as the main data source. The developed products include: land cover and land cover change maps, cropping intensity maps, coastal change maps, soil erosion potential maps, land subsidence monitoring, sedimentation rate analysis in selected dam reservoirs, as well as a surface water monitoring system with elements of water balance.
The delivered products have been used operationally in hydraulic modeling of the catchment area to assess flood risk, delimit areas at risk of intensive soil erosion and landslides, estimate water deficits in agriculture, and prioritize investments in maintaining and developing hydrological infrastructure. The project was supplemented with training sessions for representatives of the Indonesian administration.
90% of all disasters in the last 20 years were climate-related. Extreme weather events are increasing in frequency and severity, causing huge socio-economic losses. This changing climate requires us to do more to avert, minimize and address the growing threat of loss and damage. Efficient tools to react faster in order to better adapt to current events and better prevent future impacts are needed.
Satellite data is a big data source and should be treated as such. We believe that in order to quickly respond and adapt to the vast diversity of use cases, a modular, automated and flexible system is key. At EarthPulse we have developed a full environment of independent AI modules that can be combined and quickly tailored to a large number of different cases.
This session presents EarthPulse’s approach to making satellite analytics accessible, easy and useful for all (expert and non-expert users) as a strategic tool for climate change adaptation, through its AI4EO approach.
The applied AI models, together with the EarthPulse core technology, process satellite data following an end-to-end approach built on a modular basis, optimising the processing time of these big data sources and allowing for an agile, easily adaptable solution that decreases cost and complexity.
Furthermore, we combine the predictions of our AI4EO models with other data sources to create aggregated indicators that provide additional value to users, so they can make better data-driven and informed decisions.
As a demonstration, we will showcase how our Impact Pulse works, particularly for floods (Flooding Impact Pulse), presenting the results of its application in Cambodia and Timor-Leste as part of a Proof of Concept developed for the Asian Development Bank (ADB).
The Flooding Impact Pulse is an aggregated indicator combining satellite data with open datasets and generated by deep learning algorithms. It quantifies the observed damage caused by floods in a scalable way, determining the economic, social and environmental effects of such events.
In the concrete case for the ADB, we evaluated the damage caused by the storms and natural floods of October 2020 in the south of Banteay Meanchey province, Cambodia: 105,656 people affected and 21,471 displaced; 107 schools affected, involving 23,858 students; and 71 roads, 3 health centres and 47% of crops affected in the region.
Monitoring the situation over the three months following the event also made it possible to identify which areas could recover and which could not.
The methodology was as follows:
1. First, the water extent is evaluated by generating flood masks using neural networks over Earth Observation images, including images from different sensors (optical, radar) and resolutions.
2. Then, we map the different assets using our Deep Learning (DL) algorithms, for example to map the crops in the area. Other data such as population, census or road data are gathered from open data sources and cross-linked with satellite data.
3. Finally, the impact on these key assets is evaluated by applying classification and segmentation models, quantifying crops destroyed, infrastructure damaged (roads and education and health centres), and people affected, as well as how they recovered in the weeks following the storm events (a minimal sketch of this aggregation step follows this list).
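As referenced in step 3, the following Python snippet is a minimal sketch of the aggregation step: it overlays a binary flood mask with co-registered asset layers to count affected people and crops. The layer names, grid and numbers are purely illustrative and do not reflect the actual EarthPulse pipeline.

```python
# Illustrative sketch of impact aggregation: overlaying a binary flood mask
# with asset layers to count affected people and crops. All arrays are
# assumed co-registered on the same grid; names and values are hypothetical.
import numpy as np

def impact_summary(flood_mask: np.ndarray,
                   population: np.ndarray,
                   crop_mask: np.ndarray,
                   pixel_area_ha: float) -> dict:
    """Aggregate flood impact over co-registered raster layers."""
    flooded = flood_mask.astype(bool)
    return {
        "people_affected": float(population[flooded].sum()),
        "crop_area_affected_ha": float(crop_mask[flooded].sum() * pixel_area_ha),
        "crop_share_affected": float(
            crop_mask[flooded].sum() / max(crop_mask.sum(), 1)),
    }

# Toy example on a 3x3 grid with 1 ha pixels.
flood = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
pop = np.full((3, 3), 100.0)                      # 100 people per pixel
crops = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0]])
print(impact_summary(flood, pop, crops, pixel_area_ha=1.0))
```

Swapping the flood mask for a burned-area mask, or adding further layers such as GDP, changes the indicator without changing the aggregation logic, which is the adaptability discussed next.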
It is of major importance to note how changing the contents of each step (fires instead of flooding, or adding other data such as GDP) can quickly build a totally different Impact or Vulnerability Pulse, such as “access to urban amenities for vulnerable populations” or “food chain security”. The main advantage lies in this adaptability.
With this PoC we have validated how quickly we can map large affected areas and quantify the largest losses. The plug-and-play nature of our solution makes it adaptable to different user requirements, empowering users to extract EO value through AI effortlessly and introducing a powerful new tool into their current work practices.
To empower users in the use of this new tool, we have also participated in the Learnings for Economics and Policy Series of ADB, encouraging discussion in a workshop with more than 70 ADB staff, consultants and external counterparts (https://www.youtube.com/watch?v=fMfE6Vg3pXo).
Beyond data and technology, the discussion with the ADB team allowed us to identify concrete user requirements and to clarify expected outcomes and concerns, confirming the key role of user empowerment in the uptake of EO-based solutions.
The underlying causes of coastal zone degradation in Africa are poverty, inequality, poor governance, and population growth. These factors drive the demand for, for instance, fuel wood from important coastal ecosystems such as mangroves. When such ecosystems are degraded, they lose their function as fish spawning grounds, as stores of above- and below-ground carbon, and as buffers against coastal erosion. Coastal erosion due to climate-change-induced sea level rise is estimated to cost coastal countries in West Africa between 2.5 and 5.4% of their GDP (World Bank estimate for 2017). Restoring and conserving coastal ecosystems can be effectively supported by using Earth Observation (EO) information, especially information on coastal erosion and accretion and explicit land use and land cover maps in connection with these morphological changes. Regarding mangroves as particularly carbon- and biodiversity-rich coastal ecosystems, EO can be used to monitor the status of restoration actions and the health of existing mangrove stands. Identifying intact mangrove stands would help planners prioritize conservation actions and implement alternative and sustainable livelihood activities, such as beekeeping. The main objective of this work is to show how ESA Sentinel-2-based coastal morphology dynamics and other geospatial information can be integrated for coastal risk zone monitoring and to guide coastal ecosystem restoration efforts. Essentially, we aim to use the EO-based information to enhance conservation efforts for coastal ecosystems and guide the establishment of sustainable livelihood options for coastal communities throughout Africa.
For Togo, Senegal and Benin in West Africa and Zanzibar and Kenya in East Africa, a web-based application was developed that uses cloud-processed L1C Sentinel-2 time-series data to visualize and monitor the severity and occurrence of coastal erosion and accretion. For the given years and seasons, all available Level 1C top-of-atmosphere imagery from 2015 to 2020 was ingested and processed (“back-end” processing pipeline). Effectively, Tasseled Cap (TC) wetness, brightness and greenness spectral features were computed as medians for every quarter (i.e., 3 months) and every year. A spectral Change Vector Analysis (CVA) procedure was subsequently implemented that uses directions and magnitudes (“severity”) of change between the TC timeline features. TC CVA-based changes were computed both between quarters (intra-annually) and between years (inter-annually). Using thresholds tested on the CVA data and the various seasonal and annual change data sets, “long-term” erosion and accretion magnitudes were separated from seasonal or tidal changes. Geographical data on population density, important infrastructure such as hospitals, roads and schools, and spatially explicit land cover were incorporated to identify areas of high exposure and vulnerability. Distance layers from hospitals, schools, and roads were computed and spatially related to the erosion zones. To specifically monitor and visualize mangrove stands for every year, the TC features were combined with digital elevation and distance-from-water-bodies information (Sentinel-1-based). Using this method, it was possible to identify intact and degraded mangrove stands in all focus countries. For Zanzibar and Gazi in Kenya, the locations of degraded mangrove stands and of coastal erosion “hot spot” priority sites, in terms of where sedimentation stabilization is most needed, were integrated to help identify suitable mangrove restoration sites. Expert knowledge gathered through online consultations was used to ascertain the accuracy of the coastal morphology results and the usefulness of the online applications as such.
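To make the CVA step concrete, the following Python sketch computes per-pixel change magnitudes and directions between two epochs of Tasseled Cap features; the array layout, the brightness/wetness interpretation and the threshold are illustrative assumptions rather than the operational parameters used in the study.

```python
# Hedged sketch of Change Vector Analysis (CVA) between two epochs of
# Tasseled Cap medians (brightness, greenness, wetness). Thresholds and the
# physical interpretation of directions are illustrative assumptions.
import numpy as np

def cva(tc_epoch1: np.ndarray, tc_epoch2: np.ndarray,
        magnitude_threshold: float):
    """tc_epoch*: arrays of shape (3, rows, cols) holding the TC features.

    Returns the per-pixel change magnitude, the direction angle in the
    brightness/wetness plane, and a boolean change mask."""
    delta = tc_epoch2 - tc_epoch1                  # change vector per pixel
    magnitude = np.sqrt((delta ** 2).sum(axis=0))  # Euclidean "severity"
    # Direction in the brightness (index 0) / wetness (index 2) plane:
    # shifts towards wetness can suggest erosion, towards brightness accretion.
    direction = np.arctan2(delta[2], delta[0])
    return magnitude, direction, magnitude > magnitude_threshold

# Toy usage with random epochs on a 100x100 grid.
rng = np.random.default_rng(0)
t1, t2 = rng.normal(size=(3, 100, 100)), rng.normal(size=(3, 100, 100))
mag, ang, changed = cva(t1, t2, magnitude_threshold=2.0)
print(changed.mean())   # fraction of pixels flagged as changed
```

Running the same comparison between consecutive quarters versus between years is what lets seasonal or tidal signals be separated from long-term erosion and accretion.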
The results on coastal morphology and mangrove extent are available as mobile applications for each region, e.g., for West Africa: https://rssgeetesting.users.earthengine.app/view/modesapp. In all African regions, it was possible to discern intra-annual (seasonal) from inter-annual coastal morphology changes. In West Africa, the integration of coastal morphology with vulnerability risk components proved to be of utmost importance for identifying risk areas relevant for coastal protection and planning. In East Africa, the integrative web-based information, specifically on coastal erosion severity, turned out to be important to effectively guide shoreline protection schemes such as ongoing mangrove restoration actions (including “enrichment” planting). Expert assessment for West Africa showed that more than 95% of the areas identified as being highly affected by long-term coastal erosion and accretion were accurately identified.
The results show the usefulness of inter-seasonal information from Sentinel-2 for tracking coastal morphology, as an important baseline for multiple use cases. In the future, we anticipate using the readily available geospatial mangrove information to help establish beekeeping in these ecosystems. Beekeeping as a sustainable livelihood option would provide an incentive to conserve vital coastal ecosystems such as mangroves.
Earth observation (EO) and geospatial information (GI) technologies are rapidly being adopted in the application domain of humanitarian action support. Technically, nearly all assets of remote sensing apply in such demanding scenarios. However, technical maturity is only one aspect: generating high-quality geospatial information products requires consideration of various technical, conceptual and organisational levels. Protracted crises and large-scale population displacements require quick, reliable and up-to-date information in various fields of humanitarian assistance, including mission planning, resource deployment and monitoring, nutrition and vaccination campaigns, camp plotting, damage assessment, etc. Responding to the needs of several NGOs active in this domain, the Department of Geoinformatics (Z_GIS) at the University of Salzburg has built a strong record in information service development in the humanitarian sector, in collaboration with Médecins Sans Frontières (MSF). After ten years of intensive R&D work the service was taken over, and is since operationally offered, by the spin-off company Spatial Services Ltd.
In its operational component, the focus of this service is on EO-based population estimation from automated dwelling counts in satellite imagery, complemented with other service elements regarding land cover, surface water, and other environmental resources. The benefits of an integrated use of EO and GI technologies, as compared to conventional field mapping, have made it a critical asset in decision making for humanitarian professionals. At the same time, considering the rapid growth of technology in this field, there is also an urgent need to investigate fundamental research questions to provide a solid basis for the outcomes at the applied level. The Christian Doppler Laboratory (CDL) GEOHUM, recently established at the University of Salzburg, acts as a technology booster to enhance technical and organisational capacities matching specific requirements from humanitarian NGOs. Specifically, the lab strives to equip MSF, in a public-private partnership, with advanced technologies and tools to make their missions more efficient. By leveraging cutting-edge technological developments, innovative solutions are created together with MSF’s GIS Centre for more targeted humanitarian action. The main results are AI-supported information products to optimize logistics and mission planning in conflict and humanitarian disaster situations.
Three key components build the research framework of the lab, following the information extraction, data assimilation, and communication and delivery workflow. (1) “Img2Info” focuses on information extraction from imagery, including the integration of deep learning methods and expert systems for advanced dwelling extraction, the semi-automated generation of 3D building footprints, SAR-based high-resolution mapping and urban tomography. (2) “ConSense” assimilates existing topic-related or statistical information, e.g. global data sets, with image-extracted information to derive spatially aggregated indicators, e.g. for urban area characterisation. (3) “Info2Comm” investigates effective delivery and communication of complex geospatial information, including advanced geovisualisation techniques for absences and destructions, and demonstrates reproducible research as well as ethical (and legal) aspects within this data-sensitive field of application. The lab fosters the co-creation of novel test and benchmark environments for deep learning models and hybrid AI solutions, novel concepts in space-time global reference systems, as well as a data assimilation toolbox leveraging the emerging big EO data paradigm. This enhances the automation of critical technical elements of the entire information extraction, assimilation and delivery workflow, which flows directly into the service evolution of the operational information service.
The European Space Agency (ESA) has closely partnered with the development finance sector for the last 13 years through different initiatives such as Eoworld (2008-2015), Earth Observation for Sustainable Development (EO4SD, 2016-2023), and now the Global Development Assistance (GDA) programme (2020-2025). Through these initiatives, ESA’s main goal has been to add value to the activities of IFIs (International Financial Institutions) in the development assistance context through Earth Observation (EO) data, in order to deliver positive impact in the developing countries involved, as well as to position the European EO sector, which provides the necessary skills and technical expertise.
The EO4SD initiative aims to match up space EO data and development finance, involving the private sector by creating consortia of companies to deliver integrated EO products to developing countries through IFIs, tackling today’s most important and urgent global problems, including sustainable agriculture, urban development, water resources management, climate resilience, etc. Most of the EO4SD thematic clusters have by now concluded, with the exception of the one focused on forest management, which remains active until 2023.
Large-scale developments and advancements as well as high and positive impact have been reported as outcomes of these dedicated activities lasting three to four years, but what do “large”, “high” or even “positive” mean in this context? The presented analysis is based on measuring these concepts in an international development framework. Doing so is of crucial importance in order to optimize service provision to developing countries and to analyze how these improvements impact the private sector, the IFIs, and the developing countries themselves. This analysis of the added value of services provided through the EO4SD initiative compares all the thematic clusters through different key indicators and variables, closing the feedback loop from planning to outcome evaluation. The ambition is thereby to offer insight into the value EO4SD has been providing to public sector stakeholders in developing countries, as well as into the improved positioning of the European EO service sector in addressing demand channeled through IFI teams.
Significant work has been done on land stability monitoring in the Palu region using various SAR Interferometry (InSAR) processing approaches, such as time-series InSAR analysis with the P-SBAS and SNAPPING PSI methods and basic InSAR processing with the SNAP InSAR and DIAPASON methods, applied to Sentinel-1 SAR data on the Geohazards TEP platform. This project was undertaken to support the rehabilitation and reconstruction process in Central Sulawesi after the devastating 2018 earthquake and tsunami, under the “TA-9554 REG: Southeast Asia Urban Services Facility (Indonesia: Support for Emergency Assistance on Rehabilitation and Reconstruction in Central Sulawesi), Output 2: Monitoring and Evaluation of Reconstruction Efforts Enhanced” project funded by the Asian Development Bank (ADB).
In general, comparison of the InSAR processing results and of the land stability mapping product derived from the SNAPPING PSI velocity data against GNSS data, field data and other land stability map products shows good agreement. The InSAR data analysis and processing results were shared with Indonesian counterparts in three capacity-building sessions during FY2021.
In response to the COVID-19 pandemic, governments around the world have enacted widespread physical distancing measures to prevent and control virus transmission. Quantitative, spatially disaggregated information about the population-scale shifts in activity that have resulted from these measures is extremely scarce, particularly for regions outside of Europe and the US. Public health institutions have limited region-specific data about how control measures have affected societal behavior, patterns of exposure, and infection outcomes, or about the recovery patterns of urban areas.
The Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS DNB), a new-generation space-borne low-light imager, has the potential to track changes in human activity, but that capability has not yet been applied to a cross-country analysis of COVID-19 responses. Here, we examine multi-year (2015–2020) daily time-series data derived from NASA’s Black Marble VIIRS nighttime lights product (VNP46A2) across thousands of cities in multiple countries to understand how urban activity has changed throughout the pandemic. We combine these data with timelines of COVID-19 national restrictions to assess community adherence to control measures and the recovery in activity once measures are lifted. We also introduce a global database of COVID-19 urban activity changes that relies on machine learning to identify changes in nighttime lights trends across the last two years. Nighttime lights capture the onset of national curfews and lockdowns well, but also expose the inconsistent response to control measures both across and within countries. Our findings show how satellite measurements can aid in assessing the public response to physical distancing policies, especially in fragile and data-sparse regions, and can map where and how quickly urban activity is bouncing back to pre-pandemic levels.
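As a hedged illustration of this kind of analysis, the Python sketch below compares a city's 2020 daily VNP46A2 radiance against its 2015-2019 day-of-year baseline and flags sustained drops. The CSV layout, the 14-day window and the 20% threshold are assumptions for illustration, not the study's actual machine learning method.

```python
# Minimal sketch of trend-shift detection in nighttime lights: comparing a
# city's 2020 daily radiance against a multi-year seasonal baseline.
# The input CSV layout and all thresholds are illustrative assumptions.
import pandas as pd

ntl = pd.read_csv("city_vnp46a2_daily.csv", parse_dates=["date"],
                  index_col="date")["radiance"]

pre = ntl.loc["2015":"2019"]
baseline = pre.groupby(pre.index.dayofyear).median()   # seasonal baseline

observed = ntl.loc["2020"].rolling(14, center=True).median()
expected = baseline.reindex(observed.index.dayofyear).to_numpy()

anomaly = observed / expected - 1.0        # relative change vs. baseline
lockdown_like = anomaly < -0.20            # sustained >20% drop in radiance
print(anomaly[lockdown_like].dropna().head())
```

The same anomaly series, tracked forward in time, indicates when radiance returns to its baseline, i.e. how quickly activity bounces back after restrictions are lifted.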
What would be better than using space technology to get a global understanding of our living planet?
Applications derived from Earth observation (EO) satellites can bring socioeconomic benefits in the context of health and economic crises. During the COVID-19 pandemic, several commercial value-added products were developed to address new socioeconomic challenges. In this paper, we present innovative value-added products derived from an EO satellite constellation which collects night imagery, video from space, and hyperspectral imagery.
The NightVision & Video Constellation (JL1-SP & JL1-GF satellites), developed by CGSTL and commercialized by HEAD, is currently the only operational constellation of its kind; it consists of nine on-orbit satellites offering true-color night imagery at the 1 m level and color video from space. It is a multi-channel, radiometrically calibrated satellite constellation designed to detect building and street lighting. During the COVID-19 lockdown period in 2020, night imagery collected from space allowed provincial governments in China to carry out lockdown control by measuring the brightness of urban building areas, industrial areas and commercial centers which were supposed to be closed. Other use cases include infrastructure applications that use night imagery to control the usage and distribution of the street light network. The brightness of street lights can be measured from night imagery with a light intensity algorithm, enabling better control of a city's electricity consumption. In another use case, night imagery acquired during power outages enables daily monitoring of the affected area with the nine on-orbit satellites.
This constellation offers color video from space at 1 m resolution up to three times per day. This paper will present a few video demonstrations with use cases such as vehicle speed measurement and ship detection. A dedicated algorithm was developed for traffic management by measuring vehicle speed. Given a default authorized speed, the algorithm distinguishes three categories of vehicles: driving below, at, or above the authorized speed.
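A simple way to picture such an algorithm is shown in the Python sketch below: a vehicle's pixel displacement between two co-registered frames, scaled by the ground sample distance and the frame interval, yields its speed, which is then classified against an authorized limit. All values, function names and the tolerance are illustrative assumptions, not the operational algorithm.

```python
# Illustrative speed estimation from satellite video frames; positions,
# GSD, frame interval and tolerance are hypothetical assumptions.

def vehicle_speed_kmh(pos_frame1, pos_frame2, gsd_m: float, dt_s: float) -> float:
    """Positions are (row, col) pixel coordinates in two co-registered frames."""
    dr = pos_frame2[0] - pos_frame1[0]
    dc = pos_frame2[1] - pos_frame1[1]
    displacement_m = (dr ** 2 + dc ** 2) ** 0.5 * gsd_m
    return displacement_m / dt_s * 3.6      # m/s -> km/h

def classify(speed_kmh: float, authorized_kmh: float, tolerance: float = 0.1) -> str:
    """Three categories: below, accepted (within tolerance), or above."""
    if speed_kmh < authorized_kmh * (1 - tolerance):
        return "below"
    if speed_kmh > authorized_kmh * (1 + tolerance):
        return "above"
    return "accepted"

# Example: a 12-pixel displacement at 1 m GSD over 0.5 s -> 86.4 km/h.
s = vehicle_speed_kmh((100, 40), (100, 52), gsd_m=1.0, dt_s=0.5)
print(f"{s:.1f} km/h -> {classify(s, authorized_kmh=80)}")
```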
The HyperScan constellation, consisting of two identical on-orbit satellites (JL1-GP1), offers 25 bands at 5 m, 10 m and 20 m resolution and provides operational imagery for applications such as vegetation assessment and disaster management, thanks to spectral characteristics such as the red-edge spectrum and sensitivity to chlorophyll and biomass, as well as smoke penetration, fire point recognition, and cloud and snow discrimination capabilities. The on-board AI system provides fire point recognition, cloud detection and ship recognition functionalities.
1. Introduction
Gathering information about planet Earth's environmental and social systems is essential to understanding natural and human-induced changes. As we live in an interconnected world, where boundaries are more political than physical, scientific knowledge provides crucial information for decision making, specialized training, guidance for the population in general, and technological or informational updates in government and private systems.
In practice, science does not only improve education: it improves people's lives, reduces various risks, finds more efficient and effective ways to perform certain tasks, and guides society when we face new challenges to humankind, such as the development of a vaccine during the COVID-19 pandemic. Given the great volume of generated data, produced at high velocity and presenting an impressive variety of elements to support scientific projects (Laney, 2001), Remote Sensing data can also be classified as Big Data. In this manner, it presents at the same time an incredible opportunity to understand complex systems and extract important information, and a considerable challenge to access and process the data and retrieve the crucial information.
In this context, Knowledge Discovery in Databases (KDD) (Fayyad et al., 1996) is an essential process that connects raw data and the extraction of useful information. Even though KDD is directly or indirectly used in Earth Observation (EO) Science, the steps required for its development, communication and application remain nebulous, especially for those with little or no knowledge of data science. The aim of this work is to present a complete guide to developing and applying EO Science, from Remote Sensing data acquisition to practical knowledge application.
2. Practical Guide
The developed guide was designed around three main procedures (Figure 1): i) Remote Sensing: Data Acquisition; ii) Knowledge Discovery in Databases; iii) Earth Observation Science Flowchart.
2.1. Remote Sensing: Data Acquisition and Databases
The process of detecting and monitoring physical targets on Earth’s surface at a distance, through reflected and emitted radiation, is known as Remote Sensing (RS). Typically, RS sensors are on board Earth Observation satellites, which scan the target area. Once the data are collected by the sensors, they are electronically transmitted to Earth by means of a Ground Receiving Station (GRS). After the required processing, the data are stored in databases and distributed through RS image catalogs.
2.2. Knowledge Discovery in Databases Framework
The process of discovering useful knowledge from datasets is known as KDD (Fayyad et al., 1996). Considered a multidisciplinary procedure, KDD is composed mainly of five steps: i) data selection; ii) preprocessing; iii) data transformation; iv) data mining; v) interpretation/evaluation.
According to the goal of the research, databases are selected and preprocessed, with steps such as cleaning the data, removing noise, identifying duplicates and handling missing values. This is especially true for RS time series (sequences of observations recorded over time).
The processed data are then reduced and transformed according to the aim of the project, for instance through data normalization, aggregation, and standardization. After that, data mining methods can be applied to the transformed data and evaluated in an exploratory analysis. Among the data mining methods, we mention classification, regression and clustering here. When necessary, previous steps can be repeated one or more times for further iteration.
Finally, with the mined patterns, knowledge can be discovered in the databases. Although the whole process is important, much of the attention in the literature has been given to the data mining step. A compact illustration of these steps follows.
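The Python sketch below illustrates the five KDD steps on a synthetic remote-sensing-like dataset, from selection through preprocessing and transformation to mining and an exploratory evaluation. The data and all parameter choices are toy assumptions, not a prescription for real EO workflows.

```python
# Hedged, compact illustration of the five KDD steps on synthetic data
# resembling a remote sensing time series; everything here is a toy example.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)

# i) data selection: 200 pixels x 24 time steps of a synthetic vegetation index
series = np.concatenate([
    0.7 + 0.1 * rng.standard_normal((100, 24)),   # "forest-like" pixels
    0.3 + 0.1 * rng.standard_normal((100, 24)),   # "cropland-like" pixels
])

# ii) preprocessing: handle missing values (replace NaNs with the row mean)
series = np.where(np.isnan(series),
                  np.nanmean(series, axis=1, keepdims=True), series)

# iii) transformation: standardize features
X = StandardScaler().fit_transform(series)

# iv) data mining: unsupervised clustering
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# v) interpretation/evaluation: cluster quality as an exploratory check
print("silhouette:", silhouette_score(X, labels))
```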
2.3. Earth Observation Science Flowchart
The EO Science flowchart for RS Big Data is composed of five main steps: i) statement of the problem; ii) data acquisition and databases; iii) KDD; iv) communication; v) knowledge application.
The statement of the problem is the first step, in order to set objectives and develop scientific hypotheses. After that, we acquire the necessary data, for example through RS image catalogs, followed by the application of KDD to extract useful information for the research. In the communication step, also known as public awareness, it is common for researchers to communicate their work only in scientific journals. Nonetheless, beyond the scientific community and the encouragement of further studies, it is also possible to share research results with the general public by means of social media, magazines, interviews, lectures and talks, reports and manuals, or executive summaries for decision-makers. Reporting the knowledge also requires adapting the language used to the target audience.
While many scientists focus on the whole framework up to communication through journals, a smaller group, for different reasons, is also concerned with knowledge application, the last step of the framework. To do so, scientists can engage with other public and private entities, shareholders and stakeholders at different hierarchical levels and scales, aiming at concrete strategies, decision making and actions for urban and environmental management, such as: capacity building of public agents, orientation toward best evidence-based practices, basic education, the updating of action protocols, improvement of governance practices, and awareness raising, among many others.
Scientific projects with knowledge application as a finish line should also consider the inclusion of shareholders’ and stakeholders’ feedback across all EO Science Flowchart steps, from the research design (in order to add expertise and real-world know-how) to the research achievements (to improve the impact of scientific results).
In cases where partnerships are already consolidated, online and regularly updated portals can also support decision makers, such as the Global Earth Observation System of Systems (GEOSS) and the Brazilian Fire Monitoring Portal (Projeto Queimadas), which provide near-real-time environmental data, democratizing access to satellite data.
3. Conclusion
After the statement of the problem, access to databases and the application of KDD, researchers often communicate the results found in scientific journals, but rarely translate the knowledge to other entities or potentially interested groups. In many scientific institutions, this gap could be filled, for instance, by encouraging scientists to communicate their work on different platforms and by hiring science communicators dedicated to this task.
Furthermore, an even smaller group of researchers is concerned with knowledge application, which can assume different forms, such as the aforementioned strategies, decision making and actions. In this context, we developed this guide not only to support scientists in comprehending the whole process of knowledge production and application, but also to encourage the creation of more solid bridges between EO Science and society at large, in which communication and knowledge application play crucial roles. After all, in an interconnected world, we should all benefit from scientific advances, not only scientists.
Finally, we would like to draw attention to the importance of professors in guiding young scientists through the EO Science flowchart. Climate change challenges will demand even more from science in the near future, since science is the only way to ensure rapid responses to complex problems through effective public policies.
4. References
Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery: An overview. Advances in Knowledge Discovery and Data Mining, 1-30.
Laney, D. (2001). 3D data management: Controlling data volume, velocity and variety. META Group Research Note, 6(70), 1.
This study examines the state of the art and the maturity of adoption of satellite-based applications in the health domain. Moreover, it examines in depth the main drivers of and barriers to their effective development.
Living in a more globalized, intertwined, and technologically advanced world has opened the door to a more digital healthcare system able to connect many actors, to reach remote locations, and to grant accessibility to a wider portion of the global population. The Covid-19 pandemic is making this transition even more urgent and relevant.
Moreover, space technologies can be a valuable asset in tackling future population health challenges. According to the United Nations Office for Outer Space Affairs (UNOOSA) and the World Health Organization (WHO), Earth observation satellites, through continuous monitoring of the environment and atmosphere, may favor the identification of risk factors related to the onset of diseases in the geographical areas analyzed. Satellite telecommunications technologies can improve the use of services related to telemedicine, tele-health, tele-epidemiology and emergency management. Navigation technologies can improve the tracking of rescue routes, optimize assistance services and contribute to the location and tracking of infections across the territory. Finally, space has always been and will always be a privileged place to deepen knowledge of the human body and of the effects of protocols or technologies to cure it, such as the study of astronaut osteoporosis.
Recent years have been marked by the rise of new stakeholders along the space economy value chain, especially in Europe. The urgent need for digital solutions to develop value-added services for end users in the health domain has fostered the steep growth of new businesses and satellite-based applications. This trend has been observed along the whole value chain of the space economy ecosystem, especially in the end-user market, in which stakeholders can take advantage of the integration of satellite and non-satellite data.
However, despite the clear linkage between the two domains, the existing body of knowledge lacks a holistic view of the current adoption of space assets in the health domain. Moreover, the economic, political, social and environmental factors that limit or foster the development of satellite-based applications in health management and healthcare delivery are still unclear and require thorough investigation.
Therefore, the framework on the relationship between space activities and global health applications elaborated by the United Nations Office for Outer Space Affairs has been adapted and populated with 86 business cases gathered from a systematic review of the ESA and NASA public databases.
The combination of satellite and digital technologies (e.g. social networks) appears to be the enabling factor for providing an effective service to professional end users and improving the quality of health for citizens. The importance of combining data from multiple satellite technologies and additional data sources is highlighted in many projects as fundamental to developing effective applications.
A high concentration of projects has been observed in the joint domains of Earth Observation (EO) with Tele-Epidemiology and with Disaster Management. We identified fifteen projects in Tele-Epidemiology, mainly regarding the tracking of diseases and risk factors, vector-borne diseases, and water-borne diseases. Both SAR and optical images are utilized, often in combination. Indeed, optical satellites can easily detect places with the presence of water, such as ponds, while radar satellites can analyze the moisture of the terrain. On the other hand, we found twenty-six projects in Disaster Management, where satellite images were used mainly for disaster mapping and subsequent planning and response. In seven projects, SAR and optical images were used in combination; another seven projects used SAR satellites only; for the remaining projects, no information was found. While in the Tele-Epidemiology domain the use of SAR satellites is more associated with the discovery of natural properties of the soil to be linked to vector presence, in the Disaster Management domain SAR complemented optical images, overcoming the problems of cloud coverage and nighttime acquisition that limit optical satellites in emergencies.
We also analyzed the contractors involved in the projects and the geographic areas of implementation. The UK appeared to be, together with Italy, one of the most active countries in implementing projects based on satellite technologies in the health domain. Regarding prime contractors, Italy and the UK again appeared to contract most of the projects. In the Tele-Epidemiology field, instead, most of the projects were implemented in Africa, with the majority of prime contractors coming from France and Belgium. This could be due to language similarities, which could simplify collaboration in the projects. Nonetheless, the projects established in Africa relied intensively on stakeholders from developed countries. Moreover, the joint presence of both private and public actors is verified in the majority of the projects.
In addition, a wide set of technical, economic, socio-cultural and political factors has been explored in terms of barriers to, or opportunities for, the adoption of satellite-based applications in the healthcare domain. To this end, further information was gathered through a systematic literature review. To deepen and strengthen the discussion with additional insights, we also complemented the analysis with interviews with experts in the joint domain of satellite technologies and healthcare. Eight experts, coming from a plurality of nations, institutions and backgrounds, whose professional focus is on the usage of satellite technology in the health sector, were interviewed.
The analysis encompasses the key points that emerged from the interviews, extracting key messages that can be inferred from the combined reading of the projects, the systematic literature review, and the interviews.
The need for proper data storage systems to collect the relevant data, and the possibility of sharing them among system developers, appears to be the conditio sine qua non for such systems to operate. Moreover, the integration of census data, satellite data, aircraft (drone and UAV) data, sensor data, health data, or end-user data occurred in many projects. The ability to integrate and subsequently analyze the data requires great technical and analytical skills. Overall, it appears that data science capabilities are missing for the time being. Indeed, the data integration concept needs to be linked to the need for training and generating new skills among developers, an issue belonging to the socio-cultural sphere. The interoperability of the systems is another issue to be addressed. Indeed, projects are often developed by a plurality of contractors who come from many different countries, each providing their own technologies, which might follow different standards. Technical and political harmonization in retrieving data from multiple data sources and managing the complexity of their integration is therefore needed.
Cooperation and collaboration among the project stakeholders appear essential. According to the results, technological and organizational issues could be overcome, or at least better addressed, through the establishment of public-private partnerships among the stakeholders. Indeed, the establishment of protocols, procedures, and practices shared among the stakeholders would foster harmonious collaboration among them. It would also overcome legal concerns about liability and licensing. Furthermore, policies shared among nations fostering collaboration and cooperation would definitely benefit the overall ecosystem. Policies, regulations, and activities to reach much greater political and legal cooperation among states are urged, since healthcare is an increasingly inter-state and interdisciplinary domain.
This concept is closely linked to the economic domain. By raising awareness, it would be possible to target the right end users with precise applications, balancing the trade-off between the profitability of a project with known customers and clients and the overall benefits to the community of end users. In this sense, according to our findings, a user-driven approach would be essential to benefit the whole sector.
This study may help academics, practitioners and public institutions to grasp the benefits of and challenges in adopting satellite and geospatial data in healthcare. It provides a clear overview of existing good practices, highlighting barriers and opportunities. Our research strengthens the thesis of the importance of space technologies in the health domain, as stated by the United Nations in the context of achieving the SDGs.
Climate change has implications for human health, including altering the epidemiology of vector- and water-borne diseases sensitive to environmental changes. Previous research has focused on how climate change can drive expansions in suitable ranges, exposure or transmission; recent research, however, aims to quantify the impact of climate change on the evolution of the bacteria causing such diseases, in terms of adaptation, mutation and selective pressures. To explore the climate drivers of such evolutionary events, we require consistent, high-resolution time series data such as that provided by Earth Observation, including the ESA-CCI Essential Climate Variables datasets. Understanding how the environment drives the emergence of new variants is of high importance to mitigate the impact of future pandemics, and to forecast how we might expect the epidemiology of climate-sensitive diseases to change amidst climate change.
We explore the environmental drivers of the emergence of a pandemic variant of Vibrio parahaemolyticus, in a novel framework that combines phylogenetic, epidemiological and Earth Observation data. Vibrio bacteria are an ideal example of a pathogen responding to climate change: they exist in water bodies, particularly coastal, brackish waters, that are undergoing extensive perturbations. Recent years have seen the transcontinental expansion of variants with pandemic potential and high virulence, which puts human health at risk. Specifically, Vibrio parahaemolyticus is the leading cause of seafood poisoning around the globe, with an estimated 0.5 million cases annually. Associations have been found between Vibrio bacteria and outbreaks and a range of environmental variables monitored using Earth Observation, including sea surface temperature, salinity, the presence of plankton providing host protection, sea level anomalies, land surface temperature and soil moisture, amongst others.
Our novel spatio-temporal dataframe allows us to explore associations between evolutionary events identified through genetic analysis, the climate drivers preceding them, and outbreaks. Specifically, we pinpoint the drivers of the initial emergence of the pandemic strain from background populations. The workflow is applicable to other Vibrio bacteria and other climate-sensitive diseases where bacterial evolution may be affected by climate change. Further work will include integrating these datasets in machine learning applications to expand our ability to forecast future outbreaks. These will need to be developed into accessible public-health tools, completing the novel framework that extends from the initial analysis of inter-disciplinary data to applications ready for timely decision-making.
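To make the linkage concrete, the sketch below shows one way such a spatio-temporal join could be implemented, pairing each dated evolutionary event with the mean of an EO climate variable over the preceding 90 days; the column names, the constant sea surface temperature placeholder and the 90-day window are illustrative assumptions, not the actual pipeline of this work:

import pandas as pd

events = pd.DataFrame({
    "event_id": ["E1", "E2"],
    "date": pd.to_datetime(["1995-06-01", "1996-08-15"]),
    "region": ["bay_of_bengal", "bay_of_bengal"],
})

# Daily EO-derived climate record for the same region (placeholder values;
# in practice these would be read from, e.g., an ESA-CCI SST dataset).
sst = pd.DataFrame({"date": pd.date_range("1995-01-01", "1996-12-31", freq="D")})
sst["region"] = "bay_of_bengal"
sst["sst_c"] = 28.0

WINDOW_DAYS = 90  # assumed window over which climate may drive selection

def preceding_mean(event_row, climate, var="sst_c", window=WINDOW_DAYS):
    """Mean of the climate variable over the window preceding the event date."""
    mask = (
        (climate["region"] == event_row["region"])
        & (climate["date"] < event_row["date"])
        & (climate["date"] >= event_row["date"] - pd.Timedelta(days=window))
    )
    return climate.loc[mask, var].mean()

events["sst_pre90d"] = events.apply(preceding_mean, axis=1, climate=sst)
print(events)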
Climate change increases the likelihood and scale of environmental disasters, making the co-occurrence of disasters in the current context of COVID-19 highly probable. Large-scale flood events not only lead to socioeconomic losses but also cause people to congregate at evacuation centers, in direct contrast to the isolation requirements of the pandemic. While protecting vulnerable populations from catastrophic flood events requires the use of temporary shelters, these ultimately contribute to increasing infection rates, since it is nearly impossible to maintain strict hygiene and social distancing regulations there. Understanding the correlation between pandemic infection rates and flood inundation can help prepare the local health system in advance, thereby reducing fatalities caused by the lack of adequate healthcare during compound disasters.
Satellite-based Synthetic Aperture Radar (SAR) sensors provide weather- and illumination-independent images with reasonable temporal frequency and spatial resolution, which makes them uniquely suited to mapping inundation. Studies have demonstrated the utility of SAR images for mapping flood damage and for connecting these results with additional geodata to understand the impact of floods on other areas of socioeconomic wellbeing.
This study investigates the correlation between people affected by floods and local COVID-19 cases in Bangladesh during Cyclone Amphan. Amphan was active in the north of the Bay of Bengal between 16 and 21 May 2020, coinciding with the rising limb of the COVID-19 wave. The coastal area of Bangladesh was affected the most, leading to the evacuation of 2.2 million citizens into 12,000 temporary shelters. The investigated area covers approximately 38,300 km², including the divisions Barisal, Chittagong, Dhaka and Khulna.
Sentinel-1 (S1) SAR data were used for a binary flood/non-flood classification based on a standard machine learning technique, Random Forest. Post-processing with topographic indicators derived from Digital Elevation Models and a global water mask based on long-term Landsat optical water recurrence data was used to improve map accuracy. The S1-based flood maps reached overall accuracies of more than 90% in comparison to optical Sentinel-2 based flood maps extracted using water indices. Multiple flood maps, starting from two weeks before the event and extending until its end, were generated using all available data to characterize the dynamic spatial evolution of the inundation extent over time. Gridded world population datasets were subsequently used to estimate the affected citizens for each time step as the spatial pattern of the flooding evolved.
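As an illustration of the classification step, the following minimal sketch trains a Random Forest on synthetic stand-ins for the stacked Sentinel-1 features; the feature set ([VV, VH, slope]) and the hyperparameters are assumptions for demonstration, not the study's actual configuration:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))                     # stand-in for [VV, VH, slope]
y = (X[:, 0] + 0.5 * X[:, 1] < -0.5).astype(int)   # stand-in flood labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Overall accuracy, analogous to the >90% reported against Sentinel-2 maps.
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))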
The results show a lagged correlation between the affected people detected by the classification and the local COVID-19 cases, with infection rates increasing by more than 70% on average. The differing trajectories of infections between coastal areas and inland regions also display the impacts of the flood event. In future, predicted cyclone trajectories can be used in conjunction with shelter locations to determine the expected rise in coincident viral infection numbers and better prepare the health systems, ultimately resulting in optimal resource utilization during emergencies.
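A lagged-correlation check of the kind described above could be sketched as follows; the two time series are synthetic and the 21-day lag range is an assumption:

import numpy as np

rng = np.random.default_rng(1)
affected = rng.poisson(100, size=60).astype(float)     # daily affected people
cases = np.roll(affected, 14) + rng.normal(0, 5, 60)   # cases lagging ~14 days

def lagged_corr(x, y, max_lag=21):
    """Pearson correlation of x against y shifted by 0..max_lag days."""
    out = {0: np.corrcoef(x, y)[0, 1]}
    for lag in range(1, max_lag + 1):
        out[lag] = np.corrcoef(x[:-lag], y[lag:])[0, 1]
    return out

corrs = lagged_corr(affected, cases)
best = max(corrs, key=corrs.get)
print(f"strongest correlation at lag {best} days: {corrs[best]:.2f}")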
Adequate sanitation, good hygiene and safe drinking water are fundamental requirements for good health and socio-economic development. As part of the Indo-UK project REVIVAL, a preliminary door-to-door survey was carried out in 2019-20 among the people living on the banks of a large brackish and freshwater lake, Vembanad Lake, along with an estimate of faecal contamination of the lake water. Data on the incidence of acute diarrhoeal diseases (ADD) in the study area, obtained from the Dept. of Health, Govt. of Kerala, were also used to assess the influence of sanitation practice on the prevalence of water-borne disease in the area.
Some 221 data points were obtained from the Vembanad Lake region (VL) during the preliminary survey. The results showed that the percentage of the population with no access to safe drinking water was minimal (1.8%), and 83% were provided with clean drinking water by the government through either pipelines or tankers. Irrespective of the source, 97% of the population used boiled water for consumption. However, open defaecation was still prevalent in the region (9.5%). Furthermore, 42% of the houses still had the indigenous type of septic tank (bottomless stone-ringed pits) rather than the ferrocement tanks promoted by the government as part of the Swachh Bharat Mission.
A general linear model analysis for fixed factors with a 95% confidence interval was used to determine whether the sanitation practices had a significant influence on the abundance of E. coli in the nearby water body. E. coli counts were higher in those areas where the distance from the septic tank to the drinking water source was less than 25 m; where flooding of septic tanks is common during rainy seasons; where the open defaecation rate was high; and where the septic tanks were of the open-pit type. The analysis confirmed that these sanitation factors had a significant influence on the abundance of E. coli in the waters.
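For illustration, a fixed-factor model of this kind could be fitted with statsmodels as sketched below; the variable names and the eight observations are invented, not the survey data:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "ecoli_count": [120, 340, 80, 560, 95, 410, 70, 300],
    "septic_within_25m": ["no", "yes", "no", "yes", "no", "yes", "no", "yes"],
    "tank_floods_in_rain": ["no", "yes", "no", "yes", "no", "no", "no", "yes"],
    "open_pit_tank": ["no", "yes", "no", "yes", "no", "yes", "no", "no"],
})

# Fixed-factor linear model of E. coli counts against sanitation factors.
model = smf.ols(
    "ecoli_count ~ C(septic_within_25m) + C(tank_floods_in_rain) + C(open_pit_tank)",
    data=df,
).fit()
print(model.summary())  # significance judged at the 95% confidence level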
As the next step, to relate sanitation practice to the incidence of water-borne diseases, the relevant sanitation practices were categorised qualitatively and plotted against the occurrence of ADD in the region. Incidences of ADD were high in those areas where septic tank flooding during monsoons and open defaecation were common and where disinfectants were used infrequently to clean toilets. Whether or not the septic tank was made of ferrocement had no influence on the occurrence of disease.
The study shows that the people in the study area are well aware of essential hygiene practices and adhere to safe practices of water consumption, but environmental factors, mostly associated with climate variability and climate change, such as heavy rain and floods, can upset normal life and in many cases lead to communicable diseases. Better prediction, management and mitigation in the event of natural disasters are needed to improve the sanitation status, and subsequently the health status, of the people, as 6.3% of the population loses more than 5 working days a year due to diarrhoeal diseases.
A mobile application, 'CLEANSE', is being prepared as a freely downloadable app within the ESA project 'WIDGEON', with plans to extend the survey as a citizen science activity. A prototype of CLEANSE has been designed to collect GPS information, time and date, along with sanitation data, when operated. The sanitation details are requested through objective questions in the application, so that users can select their answers quickly and easily. The questions are designed taking into consideration the core questions recommended by the WHO/UNICEF Joint Monitoring Programme for water supply, sanitation, and hygiene (2018) for household surveys to monitor WASH. The information collected and stored in the cloud would be regularly analysed to provide sanitation maps, which in due course would be expanded to generate dynamic sanitation maps.
The study shows the relevance of satellite-based communication tools such as smart phone applications in monitoring the health of a population, and in developing timely response to breakdown in sanitation facilities. The wide usage and acceptance of mobile phone applications ensures the success of the app and the citizen science programme. In a place like Kerala, where monsoonal rain and climate-change-induced storm surges and floods are becoming common, such technological advances are the need of the hour, to safeguard the people against the outbreak of water-borne diseases.
Water hyacinth invades freshwater systems such as lakes, rivers, and coastal lagoons and estuaries, and thereby affects the socio-economic status of these systems. The presence of water hyacinth can alter the water clarity and productive nature of aquatic systems, affect hydrological processes, and block ports and harbours. Water hyacinth and other floating vegetation can also form a habitat for vectors of dengue and Zika viruses and harbour pathogenic bacteria that could affect human health. Eutrophication due to poor land use practices, together with environmental and climatic factors, is associated with the distribution of water hyacinth. This study presents an algorithm to map water hyacinth and other floating vegetation in Lake Vembanad, following a straightforward approach using the floating algal index (FAI), which exploits the pronounced near-infrared (NIR) reflectance caused by floating algae/vegetation. Lake Vembanad, situated on the southwest coast of India, is an area of outstanding natural beauty and protected under several national and international treaties. The lake forms an important resource for local communities, but is also under stress from various anthropogenic activities. Biodiversity and fisheries are in decline, and invasive species, notably water hyacinth (Eichhornia crassipes), water moss (Salvinia molesta) and water lettuce (Pistia stratiotes), are particularly problematic. In the present study, we use high spatial resolution multi-spectral satellite data from the Sentinel-2 MultiSpectral Imager (MSI) and Landsat-8 Operational Land Imager (OLI) to map water hyacinth. The atmospheric signal in both MSI and OLI data was removed with the Acolite atmospheric correction using the dark spectrum fitting approach. The algorithm for mapping water hyacinth was trained using the reflectance spectra of floating vegetation pixels extracted from the satellite data after confirmation by visual interpretation of the generated RGB images and the spectral reflectance values. Different threshold limits of the FAI algorithm were tested against specific features such as sediment accumulation in the surface water (especially at the location where Lake Vembanad is connected to the Arabian Sea), cloud edges, and moving objects such as ships and boats. To validate the satellite data, several field campaigns were carried out to collect in situ observations of water hyacinth across the lake during the satellite overpasses. Time-series analysis of the satellite data confirmed that the density of water hyacinth is relatively high around the Thanneermukkom Bund, a barrier that separates the freshwater-dominated south of Lake Vembanad from the more brackish water in the north. This is possibly related to the lack of fresh-marine water flushing caused by the bund and/or the excess nutrients from the use of fertilisers in agricultural fields adjacent to the southern region of Lake Vembanad. This study concludes that the developed approach could be used to quantify the distribution of water hyacinth and other floating vegetation in Lake Vembanad and, with further efforts, possibly identify its sources in the future.
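As an illustration of the index underlying this approach, the following sketch computes the FAI (in the formulation commonly attributed to Hu, 2009) from atmospherically corrected reflectances; the band wavelengths follow Sentinel-2 MSI conventions (B4/B8A/B11), while the 0.05 flagging threshold is a placeholder standing in for the tuned thresholds described above:

import numpy as np

def fai(rho_red, rho_nir, rho_swir,
        lam_red=665.0, lam_nir=865.0, lam_swir=1610.0):
    """FAI: NIR reflectance minus a red-SWIR baseline interpolated at NIR."""
    baseline = rho_red + (rho_swir - rho_red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return rho_nir - baseline

rho_red = np.array([[0.03, 0.04], [0.05, 0.03]])
rho_nir = np.array([[0.02, 0.25], [0.30, 0.03]])    # high NIR over floating vegetation
rho_swir = np.array([[0.01, 0.06], [0.08, 0.02]])

floating = fai(rho_red, rho_nir, rho_swir) > 0.05   # placeholder threshold
print(floating)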
Schistosomiasis is an infectious neglected tropical disease caused by blood flukes of the genus Schistosoma and occurring in 78 countries. In Brazil, schistosomiasis was documented in 1908, and in 1997 around 6.3 million inhabitants carried the parasite. The etiological agent is Schistosoma mansoni, and the intermediate hosts are the aquatic gastropod snails Biomphalaria glabrata, Biomphalaria tenagophila and Biomphalaria straminea. Disease transmission is closely related to environmental characteristics, especially the availability of lentic freshwater, and the proximity of inducing elements that favour the circulation of the parasite between human and snail populations. This study aims to classify schistosomiasis potential risk areas in the Middle Paranapanema watershed, Southeastern Brazil, using the Analytic Hierarchy Process (AHP). Five risk categories defined the areas according to the confluence of factors that make them prone to infection: very high, high, medium, low and very low risk. Basic sanitation infrastructure data describing household deprivation conditions and their surroundings (lack of bathroom, garbage collection, main water supply, main sanitary sewage system or septic tank, and the existence of open sewage) were used for the potential risk mapping in conjunction with rural property data, central irrigation pivot data, and satellite-derived drainage and land use-cover (LUC). INPE's TerraHidro software was used to generate the drainage data from 30 m resolution SRTM images. The LUC data provided by a mapping project (MapBiomas, Collection 5) are based on annual composites of Landsat imagery. Only the most recent LUC data were incorporated in the analysis, but MapBiomas provides a historical time series of annual maps from 1985 to 2019. The potential risk map consistency ratio was 0.0957, within the tolerable threshold of model inconsistency. Most of the areas classified as having a very high potential risk of infection coincide with areas identified as "other temporary crops" in the LUC data. Also, part of the surface identified as very low potential risk overlaps the "forest formation" LUC class. Even though it had the lowest weight among the variables in the AHP, LUC showed a notable influence on the results. The validation process overlaid the potential risk map with the administrative boundaries of the municipalities with confirmed cases between 2012 and 2020. All of them had areas classified as very high and high potential risk of infection. The identification of areas vulnerable to a particular endemic disease is essential for planning health interventions; disease control can thus be performed with more localized and efficacious actions. In this context, satellite-derived data are useful to provide constant updates that support time-series analyses and to assist studies of specific events, such as disease outbreaks. Incorporating infected snails' location data obtained from fieldwork should be considered to improve the accuracy of the potential risk map. Furthermore, the same methodology can be used to identify the potential risk of schistosomiasis in other areas and to study other diseases that also depend on environmental characterization.
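For context, a consistency ratio of the kind reported above can in principle be reproduced as sketched below; the 4x4 pairwise comparison matrix is a made-up example, not the matrix used in the study:

import numpy as np

# Made-up 4x4 pairwise comparison matrix (Saaty's 1-9 scale, reciprocal).
A = np.array([
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
])

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
lam_max = eigvals.real.max()
weights = np.abs(eigvecs[:, eigvals.real.argmax()].real)
weights /= weights.sum()              # criterion weights

CI = (lam_max - n) / (n - 1)          # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index for n criteria
CR = CI / RI                          # consistency ratio (< 0.10 is tolerable)
print("weights:", weights.round(3), "CR:", round(CR, 4))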
Biomphalaria snails are the intermediate hosts of Schistosoma mansoni, a parasite that causes schistosomiasis in humans. They live in lentic freshwater and can survive in wet soil during the dry season, such as irrigated agricultural areas or vegetated areas near rivers and streams. In Brazil, three snails can act as Schistosoma mansoni intermediate hosts: Biomphalaria glabrata, Biomphalaria straminea and Biomphalaria tenagophila. In 2007, the World Health Organization estimated ~2.5 million people infected with schistosomiasis in Brazil. Advancing our understanding of the coupled dynamics between land uses and Biomphalaria snail habitats constitutes an important step towards improvements in surveillance and control strategies for schistosomiasis. The use of remotely sensed imagery is an alternative approach for habitat detection, providing non-invasive, multi-temporal monitoring tools in an automated way and allowing the mapping of locations that are difficult to access in the field. In this context, the China-Brazil Earth Resources Satellite 4A (CBERS-4A) has a 31-day repeat cycle, providing images with a spatial resolution of 2 m in the panchromatic band and 8 m in 4 multispectral bands from the Wide Scan Multispectral and Panchromatic camera (WPM). This study aims to use a WPM CBERS-4A image and a Geographic Object-Based Image Analysis (GEOBIA) approach to detect inland water bodies, wetlands and croplands in the Ourinhos region, Southeast Brazil, as potential Biomphalaria habitats. We considered that the GEOBIA approach could improve the classification results by treating snail habitats as objects that are internally homogeneous but distinct from their neighborhood. A level 4 WPM CBERS-4A image acquired on May 8th, 2020, was used in this analysis. The water surface delineation was obtained using the normalized difference water index (NDWI). The normalized difference vegetation index (NDVI) was also calculated and integrated with the other WPM bands in the image processing. Trimble eCognition 9 software was used for the multiresolution segmentation of the satellite image, with the NDWI, NDVI, NIR and PAN bands given a weight of 4 instead of the 1 used for the other bands. The GeoDMA 2.0.3 toolbox using the C5.0 Decision Tree, embedded in INPE's TerraView 5.6.1 software, was used in the image classification to extract object features and to build automatic decision trees for object classification using data mining techniques. A land use-cover (LUC) map of the Ourinhos region was generated based on the GeoDMA classification result. The LUC map considered 8 thematic classes: (C1) Unidentified agricultural area used for agricultural management; (C2) Forest; (C3) Wetland - flooded areas with vegetation; (C4) Urban areas; (C5) Cropland; (C6) Inland water bodies - lakes and ponds; (C7) Rivers; (C8) Non-tree vegetation. The global accuracy of the classification was 90%. The majority of the detected inland water bodies (25) had areas of up to 4,264 m², with the smallest measuring 64 m² and an average size of 17,300 m². These results indicate that spatial resolution is relevant to the outcome of the process, as smaller targets were not detected. The use of the NDWI water index allowed a satisfactory delineation of the water bodies, especially in the identification of lakes and ponds. In the automatic decision trees applied for image classification, the spectral metrics were the most recurrent, while the spatial properties were used relatively less.
However, the spatial properties were relevant considering that some inland water bodies and rivers have similar spectral characteristics but different geometric attributes. Due to the complexity of the host and the diversity of the conditioning factors of its habitat, the control of schistosomiasis depends on preventive actions, including locating the potential home ranges of these snails. The methodology presented in this study provides a preliminary approach to identify these snails' potential habitats and can be applied to larger areas as a future perspective for this applied research.
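As a simplified raster-based stand-in for the water delineation step (the study itself used an object-based eCognition/GeoDMA workflow), NDWI-based water masking can be sketched as follows, with invented reflectance values and threshold:

import numpy as np

def ndwi(green, nir):
    """NDWI = (green - NIR) / (green + NIR); positive values suggest water."""
    return (green - nir) / (green + nir + 1e-9)

green = np.array([[0.08, 0.10], [0.03, 0.09]])
nir = np.array([[0.02, 0.03], [0.30, 0.02]])

water = ndwi(green, nir) > 0.0   # hypothetical threshold
print(water)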
Waterborne diseases are present all over the world and are generally associated with poor socioeconomic and sanitation conditions. The use of environmental remote sensing data combined with geoprocessing techniques has been growing over the last few decades, especially in spatial epidemiology, where it has been a keystone for assessing environmental factors related to disease transmission. Leptospirosis is an infectious disease whose etiological agent is the Leptospira bacterium. The disease has a global distribution and still poses big challenges to its control and policy strategy planning, being a health concern mainly in many regions of developing countries, where sanitary infrastructure is generally deficient, socioeconomic conditions are critical and access to clean water is limited. In an effort to better understand the main factors associated with leptospirosis transmission in an endemic region, this retrospective descriptive study applied statistical and geoprocessing techniques in two municipalities of Pará state, northern Brazil: Abaetetuba and Barcarena. Here, the socio-epidemiological profile of the disease has been assessed. The study encompassed a period of 13 years (2007-2019). The epidemiological data were obtained from the Information System for Notifiable Diseases (SINAN) of the Pará State Department of Public Health (SESPA). The sociodemographic and geopolitical division datasets were obtained from the Brazilian Institute of Geography and Statistics (IBGE). Remotely sensed environmental data were acquired by means of Google Earth Engine (GEE) and derived from three main sources: the NASA Shuttle Radar Topography Mission (SRTM), the Japan Aerospace Exploration Agency (JAXA), and the European Centre for Medium-Range Weather Forecasts (ECMWF). The environmental variables included surface runoff, soil temperature, air temperature and soil water volume. After depuration of the epidemiological dataset, a total of 56 cases were positively evaluated; after georeferencing, 51 remained. Each environmental dataset was pre-processed in both the time and spatial domains: in the time domain, all environmental variables were averaged daily; in the spatial domain, a 10 km buffer radius around each georeferenced notification point was applied, over which the daily environmental data were averaged. The socio-epidemiological profile of individuals affected by the disease was characterized by a descriptive analysis. The spatial analysis included the spatial distribution of cases and the calculation of incidence rates per census tract, the identification of risk areas/hotspots with the kernel density tool, and spatial autocorrelation analyses by means of the Global and Local Moran's indexes. The associations between leptospirosis incidence and environmental and sociodemographic factors were analyzed via a generalized linear regression model. The results evidenced different annual trends in the number of positive notifications (PN) for each municipality: while Barcarena presented a stable annual trend, Abaetetuba showed a positive trend. Once the reports of both municipalities were integrated, PN also presented an inter-annual undulatory pattern with a periodicity of approximately four years. The results indicated that leptospirosis occurs primarily in urban and densely occupied areas, with the municipality of Abaetetuba being the most affected, showing the highest number of cases and incidence throughout the study period.
With respect to temporal variability, an intra-annual variation in PN was observed, with greater values in the first semester, especially between February and May, and a second, lower spike in December. The socio-epidemiological characterization evidenced that self-declared brown men aged between 30 and 59 were the most affected by the disease, regardless of municipality; only 7.14% of all PN were female. The rates of laboratory diagnosis (62.50%) and hospitalization (79.25%) confirmed the high need for hospital care among patients with leptospirosis. Signs of rodents (71%), flooding (57.14%) and garbage or rubble present in the surroundings (48.21%) were the environmental factors most related to disease transmission. These combined factors form optimal conditions for the development of the disease's vector and a consequent increase in transmission risk. An inequality in access to the water supply network was also observed in Abaetetuba. The local Moran's index was 0.372, indicating the existence of positive spatial autocorrelation of PN between census tracts. Remote sensing environmental data and geoprocessing techniques were deemed essential for identifying areas at risk of leptospirosis. The statistical regression evidenced that the surface gradient (slope) and the accumulation of garbage in the surroundings were the variables most related to disease transmission. This study reinforces the importance of: a) integrating remote sensing data into epidemiological studies; b) investing in sanitation and infrastructure in order to improve human health, especially in developing countries. As populations grow over time, these conditions need to be well addressed to increase resilience to disease outbreaks, especially of waterborne diseases.
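For reference, a Moran's I statistic of the kind used in this analysis can be computed as sketched below; the four-tract contiguity matrix and notification counts are invented:

import numpy as np

def morans_i(x, W):
    """Global Moran's I of values x under a binary contiguity matrix W."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    Wr = W / W.sum(axis=1, keepdims=True)   # row-standardize the weights
    n, s0 = len(x), Wr.sum()
    return (n / s0) * (z @ Wr @ z) / (z @ z)

# Four tracts on a line, each neighbouring the adjacent tract(s).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
cases = [12, 10, 3, 1]   # invented notification counts per tract
print("Moran's I:", round(morans_i(cases, W), 3))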
Session: D2.11 Earth observation for health
Making inferences on water quality based on the smart phone camera images generated by citizen scientists
Ancy C Stoy*1, A Gopalakrishnan1, Grinson George1, Nandini Menon2, Mini K.G1., Anas Abdulaziz3, Sara Xavier1, Arya P Kumar1, Pranav P1, Anju R1, Sreepriya V1 and Shubha Sathyendranath4
1ICAR-Central Marine Fisheries Research Institute, Cochin 682018
2Nansen Environmental Research Centre (India), Kochi, India, 682506
3CSIR-National Institute of Oceanography, Regional centre, Cochin 682018
4Plymouth Marine Laboratory, Plymouth, Devon, UK
*Presenting author: ancycstoy17@gmail.com
In this study, we focus on the ecosystem health status of Vembanad-Kol, a wetland of international importance and a Ramsar site in the state of Kerala, India. To overcome the difficulties of conventional in situ water sampling and traditional ship sampling methods, which often take up a major share of our research effort, we propose a novel approach to test the waters by integrating citizen science, smartphone cameras and a mobile application named 'TurbAqua' (available on the Google Play store). The mobile app was developed by the Indian Council of Agricultural Research-Central Marine Fisheries Research Institute, Kochi (ICAR-CMFRI) as part of an India-UK collaborative project, "REhabilitation of Vibrio Infested waters of VembanAD Lake: pollution and solution (REVIVAL)". A monitoring team of citizen scientists was successfully established for the Vembanad Lake, with ICAR-CMFRI as the host institution (George et al., 2021). A 3D-printed Mini Secchi Disk (MSD) fitted with the Forel-Ule (FU) colour scale (Brewin et al., 2019) was used for the citizen science activity, wherein a smartphone with the 'TurbAqua' application installed serves as an electronic log sheet for entering the data collected during the MSD operation. In addition, the smartphone camera acts as a remote sensor for capturing an image of the water surface. The data as well as the images are stored on the CMFRI server.
The colour of water is a key to its quality. Far from being an attribute of merely aesthetic value, colour is an important apparent optical property, influenced by the perception of the human eye. Each uploaded image, showing the colour of a small patch of water, can be used to derive the hue angle (α) of the water. According to the International Commission on Illumination (CIE), the hue colour angle is a parameter used for the determination of colour. This information, along with the FU index of water colour, offers a promising way to rapidly assess and understand the colour dynamics of coastal waters. In the present study, images stored on the server were subjected to an automated analysis using Machine Learning (ML) algorithms integrated with Artificial Intelligence (AI) to provide smarter solutions for addressing water-quality issues. An open-source algorithm, WACODI (WAter COlor from Digital Images; Novoa et al., 2015), was used to derive the intrinsic colour of the photographed patch of the water body from the digital images.
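To illustrate the core of the hue-angle derivation, the sketch below reduces the WACODI idea to its essentials: average an image patch in sRGB, convert to CIE XYZ chromaticity, and measure the angle of the chromaticity vector relative to the white point. Gamma handling, glint and shadow masking, and the exact angle convention of Novoa et al. (2015) are simplified here:

import numpy as np

def srgb_to_linear(c):
    c = c / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def hue_angle(rgb_patch):
    """Hue angle (degrees) of the mean colour of an (H, W, 3) uint8 patch."""
    rgb = srgb_to_linear(rgb_patch.reshape(-1, 3).mean(axis=0))
    M = np.array([[0.4124, 0.3576, 0.1805],    # linear sRGB -> CIE XYZ (D65)
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    X, Y, Z = M @ rgb
    x, y = X / (X + Y + Z), Y / (X + Y + Z)
    # Angle of the chromaticity vector relative to the white point (1/3, 1/3).
    return np.degrees(np.arctan2(y - 1 / 3, x - 1 / 3)) % 360

patch = np.full((10, 10, 3), (70, 110, 60), dtype=np.uint8)   # greenish water
print(f"hue angle: {hue_angle(patch):.1f} deg")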
Our preliminary results point to a high level of eutrophication in the lake. Most of the water surface images (88.30%) belonged to FU indices 14-17, indicating that the Vembanad Lake waters could be classified as 'greenish brown to brownish green' on the FU colour index. The colour, expressed mainly as the hue angle, ranged from 47° to 76°. Water bodies with smaller hue angles tend to be cleaner, and those with larger hue angles tend to be murkier (Zhao et al., 2020). The results clearly illustrate the presence of phytoplankton, suspended sediments and Dissolved Organic Matter (DOM). A single image with 'greenish blue to bluish green' colour (FU 6-9) was obtained, indicating algal presence with minimal sediment matter. Earth observation satellite missions have revolutionized scientific endeavors, and smartphones equipped with sensors and applications are now set to transform science through such Do It Yourself (DIY) tools and activities. The ongoing COVID-19 pandemic reinforces the need for activating similar participatory virtual domains to create a 'living lab' experience with a high frequency of observations at low cost. Such observations can fill gaps in data during cloud cover, when satellite data become unavailable.
Keywords: Citizen Science, Smartphone, Digital images, Water colour, Vembanad Lake
References
1. Brewin, R. J., Brewin, T. G., Phillips, J., Rose, S., Abdulaziz, A., Wimmer, W., ... & Platt, T. (2019). A printable device for measuring clarity and colour in lake and nearshore waters. Sensors, 19(4), 936. https://doi.org/10.3390/s19040936
2. George, G., Menon, N. N., Abdulaziz, A., Brewin, R. J., Pranav, P., Gopalakrishnan, A., ... & Platt, T. (2021). Citizen scientists contribute to real-time monitoring of lake water quality using 3D printed mini Secchi disks. Frontiers in Water, 3:662142. https://doi.org/10.3389/frwa.2021.662142
3. Novoa, S., Wernand, M., & van der Woerd, H. J. (2015). WACODI: A generic algorithm to derive the intrinsic color of natural waters from digital images. Limnology and Oceanography: Methods, 13(12), 697-711. https://doi.org/10.1002/lom3.10059
4. Zhao, Y., Shen, Q., Wang, Q., Yang, F., Wang, S., Li, J., ... & Yao, Y. (2020). Recognition of water colour anomaly by using hue angle and Sentinel-2 image. Remote Sensing, 12(4), 716. https://doi.org/10.3390/rs12040716
Session: D2.11 Earth observation for health
Bio-optics and Remote sensing: Exploring New Opportunities for Assessing Health Status of a Waterbody
Anju R1*, Nandini Menon N2, Sara Xavier1, Arya P Kumar1, Pranav P1, Grinson George1, Anas Abdulaziz3, Shubha Sathyendranath4
1 ICAR-Central Marine Fisheries Research Institute, Cochin 682018
2 Nansen Environmental Research Centre (India), Kochi, India, 682506
3 CSIR-National Institute of Oceanography, Regional centre, Cochin 682018
4 Plymouth Marine Laboratory, Plymouth, Devon, UK
* Presenting author: anjurajprameela@gmail.com
Vembanad Lake in Kerala is part of a large wetland system and is one of the 46 Ramsar sites in India. The lake stretches through multiple districts of Kerala and has an intricate network of other water bodies connecting to it and sprouting from it along its path, including inflowing rivers, outflowing canals, and the lagoons that have their roots in it. The region around the lake is densely populated, and a majority of the population depends on the lake for their sustenance. Extensive anthropogenic activities such as fishing, aquaculture, tourism, and domestic and industrial waste disposal have adversely impacted the water quality of the lake and have contributed to outbreaks of water-borne and vector-borne diseases such as cholera and malaria in and around the lake area. Bio-optics, the science that deals with the interaction of light with living organisms and with dissolved and suspended particles in an aquatic environment, offers an opportunity for monitoring the health of a water body. The main optical constituents are chlorophyll-a (Chl-a), suspended particulate matter (SPM) and coloured dissolved organic matter (CDOM). Chl-a is a measure of phytoplankton biomass. Anas et al. (2021) observed that Vibrio cholerae bacteria were present in filtered water and were also found in association with phytoplankton and zooplankton in the Vembanad Lake. They also pointed out that the likelihood of the presence or absence of the bacteria can be represented as a function of the chlorophyll concentration in the water, such that risk maps for the whole lake could be generated from satellite-derived chlorophyll data. CDOM acts as an indicator of terrestrial freshwater supply and of the degradation of phytoplankton, while SPM is a marker of land runoff and wind-driven resuspension of sediments. In situ measurements of bio-optical properties were performed over a span of two years, from April 2018 to May 2020, by collecting water samples from 13 stations across the lake. A ternary diagram showed that detrital matter made the highest relative contribution to the absorption budget during most of the year and at most of the stations, indicating that the lake is a typical Case 2 water body (defined as waters in which substances other than phytoplankton vary independently of phytoplankton; Morel and Prieur, 1977; Prieur and Sathyendranath, 1981). Sources of detritus include decomposing aquatic organisms and vegetation; sewage and other solid wastes disposed into the water body; inorganic particles containing adsorbed bacteria; particles washed in from the coast or brought into resuspension by vertical mixing; and particles generated by the churning of the water by boat traffic. In addition, agricultural waste, household sewage and sand mining also add to the detrital load. The CDOM content of the lake, which may be produced locally by the ecosystem or transported there by rivers or land drainage, can also reflect the contamination level of the watershed to a certain extent. Remote sensing can enhance the spatial and temporal monitoring of water quality, and remotely sensed data can be used to garner information in support of the health of an aquatic system. The present study aims at exploring the link between bio-optics and remote sensing data for monitoring the health status of Vembanad Lake and outbreaks of water- and vector-borne diseases, and thereby improving the health of the community residing around the lake.
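As a purely illustrative sketch of how such a chlorophyll-based risk map might be expressed, a logistic function of Chl-a could be applied to a satellite-derived grid as below; the coefficients are invented and would have to be fitted to paired chlorophyll/Vibrio observations, in the spirit of Anas et al. (2021):

import numpy as np

def vibrio_risk(chl, b0=-2.0, b1=0.15):
    """Probability of Vibrio presence as a logistic function of Chl-a (mg m-3)."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * chl)))

chl_map = np.array([[2.0, 15.0], [40.0, 8.0]])   # satellite-derived Chl-a grid
print(vibrio_risk(chl_map).round(2))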
Key words: Vembanad Lake, Bio-optics, Remote sensing, Health status
References
1. Anas, A., Krishna, K., Vijayakumar, S., George, G., Menon, N., Kulk, G., ... & Sathyendranath, S. (2021). Dynamics of Vibrio cholerae in a typical tropical lake and estuarine system: potential of remote sensing for risk mapping. Remote Sensing, 13(5), 1034.
2. Morel, A., & Prieur, L. (1977). Analysis of variations in ocean color. Limnology and Oceanography, 22(4), 709-722.
3. Prieur, L., & Sathyendranath, S. (1981). An optical classification of coastal and oceanic waters based on the specific spectral absorption curves of phytoplankton pigments, dissolved organic matter, and other particulate materials. Limnology and Oceanography, 26(4), 671-689.
Wildfires increasingly threaten human health and infrastructure, with consequences for forestry, agriculture, and biodiversity. Projections indicate that climate change will likely increase wildfire frequency and severity in the Alpine region. Providing high-quality data to estimate fire danger can improve the resource planning of decision-makers and the timing and quality of early warnings for society.
Forest fire danger forecasts are based on empirical or physical models which estimate the moisture levels of fuels as a function of weather conditions. These forecasts often use indices based on meteorological data, such as the Canadian Fire Weather Index (FWI). However, meteorological forecasts are typically only available at relatively coarse spatial resolutions (ca. 1 km at best) and are therefore of limited use in mountain regions with complex topography. Moreover, other factors, such as vegetation type, structural elements and the role of humans in causing ignitions, are often not considered. There is therefore a need for an integrated wildfire danger assessment for Austria.
The CONFIRM project, which started in December 2019 with funding from the Austrian Research Promotion Agency (FFG), addresses this gap by developing a novel, high-resolution, satellite-supported integrated forest fire danger system (IFDS) for Austria. For that purpose, radar and optical satellite data from the Copernicus Sentinel-1 and Sentinel-2 missions, airborne laser scanning (ALS), socio-economic data, and topographic properties are used alongside meteorological data. The project uses two independent methods: (i) an expert-based approach that allows a combination of various data layers with different weightings and (ii) a machine learning approach. Key stakeholders from national weather services, fire brigades, state forest administrations, and infrastructure providers are providing feedback on the prototype of the IFDS according to their needs and requirements.
Here, we present the results of the machine learning approach for a study site covering the state of Styria (ca. 16,400 km²). Several machine learning techniques that have already proven suitable in similar studies (e.g., Random Forest and Maxent) are employed. We used satellite-derived moisture indicators and tree species classifications, ALS-derived vegetation structure parameters and irradiance, topographic and socio-economic data, and meteorological variables as input features to estimate fire danger. The models were trained using forest fire events from the Austrian forest fire database that occurred between 2016 and 2021. The precision metrics obtained through spatial cross-validation show that the best-performing model predicts high fire danger for the majority of fire events.
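As an illustration of spatially blocked validation, the sketch below groups synthetic fire events into 20 km grid blocks and evaluates a Random Forest with GroupKFold; the block size, features and labels are invented stand-ins for the CONFIRM setup:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(42)
n = 2000
coords = rng.uniform(0, 100, size=(n, 2))           # event locations (km)
X = rng.normal(size=(n, 5))                         # moisture, terrain, etc.
y = (X[:, 0] + 0.3 * X[:, 1] > 0.8).astype(int)     # stand-in fire/no-fire

# 20 km x 20 km spatial blocks used as cross-validation groups.
blocks = (coords[:, 0] // 20).astype(int) * 10 + (coords[:, 1] // 20).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, groups=blocks,
                         cv=GroupKFold(n_splits=5), scoring="precision")
print("precision per spatial fold:", scores.round(2))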
Mountain forest ecosystems are essential for society, providing timber, protection and recreational value. They are characterized by high spatial heterogeneity and complexity in functions and structures, yet their functioning and structure are impacted by climate and land-use change. This results in accelerating ecosystem dynamics in mountain forest ecosystems such as the Alps, which might change their capability to provide the essential ecosystem services we as a society rely on. Monitoring and understanding mountain forest ecosystems is therefore urgently needed, yet it remains challenging despite growing access to environmental data. The launch of NASA's Global Ecosystem Dynamics Investigation (GEDI) in 2018 is expected to make a major contribution to the monitoring and analysis of forest ecosystems globally, especially with regard to their vertical structures and their relationships to biodiversity and ecosystem functioning. For the first time in Earth observation, a spaceborne LiDAR sensor is available that was specially designed for measuring ecosystem structure, a key parameter in many ecological applications. Although these new data provide unprecedented opportunities to assess forests' vertical structure over large spatial extents, their quality and usability in mountain ecosystems remain to be validated. In addition, the accuracy of GEDI data is a critical issue, as the geolocation of the waveforms is known to be inaccurate. For these reasons, we validate GEDI data in terms of their potential to describe the canopy structural variability of alpine forests, using airborne LiDAR data as reference. We assess and compare a comprehensive set of structural metrics, including height, area and density, arrangement, cover and openness, and heterogeneity, for two mountain landscapes in the Bavarian and Swiss Alps. In this way, the performance of large-footprint GEDI data can be tested and validated against high-resolution, area-wide airborne LiDAR data in challenging mountainous terrain. Besides the plot-level analyses, we assess the potential of GEDI data to capture the structural characteristics of forests at the landscape level. By considering both the plot and the landscape level, we discuss application possibilities in the domains of forestry, resource management and landscape ecology, as well as potential challenges and limitations of using GEDI for monitoring mountain forests more broadly.
The Environmental Research Station Schneefernerhaus (UFS) was established in 1999 and is Germany’s highest research station at 2652 meters, just below the summit of Zugspitze.
Researchers from many different institutions conduct ongoing measurements here or work on innovative studies.
The Schneefernerhaus is not only a research center, but also offers the possibility to host workshops or seminars with a scientific focus or within the context of sustainable education.
In addition to the ten members of the consortium, who have permanently rented labs, every interested scientist can apply with a research project to the UFS Science Team and, after approval, use the facilities at the Schneefernerhaus.
Representatives of the consortium members form the Consortium Board, which deals with policy issues of the consortium.
The key scientific areas are:
- Regional Climate and Atmosphere
- Satellite-based observations and early detection
- Cosmic radiation and radioactivity
- Hydrology
- Environmental and high-altitude medicine
- Global Atmosphere Watch
- Biosphere and Geosphere
- Cloud dynamics
A Science Team, made up of representatives from each of the key scientific areas, meets periodically to ensure the scientific quality of the research.
Furthermore, the Schneefernerhaus hosts one of the 31 global stations of the World Meteorological Organization programme 'Global Atmosphere Watch' (GAW). The programme focuses on building a single, coordinated global understanding of atmospheric composition and its change, and helps to improve the understanding of interactions between the atmosphere, the oceans and the biosphere. Twice a year, the GAW Training and Education Centre (GAWTEC) holds two-week courses at the Schneefernerhaus, aimed at technicians and junior scientists who work with instruments and data at monitoring stations.
Since 2012, the Schneefernerhaus has been a partner of the Virtual Alpine Observatory (VAO), a network of European high-altitude research stations based in the Alps and similar mountain ranges. This cross-border and interdisciplinary cooperation has made it possible to address in great depth scientific problems relating to the atmosphere, biosphere, hydrosphere and cryosphere systems, as well as the possible impact of environmental influences on health.
The presentation will introduce the Schneefernerhaus and highlight recent research topics, with a special focus on the possibilities for other research teams and institutions to take advantage of this unique research facility.
Grasslands provide key ecosystem services, comprising food production, water supply and flow regulation, carbon storage, erosion control, climate risk mitigation, pollination and cultural amenity, and support high biodiversity. In the Alpine region, grasslands are used as meadows and pastures across a large elevational gradient; their management ranges from extensive to intensive, varies strongly in space and time, and is often unknown for large areas. Given the large coverage of grasslands and their provision of manifold ecosystem services, information about grassland management intensity is important for a range of stakeholders, including agricultural agencies and nature protection bodies. Within the Alpine Regional Initiative of ESA, we address those needs in the Eco4Alps project and develop a cloud-based operational grassland management service to map the timing and number of mowing events as an indicator of grassland management intensity for the Alpine region.
Optical EO-based mowing detection as a proxy for grassland management has gained considerable attention in recent years, and several approaches have been developed (Griffiths et al., 2020), which rely on detecting mowing events from an abrupt decline in the intra-annual vegetation signal. In general, there is consensus that a dense time series significantly improves mowing event mapping performance. While the additional integration of radar data is challenging (De Vroey et al., 2021), the use of combined optical time series has shown promising results, especially with harmonized Landsat/Sentinel products available at 30 m (Claverie et al., 2018; Griffiths et al., 2020) and 10 m (Schwieder et al., 2021).
While our service makes use of existing methods in the optical domain, it particularly considers the peculiarities of the Alpine region, with its complex topography, small-scale structured landscapes, and high and persistent cloud cover. We show the concept and the assessments performed to establish the Alpine grassland mowing event service for the province of South Tyrol, Italy.
We use a curve-fitting approach to model the seasonal vegetation growth based on vegetation indices and identify potential grassland mowing events where observations differ significantly from this idealized trajectory. These potential mowing events need to fulfill further criteria to be labeled as mowing events, such as a minimum timespan since the preceding mowing event and a plausibility check based on elevation-dependent rules. We tested different Sentinel data products to assess the trade-off in spatio-temporal resolution for mowing detection performance, including Sentinel-2 only, the Harmonized Landsat-Sentinel product and the newly released sen2like fusion product (Saunier et al., 2019), which offers data at 10/20 m resolution. We additionally evaluated how the choice of vegetation index affects the detection performance, testing indices such as the NDVI and EVI. To validate our mowing results, we set up a webcam-based database with visually interpreted mowing events for 300 grassland fields in total for the years 2017-2020. We applied the method at the pixel level, integrated the results to the parcel level based on local cadaster information, and derived spatial maps of mowing events for the period 2017-2020.
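A much-simplified version of the curve-fitting idea can be sketched as follows: a smooth seasonal model is fitted to a sparse NDVI series and dates where the observation falls well below the fit are flagged as candidate mowing events. The Gaussian seasonal model and the 0.15 residual threshold are assumptions for illustration, not the service's actual model:

import numpy as np
from scipy.optimize import curve_fit

def season(doy, amp, peak, width, base):
    """Idealized unimodal growth curve over the day of year (DOY)."""
    return base + amp * np.exp(-((doy - peak) ** 2) / (2 * width ** 2))

doy = np.arange(90, 300, 10.0)
ndvi = season(doy, 0.5, 190, 60, 0.3)
ndvi[[6, 13]] -= 0.25                       # two synthetic mowing drops

popt, _ = curve_fit(season, doy, ndvi, p0=[0.5, 190, 60, 0.3])
residual = season(doy, *popt) - ndvi        # fit minus observation
candidates = doy[residual > 0.15]           # potential mowing events
print("candidate mowing dates (DOY):", candidates)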
References:
Claverie, M., Ju, J., Masek, J. G., Dungan, J. L., Vermote, E. F., Roger, J.-C., Skakun, S. V., & Justice, C. (2018). The Harmonized Landsat and Sentinel-2 surface reflectance data set. Remote Sensing of Environment, Volume 219, 145-161, https://doi.org/10.1016/j.rse.2018.09.002.
De Vroey, M., Radoux, J., Defourny, P. (2021): Grassland Mowing Detection Using Sentinel-1 Time Series: Potential and Limitations. Remote Sensing, 13(3):348. https://doi.org/10.3390/rs13030348.
Griffiths, P., Nendel, C., Pickert, J., Hostert, P. (2020): Towards national-scale characterization of grassland use intensity from integrated Sentinel-2 and Landsat time series. Remote Sensing of Environment, Volume 238, 111124, https://doi.org/10.1016/j.rse.2019.03.017.
Saunier, S., Louis, J., Debaecker, V., et al. (2019). Sen2like, a tool to generate Sentinel-2 harmonised surface reflectance products - first results with Landsat-8. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, pp. 5650-5653. doi: 10.1109/IGARSS.2019.8899213.
Schwieder, M. Wesemeyer, M., Frantz, D, Pfoch, K., Erasmi, S., Pickert, J., Nendel, C., Hostert, P. (2021): Mapping grassland mowing events across Germany based on combined Sentinel-2 and Landsat 8 time series. Remote Sensing of Environment, 112795, https://doi.org/10.1016/j.rse.2021.112795.
Glaciers play an important role in our society by providing freshwater for domestic, industrial and agricultural applications. With their meltwater, glaciers contribute marginally to sea-level rise but also guarantee the survival of specific mountain ecosystems. Furthermore, they represent one of the key indicators of climate change at the global and local scale, since they are highly sensitive to changes in temperature and snow precipitation. Recent observations have shown that during the 21st century the rates of glacier retreat and mass loss have been accelerating globally. Glacier and permafrost retreat has been reported in all sectors of the Alps, where retreat rates exceed 1.2% per year. Most glaciers in the Alps are smaller than 1 km2, which puts them at greater risk of disappearance. Overall, Alpine glaciers are projected to lose between approximately 35% and 90% of their area and volume by the end of the century, depending on the level of warming in different climate scenarios. Continuous worldwide monitoring of glaciers is therefore necessary to better understand their morphological evolution over time, foresee future freshwater availability and monitor climate change. Among the parameters adopted for glacier monitoring, mass balance is one of the most important: it refers to the mass of ice gained and lost during a hydrological year. With remote sensing techniques, the mass balance can be computed from the total volume change of a glacier between two consecutive survey epochs, i.e., from the multi-temporal difference of glacier Digital Surface Models (DSMs). Over the past decades, different approaches have been investigated in the field of glacier monitoring, such as GNSS surveys, Terrestrial Laser Scanning (TLS) and Unmanned Aerial Vehicles (UAVs). Although GNSS and TLS surveys are very accurate, they require access to the glacier, with possible logistic difficulties, and are generally more time consuming; in addition, it is usually difficult to cover wide and inaccessible areas. Compared to these techniques, UAV platforms enable data collection over wide and inaccessible areas in an efficient and cost-effective way. The combination of UAV platforms with Structure-from-Motion and dense matching photogrammetric techniques allows reconstructing the 3D point cloud and the corresponding 3D surface of the glacier environment with generally high accuracy and efficiency. However, when surveying large glacier environments, the procedure can be time consuming and is still limited to relatively small areas.
To address large-scale monitoring, and when the main goal is volume change computation, satellite data can be adopted, since they are independent of logistic constraints and can provide reliable measurements with good resolution at large scale. One of the main drawbacks of this approach is the fixed temporal resolution, which often causes sub-optimal acquisition conditions. In recent years, however, nanosatellites have been launched, which are much more flexible in terms of temporal resolution. The massive availability of EO and geospatial data (e.g., EU Copernicus with the Sentinels and Contributing Missions, Pléiades, IKONOS and PlanetScope) is therefore now a very valuable resource for global and continuous monitoring of glaciers. Among the different kinds of available satellite data, optical imagery offers the opportunity to measure key variables for glacier monitoring and to obtain 3D point clouds and DSMs using photogrammetric techniques for dense surface matching. Finally, it also offers access to archive images for retrospective analyses.
Exploiting the potential of optical data to monitor the volume change of an Alpine glacier, in this study we focus on high-resolution optical imagery and present the results of a multi-temporal comparison of DSMs over a glacier area. The study area is the Forni Glacier, an important geosite located in Stelvio Park (Italian Alps). The glacier has an area of 11.34 km2 according to the Italian Glacier Inventory (data from 2007) and an altitudinal range between 2501 and 3673 m a.s.l. This valley glacier has retreated markedly since the Little Ice Age, when its area was 17.80 km2, with an acceleration of the shrinkage rate over the past three decades that is typical of valley glaciers in the Alps. It has also undergone profound changes in its dynamics in recent years. The glacier has been monitored for many decades with in-situ surveys and, in recent years, also using UAVs; for this reason, it represents a good case study for multi-temporal DSM reconstruction and volume change estimation.
Here, we present the results achieved by investigating the use of optical satellite image pairs acquired with the high-resolution optical Pléiades and IKONOS sensors (0.5 × 0.5 m and 0.7 × 0.7 m GSD, respectively). The Pléiades image pairs, acquired in 2009 and 2013, and the IKONOS image pair, captured in 2016, are provided with RPC data and allow for DSM generation and multi-temporal analysis over the area of interest with a high level of detail. The DSM generation is performed using MicMac, an open-source photogrammetric software package. The volume change over seven years is then evaluated after coregistering the generated DSMs with different coregistration approaches. The results highlight the loss in volume of the Forni Glacier over the analysed period and demonstrate the potential of satellite-based approaches for 3D reconstruction and multi-temporal glacier monitoring over large areas using high-resolution optical images.
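The volume-change step itself reduces to differencing coregistered DSMs over the glacier mask and scaling by the cell area, as in the minimal sketch below; file handling, coregistration and outlier filtering are omitted, and the arrays are synthetic:

import numpy as np

cell_area = 0.5 * 0.5                  # m2 per cell at 0.5 m GSD (Pléiades-like)
dsm_t0 = np.full((4, 4), 3000.0)       # elevation (m) at the first epoch
dsm_t1 = dsm_t0 - 2.5                  # uniform 2.5 m surface lowering
glacier = np.ones_like(dsm_t0, bool)   # glacier outline mask

dh = np.where(glacier, dsm_t1 - dsm_t0, np.nan)
volume_change = np.nansum(dh) * cell_area   # m3 (negative = loss)
print(f"volume change: {volume_change:.1f} m3")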
Clouds and cloud shadows are one of the major limitations to the use of optical remote sensing imagery. Cloud and cloud shadow contamination not only limits the use of optical data in time-sensitive applications (e.g. agriculture) but, more generally, often drastically reduces the amount of available data. This is particularly true in tropical or monsoon regions, where frequent cloud cover can prevent the acquisition of uncontaminated observations for weeks or even months [1].
Multi-satellite constellations such as the Copernicus programme's Sentinel missions, as well as fleets of small satellites from companies like Planet, help mitigate this problem by decreasing revisit times and drastically increasing the number of observations for any given area. Combined with effective cloud and shadow detection algorithms, this allows selecting observations with low contamination and constructing cloud- and shadow-free data subsets. Nevertheless, although this is sometimes sufficient, it is not enough to enable applications relying on consistent time series or to produce datasets for highly sensitive tasks [2]. Furthermore, it is not optimal to omit large portions of potential data due to minor contamination; instead, it would be desirable to make use of as much of the data as possible.
The challenges posed by clouds and cloud shadows are often addressed simultaneously or treated as a joint problem. They are, however, fundamentally different:
1. most clouds completely obscure the scene, leaving no information about the underlying surface (essentially causing data gaps), while shadows reduce and (due to scattering) alter the observed spectral reflectance but always leave at least a remnant of the original information;
2. clouds are only a serious problem in satellite remote sensing and do not affect airborne or UAV observations while shadows are present independent of the platform making them a more universal challenge in the remote sensing field.
It is therefore crucial to address the task of shadow removal or deshadowing to reconstruct remote sensing imagery and support its application in different domains. In contrast to cloud removal which usually requires additional information from previous observations or other sensors, shadows can be effectively removed from imagery without the need for external priors or long image time series.
The topic of de-shadowing is a subset of the broader domain of image reconstruction and enhancement. As such, it is not unique to remote sensing but has been studied in the broader image processing community for some time [3]. A range of solutions have been proposed, including histogram-based image enhancement, corrections based on color transforms, image region matching, texture- and similarity-based optimization, geometric models and, more recently, deep learning.
A major limitation of many of these approaches is that they require some degree of training or fine-tuning. In the case of deep learning, this often requires large labeled datasets and considerable processing capacity. In addition, a model trained on data from a specific sensor or region may not perform well on images from other platforms or a different context. Given the limited availability of labeled remote sensing ground truth data, as well as the fact that remote sensing imagery can be very diverse and come from many different platforms and sensors, it is desirable to explore unsupervised techniques. Ideally, a deshadowing algorithm would work directly on the image without any external information from other observations or sensors.
The established and well-researched technique of Cellular Automata offers an opportunity for interesting approaches to this task. In the domain of image processing, Cellular Automata have been used for a variety of tasks, from image denoising to enhancement and segmentation [6, 7]. They offer multiple advantageous characteristics over many other approaches:
1. owing to their evolutionary nature, Cellular Automata can produce very complex emergent behavior based on a set of rather simple and easy-to-implement rules;
2. their iterative structure allows information to be dynamically transported through the image “through time”;
3. the application is by nature entirely parallelizable, aiding processing speed and scalability;
4. they can be applied in a semi-supervised or unsupervised way, processing an image without the need for extensive supervision or training.
The use of Cellular Automata in remote sensing is, however, still mostly unexplored. One reason for this may be that Cellular Automata are traditionally described and used for binary problems. Most of the literature therefore addresses (limited-size) discrete-value problems with only a few states. Since remote sensing imagery is usually processed as real-valued reflectance data, the potential of Cellular Automata may not be immediately obvious. Traditional implementations further pose the issue of a large set of possible update rules that must be selected to achieve a certain behavior. Multi-state or continuous versions exist but have so far not received as much attention [6].
In this work, we propose a solution for cloud-cast shadow detection and removal based on continuous Cellular Automata. By treating the update step essentially as a real-valued evolutionary update akin to Differential Evolution or Particle Swarm Optimization, we can adapt the approach to remote sensing reflectance data and avoid the limitations of conventional multi-state approaches, such as time-consuming prior learning or the selection of appropriate update rule sets. This relaxation further makes it possible to use other, non-discrete inputs such as texture or image gradient information.
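To make this concrete, the following minimal Python/NumPy sketch shows one synchronous, real-valued CA step under assumptions introduced purely for illustration: a precomputed shadow mask and a simple neighbourhood-mean update rule (the rule actually used in this work may differ).

import numpy as np

def ca_deshadow_step(img, shadow_mask, alpha=0.5):
    # img: 2D float array of reflectance values
    # shadow_mask: 2D bool array, True where a cast shadow was detected
    # alpha: blending rate (hypothetical parameter)
    padded = np.pad(img, 1, mode="edge")
    # Stack the 8-neighbourhood of every cell (Moore neighbourhood).
    neigh = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(3) for dx in range(3)
                      if not (dy == 1 and dx == 1)])
    # Real-valued update: shadowed cells drift towards the neighbourhood
    # mean, so information from lit regions propagates inwards iteration
    # by iteration ("through time").
    target = neigh.mean(axis=0)
    return np.where(shadow_mask, (1 - alpha) * img + alpha * target, img)

# All cells are updated fully in parallel; iterate until convergence:
# for _ in range(n_iters): img = ca_deshadow_step(img, shadow_mask)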
This concept shares some similarities with graph-based processing techniques that treat the image as a grid of nodes in an undirected graph. In this interpretation, the evolution of a Cellular Automaton can be regarded as equivalent to applying a change to certain nodes and propagating the information through the graph. By reformulating the task in the form of iteratively evolving Cellular Automata, however, updates can be applied fully in parallel to the whole image, and the information is dynamically conveyed through the image over time.
We develop and present the approach primarily based on Sentinel-2 imagery but, in theory, it can be applied to any kind of (earth observation) image data.
References
[1] Wu, Y., Fang, S., Xu, Y., Wang, L., Li, X., Pei, Z., & Wu, D. (2021). Analyzing the probability of acquiring cloud-free imagery in China with AVHRR cloud mask data. Atmosphere, 12(2). https://doi.org/10.3390/atmos12020214.
[2] Prudente, V. H. R., Martins, V. S., Vieira, D. C., Silva, N. R. de F. e., Adami, M., & Sanches, I. D. A. (2020). Limitations of cloud cover for optical remote sensing of agricultural areas across South America. Remote Sensing Applications: Society and Environment, 20. https://doi.org/10.1016/j.rsase.2020.100414.
[3] Rosin, P., Adamatzky, A. & Sun, X. (2014). Cellular automata in image processing and geometry. Springer International Publishing, Switzerland. https://doi.org/10.1007/978-3-319-06431-4.
[4] Shahtahmassebi, A., Yang, N., Wang, K., Moore, N., & Shen, Z. (2013). Review of shadow detection and de-shadowing methods in remote sensing. Chinese Geographical Science, 23(4), 403–420. https://doi.org/10.1007/s11769-013-0613-x.
[5] Chondagar, V., Pandya, H., Panchal, M., Patel, R., Sevak, D., & Jani, K. (2015). A review: shadow detection and removal. International Journal of Computer Science and Information Technologies, 6(6).
[6] Dioşan, L., Andreica, A., Boros, I., & Voiculescu, I. (2017). Avenues for the use of cellular automata in image segmentation. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10199 LNCS, 282–296. https://doi.org/10.1007/978-3-319-55849-3_19.
[7] Popovici, A., & Popovici, D. (2002). Cellular automata in image processing. Fifteenth International Symposium on Mathematical Theory of Networks and Systems, 1, 1–6.
The detection of anomalies can be seen as detecting any data sample deviating from a given expectation. With the increase in coverage and density of acquisitions in SAR imagery brought by the Sentinel-1 mission, we can better model vegetated environments temporally. Indeed, studies have shown that both deforestation [1, 2] and wildfires [3] can be precisely detected using Sentinel-1 time series. While these tools rely on some form of supervision, either through handcrafted thresholds for deforestation detection or through the training of a supervised deep learning algorithm, unsupervised learning applied to vegetation time series has shown a high potential to discriminate and model the temporal profile of crop types and separate them in the autoencoder's embedding space [4].
This automatic separation of SAR time series into groups with similar temporal behaviour can be related to the automatic extraction of fire outlines, a fire acting as an anomaly in a relatively homogeneously forested environment. Thus, we build on this line of work by applying a convolutional autoencoder [4, 5] to a forested area of the Ontario Region, Canada. We use two years' worth of Sentinel-1 σ0 time series, between Jan. 2018 and Dec. 2019, consisting of a total of 71 acquisitions, for both VV and VH polarisations. The deep learning model, illustrated in fig. 1, consists of two components, a convolutional encoder and a decoder (a minimal sketch follows the list):
• The convolutional encoder uses convolutions to extract temporal features from the input time series, which are then transformed by a stack of linear layers and projected onto a low-dimensional embedding space, here of dimension 3;
• The decoder consists of a stack of linear layers tasked with reconstructing the original time series through a mean squared error loss computed between the input time series and the output of the decoder.
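As a minimal illustration, a PyTorch sketch of such an architecture is given below. Only the input shape (71 dates, two polarisations), the 3-dimensional embedding and the mean squared error objective follow the description above; the layer sizes and kernel widths are our own assumptions.

import torch
import torch.nn as nn

class TemporalConvAutoencoder(nn.Module):
    def __init__(self, n_dates=71, n_channels=2, emb_dim=3):
        super().__init__()
        # Convolutions extract temporal features from the time series.
        self.encoder_conv = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU())
        # Linear layers project onto the low-dimensional embedding space.
        self.encoder_fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * n_dates, 64), nn.ReLU(),
            nn.Linear(64, emb_dim))
        # The decoder is a stack of linear layers only.
        self.decoder = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(),
            nn.Linear(64, n_dates * n_channels))

    def forward(self, x):                 # x: (batch, 2, 71)
        z = self.encoder_fc(self.encoder_conv(x))
        recon = self.decoder(z).view(x.shape)
        return z, recon

# Training objective: MSE between input and reconstruction, e.g.
# loss = nn.functional.mse_loss(recon, x)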
With this bottlenecking strategy, the encoder part of the network is trained to extract the most discriminating features from the input time series to generate the embedding space. Thus, temporally contrasting areas will also appear contrasted in the embedding space, but with a higher degree of explicitness, as the projection space has a much lower dimension than the time series. This makes it possible to observe variations of various magnitudes within a dense SAR temporal stack, with applications ranging from agricultural monitoring [4] to forest monitoring, as detailed in this work.
We observe in the generated 3-dimensional embedding space of fig. 1 the presence of a contrasting structure in the bottom-left of the image. When cross-referencing this result with fire outlines provided by the Canadian Wildland Fire Information System (CWFIS), we conclude that the extracted pattern corresponds to a forest fire that occurred during June 2018. The fire perimeter provided by the CWFIS is an estimate made using fire hotspots detected with multiple data sources, including thermal imagery. Thus, the observed differences in the fire outlines between the embedding image and the dataset can be partially explained by the approximations made to generate the CWFIS product. The apparent precision of the fire outline in the forest embedding image is encouraging for potential applications to forest fire mapping or the extraction of other perturbations, such as forest clear-cuts.
[1] Vahid Akbari and Svein Solberg, “Clear-cut detection and mapping using Sentinel-1 backscatter coefficient and short-term interferometric coherence time series,” IEEE Geoscience and Remote Sensing Letters, pp. 1–5, 2020.
[2] Johannes Reiche, Eliakim Hamunyela, Jan Verbesselt, Dirk Hoekman, and Martin Herold, “Improving near-real time deforestation monitoring in tropical dry forests by combining dense Sentinel-1 time series with Landsat and ALOS-2 PALSAR-2,” Remote Sensing of Environment, vol. 204, pp. 147–161, 2018.
[3] Yifang Ban, Puzhao Zhang, Andrea Nascetti, Alexandre Bevington, and Michael Wulder, “Near real-time wildfire progression monitoring with Sentinel-1 SAR time series and deep learning,” Scientific Reports, vol. 10, 01 2020.
[4] Thomas Di Martino, Régis Guinvarc'h, Laetitia Thirion-Lefevre, and Elise Colin Koeniguer, “Beets or cotton? Blind extraction of fine agricultural classes using a convolutional autoencoder applied to temporal SAR signatures,” IEEE Transactions on Geoscience and Remote Sensing, pp. 1–18, 2021.
[5] Thomas Di Martino, Régis Guinvarc'h, Laetitia Thirion-Lefevre, and Elise Colin Koeniguer, “Convolutional autoencoder for unsupervised representation learning of PolSAR time-series,” in 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, 2021, pp. 3506–3509.
In recent years, due to the increased spatial and temporal resolutions of available acquisitions, the volume of earth observation data captured each day has increased dramatically. While this has the potential to be incredibly informative for earth observation tasks, handling such a great volume of data poses many challenges. With deep learning methods increasingly becoming the go-to approach for handling data at these scales, there remain theoretical and methodological hurdles to overcome before these methods can fulfil their promised potential. For example, although often accurate when trained with sufficient well-labelled data, they often generalize poorly when only small amounts of training data are available [1]. This problem is compounded by the fact that, generally, these methods are not capable of quantifying whether or not a given prediction is supported by evidence observed in the training data. Even when a model might give the impression of generalizability, it could well be extrapolating beyond the support of the training data and into unknown regions of the domain. In this work, we present a method which is both semi-supervised and uncertainty-aware, making it ideally suited to problems where labelled data is limited. The model is trained to produce a second-order probability distribution which can be directly mapped to measures of uncertainty such as vacuity (arising from a lack of evidence) and dissonance (arising from conflicting evidence). These uncertainties could in turn be used to identify the regions which need additional expert attention in a targeted fashion.
While other works have already presented uncertainty-aware methods for the classification of remote sensing data [2], these generally make use of the technique known as Monte Carlo (MC) dropout in order to simulate sampling from the posterior [3]. These samples are obtained by applying dropout at inference time and are subsequently used to compute measures of aleatoric and epistemic uncertainty, referring to the lack of confidence arising due to random chance and due to systematic errors, respectively. In addition to aleatoric and epistemic uncertainty, however, the method we present provides the user with quantifications of uncertainty from the theory of subjective logic: vacuity and dissonance. Vacuity measures the lack of confidence caused by a lack of evidence, whereas dissonance measures uncertainty due to the presence of conflicting evidence. This separation is arguably more human-interpretable than either aleatoric or epistemic uncertainty, and the two quantities directly correspond to the task of anomaly and/or misclassification detection. In an operational scenario, highlighting data points considered as potential anomalies/misclassifications would provide the end user with an indication of the regions which might need to be hand-classified. Alternatively, an active-learning scenario could be devised, whereby the model 'asks' for new labels to be provided in regions of high vacuity/dissonance, which are supplied in order to train a new model which predicts new quantities of vacuity and dissonance. This could form an iterative process that allows the domain expert, whose time is valuable and expensive, to provide a minimum viable labelled set.
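For reference, the MC dropout baseline can be sketched in a few lines of PyTorch; the number of samples is an arbitrary choice.

import torch

def mc_dropout_predict(model, x, n_samples=50):
    # Keep dropout layers stochastic at inference time.
    model.train()
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1)
                             for _ in range(n_samples)])
    # The spread of the sampled predictions serves as an
    # uncertainty estimate.
    return probs.mean(dim=0), probs.var(dim=0)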
The values of vacuity and dissonance provided by this method are obtained via second-order probability modelling. That is to say, the model predicts the parameterization of a Dirichlet distribution for each input: a probability distribution over probability distributions. Second-order probabilities play an important part in the decision-making process [4], and their use allows the subjective logic quantities of belief, evidence and uncertainty mass to be predicted and, from these, quantities of vacuity and dissonance to be computed. This probability distribution over probability distributions can be learnt from a partially labelled dataset, making it ideally suited to the task of remote sensing classification, for which labelled data is difficult to come by in any significant quantity and quality. In order to assess the quality of the uncertainties provided by our approach, we present an analysis based on two tasks: identification of data samples for which training observations have not been made, i.e. out-of-distribution (OOD) detection; and identification of model misclassifications. We compare the ability of the presented model to perform these tasks against methods which make use of MC dropout in order to provide evidence that our method obtains more meaningful measures of uncertainty.
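For concreteness, the sketch below computes vacuity and dissonance from predicted Dirichlet parameters using a common subjective-logic formulation (evidence e_k = alpha_k - 1, belief b_k = e_k / S with S the Dirichlet strength, vacuity u = K / S); the exact convention used in this work may differ.

import numpy as np

def vacuity_dissonance(alpha):
    # alpha: 1D array of Dirichlet parameters for one sample (alpha_k >= 1).
    K = alpha.size
    S = alpha.sum()
    belief = (alpha - 1.0) / S             # belief masses b_k
    vacuity = K / S                        # uncertainty mass

    # Dissonance: belief-weighted average "balance" between
    # conflicting belief masses.
    dissonance = 0.0
    for k in range(K):
        others = np.delete(belief, k)
        denom = others.sum()
        if denom > 0:
            balance = 1.0 - np.abs(others - belief[k]) / (others + belief[k] + 1e-12)
            dissonance += belief[k] * (others * balance).sum() / denom
    return vacuity, dissonance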
The analysis is performed using two study datasets, in order to provide assurance of the method’s robustness. The first study uses the 2018 IEEE GRSS Data Fusion Challenge dataset. This dataset consists of ground truth labels for 20 mixed land cover (LC) and land use (LU) classes and multimodal input data from optical, hyperspectral and LiDAR sources. The second study is a damage assessment case study using data from the Port of Beirut explosion in August 2020. In this study, the labels refer to the damage levels of the buildings in the area of the event as assessed by response teams in the immediate aftermath of the explosion.
References
[1] Y. Sun, G. G. Yen, and Z. Yi, “Evolving Unsupervised Deep Neural Networks for Learning Meaningful Representations,” IEEE Trans. Evol. Comput., vol. 23, no. 1, pp. 89–103, Feb 2019.
[2] M. Kampffmeyer, A. B. Salberg, and R. Jenssen, “Semantic Segmentation of Small Objects and Modeling of Uncertainty in Urban Remote Sensing Images Using Deep Convolutional Neural Networks,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops, pp. 680–688, Dec 2016.
[3] Y. Gal and Z. Ghahramani, “Dropout as a Bayesian approximation: Representing model uncertainty in deep learning,” in Proc. 33rd Int. Conf. Mach. Learn., vol. 48, New York, NY, USA, 20–22 Jun. 2016, pp. 1050–1059.
[4] R. W. Goldsmith and N. E. Sahlin, “The Role of Second-Order Probabilities in Decision Making,” Advances in Psychology, vol. 14, no. C, pp. 455–467, Jan 1983.
The increasing contribution of the Antarctic Ice Sheet to sea level rise is linked to reductions in ice shelf buttressing, compounded by ice shelf thinning, weakening and fracturing. Ice shelf shear zones that are highly crevassed with open fractures are a first sign that these shear zones have structurally weakened. The weakening of shear zones by this damage results in speedup, shearing and further weakening of the ice shelf, hence promoting additional damage development. This damage feedback potentially preconditions ice shelves for disintegration and enhances grounding line retreat, and is considered key to the retreat of Pine Island Glacier and Thwaites Glacier as well as the collapse of Larsen B. Although damage feedbacks have been identified as key to future ice shelf stability, they remain among the least understood processes in marine ice sheet dynamics.
Quantifying damage in satellite imagery efficiently and accurately is a challenging task due to the highly complex surface of Antarctica, the variations in viewing-illumination geometry, snow or cloud cover, and the variable signal-to-noise levels of different satellite imagery across the big data archive. As a result, efforts to detect damage from remote sensing are usually limited to regional studies or to limited use of the archive (e.g. only a few scenes) due to computational costs, or limited in spatial resolution, thus only identifying large rifts. First efforts to develop machine learning approaches that enable Antarctic-wide damage detection have been made; however, these approaches rely on manually labelled fractures. On the ice, fractures not only range in size from a few meters to a few kilometers, but heavily damaged areas also deviate from the classical linear shape of a fracture. When creating manual labels, a choice must often be made about which fracture shapes and sizes are (not) included, adding subjectivity to the dataset.
In this study, we apply an unsupervised deep learning approach to create an automated and objective network that detects damage on an Antarctic-wide scale, as well as on high-resolution data. We develop and train this network initially on Sentinel-2 optical image composites (30 m resolution) from the summer of 2019-2020, and then expand it to include Synthetic Aperture Radar (SAR) satellite images from the Sentinel-1 mission (10 m resolution) to overcome the limitations of each individual dataset. At first, a case study of ice shelves in the Amundsen Bay area is developed, before scaling up to cover all Antarctic ice shelves. We use Variational Autoencoders to represent (un-)damaged pixels in the latent space of the network, and create a damage probability map for each pixel.
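As a rough illustration of the building block only, a minimal VAE sketch in PyTorch follows; the layer sizes are placeholders, and how the damage probability is derived from the latent space is not detailed here (one option, assumed below in the final comment, is relating a pixel's encoding to known (un-)damaged reference samples).

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchVAE(nn.Module):
    def __init__(self, n_in=16 * 16, latent=8):   # illustrative sizes
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent)
        self.logvar = nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                 nn.Linear(128, n_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the unit Gaussian prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# A per-pixel damage probability could then, for instance, be derived
# from the position of a pixel's latent encoding relative to reference
# encodings of damaged and undamaged samples.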
Results show the ability of the network to identify damage across the ice shelves, even with only limited benchmark data available. The damage probability indicator captures a greater variety of damage features across size ranges and fracture characteristics than can accurately be achieved by manual mapping. A current challenge for the large-scale application of the network is the strongly unbalanced ratio of damaged to undamaged pixels in the dataset, an imbalance that is less pronounced for the case study in the Amundsen Bay area. For each individual data source, limitations in detecting damage are attributed to cloud cover (optical) and signal-to-noise ratio (SAR), underscoring the added value of combining both datasets.
With this method, we can assess damage across the Antarctic and identify weak ice shelves. Finally, by applying the network to a time series of satellite images, we can analyze the change in damage over the past two decades, studying the development of ice shelf weakening.
While deep learning is nowadays a de facto standard in Earth Observation, the diversity of sensors and tasks often prevents us from relying on a public dataset designed with the right sensors and application in mind. Furthermore, recent advances in UAV imagery greatly ease the generation of RGB data, which can be seen as a fast and cheap way to capture EO data. However, annotations can be time-consuming and expensive, especially for semantic segmentation.
Training a model in a supervised manner requires both data and annotations; for example, one may have acquired RGB data and generated masks for them to detect crops, trails, and roads. However, RGB data alone might not be enough, and often such a limitation is only observed once a model has been trained on those data and subsequently evaluated. Our proposal aims to tackle such an EO data analysis scenario.
We aim to provide a simple framework where one can simply add data with a second modality and see whether the resulting model can use this new information to better discriminate the classes of interest. To do so, we use a Teacher-Student method called Self-Training. Self-Training is a pseudo-labelling algorithm in which a first model, called the Teacher, produces annotations for a second model, called the Student, so that the Student can be trained on more data. However, performing it from RGB to RGB without noise addition, both in data and models, leads to confirmation bias. Instead, since we add a modality here, we provide our Student with more information so that it can find another internal representation of the data and better discriminate the classes of interest.
We add a new temporary model, called the Contributor, designed to perform modality translation. We can then produce a Pseudo-Modality that is used in combination with the single-modality annotated data. We end up with a dataset made of a first subset where we have one real modality paired with a Pseudo-Modality and annotations, and a second subset where we have a pair of two real modalities with Pseudo-Labels.
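Schematically, the pipeline can be summarized as follows; every function and variable name here is a hypothetical placeholder, not an actual API.

# Teacher: trained on the annotated RGB subset.
teacher = train_segmentation(rgb_labeled, labels)

# Contributor: trained to translate RGB into the second modality,
# using the subset where both modalities are available.
contributor = train_translation(rgb_paired, modality_paired)

# Subset 1: labeled RGB lacking the modality -> synthesize a Pseudo-Modality.
pseudo_modality = contributor.predict(rgb_labeled)

# Subset 2: unlabeled RGB with a real second modality -> Pseudo-Labels.
pseudo_labels = teacher.predict(rgb_unlabeled)

# Student: trained on both subsets, seeing two modalities everywhere
# (real where available, pseudo elsewhere).
student = train_segmentation_bimodal(
    inputs=[(rgb_labeled, pseudo_modality),
            (rgb_unlabeled, modality_unlabeled)],
    targets=[labels, pseudo_labels])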
We evaluated our method on the ISPRS Potsdam dataset by extending RGB with either NIR or DSM data. We show not only that our method is able to improve performance compared to the first model, but also that it can be more advantageous than annotating another training set. Furthermore, our method can be combined with other frameworks such as Self-Supervision, where our method serves as the downstream task.
Research into biodiversity and climate change topics like desertification, deforestation, and urbanization requires reliable land cover change detection techniques. Since validation data remains limited and expensive to obtain, multi-temporal unsupervised algorithms have long been the standard approach to land cover change detection. Over the past decade, many new algorithms have emerged, although some never advanced beyond a proof of concept. Combining these unsupervised algorithms in a meta-learner could be a way to improve detection accuracy. Moreover, a meta-learner trained on labels generated from unsupervised algorithms could potentially be transferable, as its input algorithms can be run without the need for labeled land cover change reference data. As most multi-temporal change detection algorithms have only been tested in specific contexts, the question arises as to whether these methods can complement each other in meta-learning, as well as which algorithm is most accurate on a global scale, across all land cover types. Furthermore, a global comparison is interesting since many algorithms overestimate change, whereas land cover conversions are in fact rare events. Additionally, users of numerical models have been found to prefer algorithms they are familiar with, even if alternatives are better suited to the task (Addor & Melsen, 2019).
A planetary-scale comparison of land cover change detection algorithms is now possible since the Copernicus Global Land Service (CGLS) invested in an extensive labeled global reference dataset for the Land Cover 100m project, compiled by the International Institute for Applied Systems Analysis (IIASA) and Wageningen University (Buchhorn et al., 2021). This labeled reference dataset consists of 33,881 locations sampled across all continents in 100m-by-100m tiles. For every sampling location, manually interpreted land cover labels for 10 classes are available for the reference years 2015 to 2018. Of all sampling locations, 2,594 sites (7.7%) were identified as showing land cover change, including major conversion types like crop expansion, urbanization, and deforestation.
We selected three relatively new, open-source, easy-to-use, multi-temporal change detection algorithms available in R for a comparison at the global scale: Breaks For Additive Seasonal and Trend Lite (BFAST Lite) (Masiliūnas et al., 2021), Detecting Breakpoints and Estimating Segments in Trend (DBEST) (Jamali et al., 2015), and Facebook's Prophet (Taylor & Letham, 2018). The three algorithms were applied to detect land cover changes in NDVI time series data from Landsat 8 (2014-2020). For each location, the change detection output of each algorithm was compared against the reference data, resulting in F1 score, precision, recall, and sensitivity values per algorithm. Additionally, all algorithms were compared in terms of performance across land cover transition types using the same metrics.
To examine the relative importance of the three land cover change detection algorithms, we construct a random forest (RF) meta-learner using the change-point magnitudes as inputs (0 for no change, and a value between 0 and 1 for change). The RF is applied to the labeled reference data of 2015-2018. Although the RF is not a fully unbiased meta-learner, as the reference data is not a fully representative, spatially uncorrelated sample of all land cover transition types, it is an important step towards a global land cover change detection transfer learner.
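A minimal sketch of such a meta-learner, written here in Python with scikit-learn purely for illustration (the compared algorithms themselves run in R; the input arrays are placeholders, filled with random data so the sketch runs):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder inputs; in practice these are the change-point magnitudes
# returned by BFAST Lite, DBEST and Prophet per reference site
# (0 where no break was found), and the CGLS reference labels.
rng = np.random.default_rng(0)
bfast_magnitude = rng.random(1000)
dbest_magnitude = rng.random(1000)
prophet_magnitude = rng.random(1000)
y = rng.integers(0, 2, 1000)          # 1 = land cover change, 0 = no change

X = np.column_stack([bfast_magnitude, dbest_magnitude, prophet_magnitude])

# class_weight="balanced" counters the rarity of change events (7.7%).
meta = RandomForestClassifier(n_estimators=500, class_weight="balanced")
meta.fit(X, y)

# Feature importances indicate the relative contribution of each
# change detection algorithm to the meta-learner.
print(meta.feature_importances_)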
This research compares the performance of three novel land cover change detection algorithms. The outcomes are relevant to research into meta- and transfer learning, as well as to users seeking to understand which land cover change detection algorithm to choose for their task, contributing to more accurate land cover change detection, which in turn improves global land cover information and integrated monitoring campaigns.
Addor, N., & Melsen, L. A. (2019). Legacy, rather than adequacy, drives the selection of hydrological models. Water resources research, 55(1), 378-390.
Buchhorn, M., Bertels, L., Smets, B., Roo, B.D., Lesiv, M., Tsendbazar, N.E., Masiliūnas, D., & Li, L. (2021). Copernicus Global Land Service: Land Cover 100 m: Version 3 Globe 2015–2019: Algorithm Theoretical Basis Document. Zenodo: Genève, Switzerland.
Jamali, S., Jönsson, P., Eklundh, L., Ardö, J., & Seaquist, J. (2015). Detecting changes in vegetation trends using time series segmentation. Remote Sensing of Environment, 156, 182-195.
Masiliūnas, D., Tsendbazar, N. E., Herold, M., & Verbesselt, J. (2021). BFAST Lite: A Lightweight Break Detection Method for Time Series Analysis. Remote Sensing, 13(16), 3308.
Taylor, S. J., & Letham, B. (2018). Forecasting at scale. The American Statistician, 72(1), 37-45.
The increase of remote sensing satellite imagery with high spatial and temporal resolutions requires new procedures able to manage the high volume of data stored and transmitted to the ground. Advanced techniques of on-board data processing can address this problem, offering the possibility to select only the data of interest for a specific application, or to extract specific information from the data [1]. In change detection applications, for example, only images containing changes are worth being stored and sent to the ground. However, the computational resources available on board are limited compared to those of the ground segment. Therefore, models characterized by both light architectures and adequate accuracy have to be considered.
Auto-Associative Neural Networks (AANNs) are multi-layer perceptron networks composed of multiple layers of units with nonlinear activation functions. Unlike a standard topology, an AANN uses three hidden layers, including a middle bottleneck layer of smaller dimension than either the input or output layers, which have the same number of nodes. Therefore, the network has a symmetrical structure that can be viewed as two successive functional mappings: the first half of the network maps the input vector to a lower-dimensional subspace via the encoder module, while the second half decodes it back via a decoder module. The targets used for training are simply the input vectors themselves, so that the network attempts to map each input vector onto itself. Label data are thus generated from the input data, making the AANN an effective instance of the self-supervised learning approach, where ground information or labelled output is not required [2], [3].
This study presents a change detection method which relies on a feature-based representation of Sentinel-2 images obtained by means of an AANN. Owing to these characteristics, AANNs have already been used to perform dimensionality reduction and to extract relevant features from remote sensing images. Indeed, the bottleneck layer forces a compressed knowledge representation of the original input. The number of neurons in the bottleneck layer is a critical choice: a small number of units means a significant dimensionality reduction of the input data, but if the number is too small, the input-output associative capabilities of the network become too weak [4].
In this work, Sentinel-2 images were used to build the dataset. They were divided into sub-images, i.e. patches, and input to the AANN where, by means of the bottleneck layer, they were reduced 32-fold in terms of pixels. To validate the method, the OSCD (Onera Satellite Change Detection) dataset, based on multi-temporal pairs of Sentinel-2 images, was used [5]. During the testing phase, the trained AANN was used to extract from a multi-temporal pair of image patches the corresponding pair of smaller representative vectors. Then, the dissimilarity between the two vectors was related to the occurrence of changes by measuring the Euclidean distance between them. Another technique is also considered in this study as a benchmark for the performance analysis of the presented methodology: the results obtained with the AANN compression have been compared with those obtained with a Discrete Wavelet Transform (DWT) compression, which is widely recognized as very effective for lossy image compression [6]. The results demonstrated that, for change detection purposes, image encoding by means of AANNs led to better results than DWT. In particular, against the considered ground truth, we obtained an F1 score of 97% with AANN compression, compared with an F1 score of 86% using DWT compression.
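The change scoring step can be sketched as follows, assuming a trained encoder that maps a flattened patch to its bottleneck vector; the thresholding detail at the end is our assumption.

import numpy as np

def change_score(encoder, patch_t1, patch_t2):
    # Encode both acquisitions of the same patch into their compact
    # bottleneck representations (32x smaller than the input).
    v1 = encoder(patch_t1.ravel())
    v2 = encoder(patch_t2.ravel())
    # Euclidean distance between the two compressed representations;
    # larger distances correlate with changes between acquisitions.
    return np.linalg.norm(v1 - v2)

# A patch pair would then be flagged as changed when the score exceeds
# a threshold calibrated on the OSCD ground truth.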
[1] B. Qi, H. Shi, Y. Zhuang, H. Chen and L. Chen, “On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery,” Sensors, vol. 18, no. 5, 2018.
[2] L. Zhang, L. Zhang and B. Du, “Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art,” IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 22-40, June 2016.
[3] P. Baldi, “Autoencoders, Unsupervised Learning, and Deep Architectures,” Unsupervised and Transfer Learning Challenges in Machine Learning, vol. 7, 2012.
[4] G. Licciardi and F. Del Frate, “Pixel unmixing in hyperspectral data by means of neural networks,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 11, pp. 4163-4172, November 2011.
[5] R. Daudt, B. Le Saux, A. Boulch and Y. Gousseau, “Urban Change Detection for Multispectral Earth Observation Using Convolutional Neural Networks,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2018.
[6] M. M. H. Chowdhury and A. Khatun, “Image Compression Using Discrete Wavelet Transform,” IJCSI International Journal of Computer Science Issues, vol. 9, no. 1, 2012.
Professional sea ice analysts who make sea ice charts do so by interpreting microwave Synthetic Aperture Radar (SAR) textures and drawing polygons in a creative process with an associated set of rules. This is similar to the art of painting a portrait or a fixed object: a task constrained by creativity and without a single correct solution. Therefore, a trivial deep learning approach that simply maps input to output may not be a suitable technique for automating such human endeavours. However, contemporary developments in generative neural networks could be a potential avenue for automating this process in a creative way. Furthermore, the existing state-of-the-art automated sea ice charting methods rely either on manual labelling of satellite images or, more recently, on segmentation convolutional neural networks, such as the U-Net model. The main drawback of the latter is that it still relies on handcrafted labels (i.e. sea ice charts), which may not be created to the same standard among different national ice services or may have a low level of detail, for example in areas less accessible to maritime operations (i.e. areas of fully covered sea ice). Creating an AI solution that can perform the charting task in an unsupervised or semi-supervised way (using only partially annotated datasets) could offer valuable assistance, either by supporting manual interpretation of the satellite data or even by freeing up human resources; in either case, it could cut the time required for labelling the data from hours to seconds.
We develop a data pipeline for semantic segmentation of sea ice using the U-Net model adapted to incorporate the dualistic nature of the CycleGAN input. We modified the original CycleGAN (which uses RGB images) to work with satellite imagery and created a high-performance computing environment to facilitate training of a four-network ensemble (2 critics, 2 generators). One generator takes satellite images as its input and tries to predict the corresponding sea ice segmentation maps, while the other takes the sea ice charts and tries to reconstruct the satellite input that could have produced them. Meanwhile, the two respective critics discriminate between satellite images or ice charts, trying to decide which ones are real and which ones were synthetically produced. A proportion of the satellite data may be associated with corresponding charts, and this data is then used as another optimization objective for the network ensemble, transforming the unsupervised problem into a semi-supervised one.
In summary, the objective function of the models’ training has six components in the unsupervised setting, measuring the following qualities:
* How well can the generator model taking SAR images as the input match the distribution of its output with human-made sea ice charts (generator critic sea ice chart loss),
* How well can the generator model taking sea ice charts as the input match the distribution of its output with satellite acquired SAR images (generator critic SAR loss),
* How well can one of the critic models distinguish between the synthetic and real SAR images (critic SAR loss),
* How well can the other critic model distinguish between the machine-made and human-made sea ice charts (critic sea chart loss),
* How well can the generator taking SAR images as its input and outputting sea ice charts reconstruct the sea ice chart, if its SAR input was a synthetic output of the other generator taking this sea ice chart as its input (sea ice chart cycle loss),
* How well can the generator taking sea ice charts as its input and outputting SAR images reconstruct the SAR, if its sea ice chart input was a synthetic output of the other generator taking this SAR image as its input (SAR cycle loss).
In the semi-supervised variant, we also add two objectives judging whether the synthetic data matches the associated expected labels (which are either the sea ice charts or the satellite images). All these loss function components have to be balanced appropriately in order to create a stable training framework.
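Schematically, the combined objective can be sketched as below; gan_loss and l1 are placeholder helpers, and the exact GAN formulation and balancing weights are assumptions.

def cyclegan_losses(sar, chart, G_s2c, G_c2s, D_sar, D_chart, paired=None):
    fake_chart = G_s2c(sar)            # SAR -> ice chart generator
    fake_sar = G_c2s(chart)            # ice chart -> SAR generator

    losses = {
        # Generators try to fool the respective critics.
        "gen_chart": gan_loss(D_chart(fake_chart), real=True),
        "gen_sar": gan_loss(D_sar(fake_sar), real=True),
        # Critics try to separate real from synthetic samples.
        "critic_sar": gan_loss(D_sar(sar), real=True)
                      + gan_loss(D_sar(fake_sar.detach()), real=False),
        "critic_chart": gan_loss(D_chart(chart), real=True)
                        + gan_loss(D_chart(fake_chart.detach()), real=False),
        # Cycle-consistency in both directions.
        "cycle_chart": l1(G_s2c(fake_sar), chart),   # chart -> SAR -> chart
        "cycle_sar": l1(G_c2s(fake_chart), sar),     # SAR -> chart -> SAR
    }
    if paired is not None:             # semi-supervised extension
        sar_p, chart_p = paired
        losses["sup_chart"] = l1(G_s2c(sar_p), chart_p)
        losses["sup_sar"] = l1(G_c2s(chart_p), sar_p)
    # In practice, critic and generator terms are optimized in
    # alternating steps, each with its own balancing weight.
    return losses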
The impact of this project falls in the broader category of using AI for sea ice segmentation. Generally, it can be summarized as greatly speeding up the process of producing charts while reducing the required human labour. Consequently, it can help enable Arctic trading routes. Moreover, this work can shift the paradigm of human annotation from categorically absolute correctness to a creative measure of belief. Sea ice charting is one of the few examples for which absolute-truth labelling is insufficient to reflect the complex reality of human intuition. However, our approach offers a possibility of capturing this intuition within an AI model. Furthermore, the progression from the supervised to the semi-supervised approach offers the following benefits: less annotated data is required to train the models, the training is less prone to human labelling errors, and it can bridge the gap between incompatibly labelled Pan-Arctic institutional sea ice charts.
Over the last decades, remote sensing data has become an abundant resource for earth observation in many different domains, with a vast socio-economic impact. Different platforms, like Sentinel Hub, USGS, Creodias, Urban TEP, and many others, readily provide these data for different needs with selectable spatio-temporal coverage and quality levels. Furthermore, the recent advent of machine learning, especially in the field of deep learning, has resulted in a myriad of applications solving different interdisciplinary problems. However, the combination of pervasive remote sensing data with deep neural networks is limited by the availability of the labeled data that is mandatory for supervised learning.
We present a novel method that is automated and configurable through exposed parameters to enable training with synthetic but noisy labels in a supervised approach. It avoids the need for expensive manual labeling of training data and fits well into the stochastic learning process. It is designed to support large-scale scenarios with minimal temporal and spatial restrictions by using level-1 data directly. To reduce dependence on a single labeling method, we combine two different data types, namely synthetic aperture radar (SAR) and optical multispectral remote sensing data, together with a combination of two different state-of-the-art change detection methods for increased control and quality. These change detection methods are based on the Omnibus test statistic from Conradsen et al. [1] and our modification of ENDISI from Chen et al. [2]. The use of both change detection methods is flexible and can be applied to different types of changes, like urban changes, deforestation, changes in water topologies, or earth slides, as long as a suitable index method for multispectral optical data exists and the changes are noticeable in the SAR backscatter. What is more, our method works with observation windows as individual (training) data samples; these windows contain a large number of observations, hence referred to as deep-temporal, and can be irregular. For example, two one-year windows can have a different number of observations depending on how much usable level-1 data is available in the respective observation period.
We demonstrate our method by applying it to urban change monitoring [3], which helps to identify new settlements or to analyze trends of urban sprawl over decades. Two different models are trained using the mission pairs ERS-1/2 and Landsat 5 TM, and Sentinel-1 and -2, covering 1991-2011 and 2017 up to today, respectively. The observation data considered is of level-1, for maximum spatio-temporal resolution, and is provided by the USGS, ESA, and Sentinel Hub services. All observations are co-registered, and available cloud masks were used to remove the majority, though neither fully nor always correctly, of atmospheric obstructions. Observations that fail co-registration are automatically discarded. All observations are windowed into 1-year (ERS-1/2 and Landsat 5 TM) or 6-month (Sentinel-1 and -2) periods containing a varying number of observations, with a maximum of 110 and 92 observations, respectively. A novel architecture with an ensemble of fully convolutional deep neural networks is trained for this task. The deep-temporal, multiple-data-type observation windows are used to train each ensemble part of the network, which can also be applied partially to just one data type if needed. For training, two areas of interest (AoIs), the cities of Rotterdam (Netherlands) and Limassol (Cyprus), are used, with a third AoI containing the city of Liège (Belgium) for validation purposes. To train with such large datasets, data-parallel deep learning with Horovod and multiple NVIDIA Tesla A100 GPUs is applied. Our solution is available on GitHub [4], which includes data preparation, training, and inference scripts, as well as the neural network architecture and trained models for direct use.
[1]: https://doi.org/10.1109/TGRS.2015.2510160
[2]: https://doi.org/10.1117/1.JRS.13.016502
[3]: https://www.mdpi.com/2072-4292/13/15/3000
[4]: https://github.com/It4innovations/ERCNN-DRS_urban_change_monitoring
In the past decade, deep learning has achieved great success in computer vision due to the extensive amount of visual data becoming available. Such achievements demonstrate the necessity and importance of collecting large-scale datasets in order to train those data-hungry neural networks [1]. One of the main challenges is that most state-of-the-art methods rely on labeled data, while labeling procedures for real-world applications usually require highly skilled expertise, rendering them costly and impractical. To overcome such challenges, practitioners are turning their attention to transfer learning and self-supervised learning. Transfer learning focuses on adapting neural networks pretrained on a large dataset from the source domain to a much smaller dataset from the target domain, such that the target data can also benefit from the general knowledge learnt from the source dataset. On the other hand, self-supervised learning tries to discover the underlying patterns without the guidance of labels, such that it can produce a general representation of the data. Recent advancements [2, 3] show that combining these two approaches might be a powerful solution to the aforementioned challenges.
Following their success in computer vision, deep neural networks are also increasingly favored by the remote sensing community [4]. Thanks to open-access satellites such as Sentinel-2, it has become possible to acquire significantly more Earth Observation (EO) images than ever before [5], accelerating research into applying deep learning-based approaches to the remote sensing domain. Although there are already some works, such as [6, 7], that develop deep learning-based methods for remote sensing tasks like change detection and land cover classification, they primarily focus on directly training networks on small target datasets in a fully supervised manner. On the other hand, considering that most of these tasks normally require pixel-level outputs, it could be extremely costly to build a large EO dataset for training neural networks with label supervision. Thus, combining transfer learning and self-supervised learning becomes a reasonable solution to achieve better performance. However, this brings its own challenges, one of which is image resolution. Since pixels of EO data need to represent meaningful objects with enough detail, each image can have a very high resolution while retaining a reasonable global context for training, which prevents such images from fitting into GPU memory or leads to impractical training times.
In this work, we explore the opportunity of applying self-supervised transfer learning to change detection in remote sensing. More specifically, we tackle this challenge by using different resolutions for self-supervised pretraining and supervised fine-tuning, respectively. That is, we resize the large-scale source images to a smaller resolution for self-supervised pretraining such that they fit on affordable GPU hardware. Then, with the pretrained network, we fine-tune on the small target dataset at high resolution to make sure the pixel-wise output contains enough detail. To achieve this, we propose two improvements over the commonly used approach [2]. First, we use Vision Transformers (ViTs) as our neural network backbones instead of conventional Convolutional Neural Networks (CNNs) to avoid the disagreement between pretraining and fine-tuning input resolutions, since ViTs depend only on a fixed patch size rather than the image size. Second, we impose a consistency loss between the representations of low- and high-resolution images to make sure they represent the same global content. In our experiments, we aim to evaluate the proposed method via the building damage assessment task. We will pretrain the ViT on the large-scale Functional Map of the World (fMoW) [8] dataset using state-of-the-art self-supervised learning, then fine-tune it on the xBD [9] dataset. We expect that both proposed improvements can lead to higher performance on the target dataset.
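A minimal sketch of the proposed consistency term follows; the backbone is assumed to return one global representation per image, and contrastive_loss stands in for any standard SSL objective.

import torch
import torch.nn.functional as F

def pretraining_losses(vit, img_hi, contrastive_loss, scale=4):
    # Low-resolution view used for affordable self-supervised pretraining.
    img_lo = F.interpolate(img_hi, scale_factor=1 / scale, mode="bilinear")

    # A ViT with a fixed patch size accepts both resolutions without
    # architectural changes.
    z_hi = vit(img_hi)                 # (batch, dim) global representations
    z_lo = vit(img_lo)

    # Consistency: both resolutions should encode the same global content.
    l_consistency = 1 - F.cosine_similarity(z_hi, z_lo).mean()

    # Combined with a standard contrastive objective on the low-res view.
    return contrastive_loss(z_lo) + l_consistency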
References:
[1] A Krizhevsky, et al. “ImageNet Classification with Deep Convolutional Neural Networks.” NeurIPS 2012.
[2] X Chen, et al. “Improved Baselines with Momentum Contrastive Learning.” arXiv:2003.04297
[3] M Caron, et al. “Emerging Properties in Self-Supervised Vision Transformers.” ICCV 2021.
[4] T Hoeser, et al. “Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review-Part I: Evolution and Recent Trends.” Remote Sensing, 2020.
[5] D Phiri, et al. “Sentinel-2 Data for Land Cover/Use Mapping: A Review”. Remote Sensing, 2020.
[6] A Di Pilato, et al. “Deep Learning Approaches to Earth Observation Change Detection.” Remote Sensing, 2021.
[7] N Kussul, et al. “Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data.” IEEE Geoscience and Remote Sensing Letters, 2017.
[8] G Christie, et al. “Functional Map of the World.” CVPR 2018.
[9] R Gupta, et al. “Creating xBD: A Dataset for Assessing Building Damage from Satellite Imagery.” CVPR Workshops 2019.
Summary: We identify a potential problem in a popular adaptation of self-supervised learning to remote sensing: the automatically generated positive contrastive pairs could be contaminated due to the unsupervised nature of the data. We investigate whether this has a practically relevant impact on downstream task performance.
With the advances in satellite technology, more and more unlabeled satellite images are becoming available. Methods of machine learning, specifically deep neural networks, have proven very adept at solving computer vision tasks in other domains. However, they typically require large numbers of expert-labeled training images to work well, which can take large amounts of time and money to acquire. A solution is provided by methods of self-supervised learning, which can make use of the vast amounts of unlabeled satellite images that are accessible nowadays.
The most successful methods in self-supervision use forms of contrastive learning [1, 2]. The model is trained to embed the information contained in an image into a low-dimensional vector space. It is then trained to output similar embeddings for image pairs which the user has defined as similar (positive pairs), and dissimilar embeddings for those defined as dissimilar (negative pairs). The key is that this distinction between positive and negative pairs can be made automatically, without requiring manual labeling, thereby allowing these methods to use completely unlabeled data. On standard computer vision datasets, positive pairs consist of two augmented versions of the same input image. For this, augmentations that imitate real-world variation between images have to be hand-picked. In remote sensing, however, we are able to make use of auxiliary data to create positive image pairs which show naturally occurring real-world variation between the two images. [3] and [4] both introduce methods that define positive pairs as pairs of images that (partially) show the same location but were taken at different points in time. These positive image pairs are referred to as temporal positive pairs (TPs) in [3]. The expected result of this approach is that the model will learn to ignore changes that often occur between two images taken at different points in time, like low amounts of cloud coverage, cloud shadows, seasonal changes in vegetation or image acquisition artifacts.
The research question that we propose hinges on the following. The success of the temporal positives method relies on the fact that changes between TPs are mostly irrelevant to the downstream task. If, for example, many images from before a wildfire are paired with images of the same locations after the wildfire, the network will learn to ignore wildfire-induced changes. This would likely negatively impact the performance on a fire detection downstream task. For supervised learning, it would be no problem to exclude such image pairs from the training dataset. However, in self-supervised learning, we assume that we have unlabeled data, so we cannot immediately filter out such image pairs and have to assume that they might be present in our dataset. The research question we propose is to investigate how big the influence of false temporal positives (FTPs) on downstream task performance is. We investigate this by first training models m1, m2, ... on datasets d1, d2, ..., which contain varying proportions p1, p2, ... of FTPs. Then, we use the representations learned by the models and train a simple classifier on the downstream task. We expect that models with higher proportions of FTPs will perform worse on the downstream task, since they are encouraged during training to ignore features important to the downstream task.
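Schematically, the experimental protocol looks as follows; all functions are hypothetical placeholders.

# Proportions of false temporal positives injected into pretraining.
ftp_proportions = [0.0, 0.1, 0.25, 0.5]
results = {}

for p in ftp_proportions:
    # Build temporal pairs where a fraction p straddles a real change
    # event (false temporal positives).
    pairs = make_temporal_pairs(images, ftp_fraction=p)
    encoder = train_contrastive(pairs)        # e.g. SimCLR/MoCo-style

    # Freeze the encoder; evaluate with a simple downstream classifier.
    classifier = train_linear_probe(encoder, labeled_train)
    results[p] = evaluate(classifier, labeled_test)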
A preliminary study on the CIFAR10 [5] dataset showed that injecting false positive pairs into self-supervised training deteriorates downstream task performance, with more false positive pairs leading to worse performance. Preliminary experiments on a simple cloud detection task [6] show that increasing the proportion of FTPs from 10% to 25% leads to a 7% drop in segmentation performance. This demonstrates that FTPs can have a negative impact on downstream performance. The results are currently limited by the small size of the dataset, as well as by the label quality. Further experiments on larger datasets and different tasks are being conducted to ascertain both whether the effect occurs for practically relevant FTP proportions and how it varies across tasks. The results will be presented at the Living Planet Symposium.
[1] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. “A simple framework for contrastive learning of visual representations”. International conference on machine learning. PMLR, 2020, pp. 1597–1607.
[2] K. He, H. Fan, Y. Wu, S. Xie, and R. B. Girshick. “Momentum Contrast for Unsupervised Visual Representation Learning”. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020), pp. 9726–9735.
[3] K. Ayush, B. Uzkent, C. Meng, K. Tanmay, M. Burke, D. Lobell, and S. Ermon. “Geography-Aware Self-Supervised Learning”. arXiv:2011.09980 [cs] (Dec. 2020). arXiv: 2011.09980.
[4] M. Leenstra, D. Marcos, F. Bovolo, and D. Tuia. “Self-supervised Pretraining Enhances Change Detection in Sentinel-2 Imagery”. Vol. 12667 LNCS. ISSN: 0302-9743 Meeting Name: 25th International Conference on Pattern Recognition Workshops, ICPR 2020. 2021, pp. 578–590. doi: 10.1007/978-3-030-68787-8_42.
[5] A. Krizhevsky. “Learning Multiple Layers of Features from Tiny Images” (2009), pp. 32–33. URL: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf (Last accessed: 09.12.2021)
[6] S. Ji, P. Dai, M. Lu, and Y. Zhang. “Simultaneous Cloud Detection and Removal From Bitemporal Remote Sensing Images Using Cascade Convolutional Neural Networks”. IEEE Transactions on Geoscience and Remote Sensing 59.1 (Jan. 2021). pp. 732–748. doi: 10.1109/TGRS.2020.2994349.
In the last decade, deep learning has led to important developments in the remote sensing community, with its main successes in supervised learning. However, after years of fast improvements on various benchmarks, the supervised learning paradigm is showing its limits. Indeed, training accurate and robust deep neural networks requires significant amounts of labeled data, which strongly impedes the applicability of deep learning in real-world scenarios because of the prohibitive cost of data annotation.
Recently, self-supervised learning (SSL) has shown great success in various computer vision tasks and raised wide attention in the remote sensing community. With the advantage of learning representations from large-scale unlabeled data without human annotation, SSL is an especially promising methodology for remote sensing and earth observation. Indeed, the increasing availability of open-access data in earth observation provides a unique opportunity to leverage SSL on many problems. However, despite its big success on natural images, most of the potential of SSL in remote sensing imagery remains locked.
In this work, we provide an empirical study of the performance of self-supervised learning for space-borne imagery. Specifically, we conduct extensive experiments on three well-known remote sensing datasets (BigEarthNet, SEN12MS and LCZ42) using four representative state-of-the-art SSL algorithms: MoCo, SwAV, SimSiam and Barlow Twins. We analyze the performance of the SSL algorithms under different data regimes and compare them to vanilla supervised learning. In addition, we explore the impact of data augmentation, which is known to be a key component in the design and tuning of modern SSL methods.
Experimental results confirm the great potential of self-supervised learning in remote sensing. Indeed, we show it is possible to pre-train deep neural networks without any labels and learn representations that are highly effective for classification tasks. This result consistently holds for all the algorithms and datasets that have been tested. Moreover, we highlight the benefits of SSL under the regime of limited labels: when only 1% of the labels are available, SSL performs much better than supervised learning. In addition, our analysis of data augmentation shows the importance of the frequently used cropping operation, but questions the adequacy of other common transformations that have been shown to have a critical impact on natural images. This confirms the necessity of devoting more efforts into the design of augmentation strategies that are tailored for earth observation data, as well as SSL algorithms that go beyond hand-crafted transformations.
In the last decades, hyperspectral imaging has gained a lot of attention in the remote sensing community for its ability to provide very detailed spectral information about objects. Moreover, thanks to the development of lower-cost sensors, many open-access datasets have been generated and made available to researchers. This in turn has driven significant methodological developments in hyperspectral image processing and analysis. In particular, deep learning, with its promise to extract highly non-linear relationships from data and learn task-driven representations, has dominated the field in recent years. It is now the standard approach for many problems, such as hyperspectral image classification.
An important limitation of deep learning, particularly of the supervised learning paradigm, is the need for large labeled datasets to train robust and accurate models. This obstacle is even more pronounced for hyperspectral image classification, where annotating thousands of pixels for every captured scene is highly impractical. Fortunately, many approaches to mitigate this label efficiency issue have been developed in the machine learning and remote sensing literature. In particular, Self-Supervised Learning (SSL) has recently shown very promising results in computer vision with its ability to learn useful representations from unlabeled data.
In this paper, we explore the use of self-supervised learning for hyperspectral image classification in a limited-label setting. The rationale behind such an approach is to exploit the unlabeled pixels in a scene to learn useful features that can generalize well for classification with few labels. In particular, we investigate the use of three state-of-the-art self-supervised learning algorithms from the computer vision literature, namely MoCo v2, SwAV and Barlow Twins, and evaluate them on two well-known hyperspectral image classification datasets: Pavia University and Houston University. We focus on a few-shot learning scenario, where only a small number of labeled samples (typically K between 5 and 10) per class are available. We show that this approach is applicable to both 1D and 2D convolutional neural networks. We also investigate, through extensive experiments, the impact of data augmentation in the pre-training phase.
Our results demonstrate that self-supervised learning is a promising approach for hyperspectral image classification. Indeed, by leveraging the unlabeled pixels in the pre-training phase and a simple linear/fine-tuning protocol for classification, we outperform supervised learning baselines by a significant margin. The superiority of this SSL-based pipeline over supervised learning is consistent across methods, models, datasets and training-set sizes. Moreover, these results are obtained with minimal pre-processing of the data; no band selection or dimensionality reduction techniques are applied. Additionally, we show that the choice of the data augmentation strategy and its strength has a significant impact on performance. Thus, more effort should be devoted to the design of good transformations for hyperspectral data.
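As a concrete illustration of the few-shot protocol (a hedged sketch under our own assumptions, not the paper's code), the snippet below samples K labeled pixels per class and fits a linear probe on features from a frozen, SSL-pre-trained encoder; the feature dimension, class count and data are stand-ins:

```python
# Hedged sketch of K-shot linear-probe evaluation on frozen SSL features.
import numpy as np
from sklearn.linear_model import LogisticRegression

def k_shot_split(labels: np.ndarray, k: int, rng=np.random.default_rng(0)):
    """Return indices of k samples per class (labels: 1-D integer array)."""
    idx = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        idx.extend(rng.choice(members, size=k, replace=False))
    return np.array(idx)

# features: N x D array produced by a frozen, SSL-pre-trained encoder
features = np.random.rand(1000, 128)          # stand-in for encoder output
labels = np.random.randint(0, 9, size=1000)   # stand-in for ground truth
train_idx = k_shot_split(labels, k=5)         # K = 5 labels per class
probe = LogisticRegression(max_iter=1000).fit(features[train_idx],
                                              labels[train_idx])
accuracy = probe.score(features, labels)      # evaluate on the full scene
```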
Recent decades have shown a rapid increase in urbanization, accompanied by the fast growth of the human population as well as of the populated surface area of the Earth [1]. According to statistics provided by the United Nations (UN), the proportion of the world’s population living in urbanized areas increased from 30 percent in 1950 to 55 percent in 2018, and it is projected that 2 out of 3 people will live in urban areas in 2050 [2]. Since such rapid urbanization can have drastic effects, quantifying it is increasingly important, as reflected in Sustainable Development Goal (SDG) 11 (sustainable cities and communities) proposed by the UN [3]. However, there are still over 100 low- and middle-income countries that lack the ability to develop Civil Registration and Vital Statistics (CRVS) systems, which are crucial for urban planning, environmental monitoring, and disaster emergency response [4]. Therefore, satellite-based earth observation techniques have attracted considerable attention from the research community and have played a significant role in mapping urban areas on a large scale. Several approaches have been developed to systematically collect temporal and spatial human settlement extent (HSE) information from earth observation [1]. Remote-sensing-based works have shown the potential of deriving urban footprints directly from satellite images, with methods ranging from feeding hand-crafted features into machine learning models such as random forests to novel end-to-end convolutional neural network architectures such as Sen2HSE-Net [5,6].
Although AI methods have achieved great success in HSE information extraction, the datasets they employ mainly focus on spatial and bi-temporal information, which does not make the best of the multi-temporal nature of satellite imagery [6,7,8]. To advance this field and enable new methods in this domain, Van Etten et al. hosted the Multi-Temporal Urban Development Challenge at the 2020 NeurIPS conference and released a large dataset called the Multi-Temporal Urban Development SpaceNet (MUDS, also known as SpaceNet 7) dataset [9,10]. The SpaceNet 7 dataset contains about two dozen satellite images (one per month) for each of 101 rapidly urbanizing areas of interest (AOIs). Over three hundred challenge participants were asked to identify new building constructions and track existing buildings through September and October 2020 [9,10]. The main difficulty in that challenge is that the SpaceNet 7 dataset contains many small buildings at high density, which are challenging for models to track and extract. Moreover, satellite imagery sequences commonly contain irrelevant changes such as seasonal change and cloud obstructions. To address these problems, the winners of the challenge adopted two strategies: they first preprocess the satellite imagery at a 3x scale and apply HRNet to maintain the high resolution throughout the whole network, and then propose a new post-processing module called Temporal Collapse, which uses the average probability maps of building segmentation as references to simulate a constant function for tracked building/non-building pixels and a step function for new constructions [10]. These two strategies achieved great success on the evaluation metrics, improving the change score from 0.06 (baseline) to 0.20 and the SCOT metric from 0.17 (baseline) to 0.41 [10].
After reproducing and carefully analyzing the winners’ solution, we identified two aspects of the method that could be improved: (i) finding the optimal thresholds is challenging and time-consuming, and (ii) the sequence of building probability maps may oscillate excessively over time. Thus, in this work, we propose a new method called temporal consistency regularization to improve the learned representation in HRNet. We add a branch that feeds a temporal batch from the same AOI to HRNet and extracts the corresponding feature representations inside HRNet. Moreover, we propose a novel consistency loss L = (h_1 - h_2)^2 * (1 - 2|y_1 - y_2|), where h_1, h_2 are the feature representations from the two branches and y_1, y_2 are the ground-truth building segmentations, to keep the representation from changing drastically from one frame to another, except for pixels belonging to new or collapsed constructions. Although consistency regularization has shown state-of-the-art performance in many computer vision tasks such as image classification [11] and image-to-image translation [12], to the best of our knowledge our work is the first to apply this concept to the multi-temporal change detection task in remote sensing. Qualitative results on the average probability maps from one AOI show that our method achieves better representations than the winners’ solution. To rigorously investigate the proposed regularization, we plan to devise novel evaluation metrics for change detection and thereby show the qualitative and quantitative benefits of our approach. The complete results of our method will be presented at the Living Planet Symposium.
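A direct PyTorch transcription of this loss could look as follows (a minimal sketch; the tensor shapes are our assumption, as the abstract does not specify them):

```python
# Transcription of L = (h_1 - h_2)^2 * (1 - 2|y_1 - y_2|) from the text.
import torch

def temporal_consistency_loss(h1, h2, y1, y2):
    """
    h1, h2: feature maps (B x C x H x W) from two frames of the same AOI.
    y1, y2: binary building masks (B x 1 x H x W) for the same frames.
    Where the label is unchanged (|y1 - y2| = 0) the weight is +1 and
    feature drift is penalised; where it flips (|y1 - y2| = 1) the weight
    is -1 and feature change is encouraged.
    """
    weight = 1.0 - 2.0 * torch.abs(y1 - y2)
    return ((h1 - h2) ** 2 * weight).mean()
```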
References:
[1] Gamba, Paolo, and Martin Herold, eds. Global mapping of human settlement: experiences, datasets, and prospects. CRC Press, 2009.
[2] The speed of urbanization around the world. https://www.un.org/development/desa/pd/sites/www.un.org.development.desa.pd/files/files/documents/2020/Jan/un_2018_factsheet1.pdf.
[3] Popkin, Gabriel. "Technology and satellite companies open up a world of data." Nature 557.7706 (2018): 745-748.
[4] Mills, S.: Civil registration and vital statistics: key to better data on maternal mortality. Nov 2015.
[5] Patel, Nirav N., et al. "Multitemporal settlement and population mapping from Landsat using Google Earth Engine." International Journal of Applied Earth Observation and Geoinformation 35 (2015): 199-208.
[6] Qiu, Chunping, et al. "A framework for large-scale mapping of human settlement extent from Sentinel-2 images via fully convolutional neural networks." ISPRS Journal of Photogrammetry and Remote Sensing 163 (2020): 152-170.
[7] Chen, Hao, Zipeng Qi, and Zhenwei Shi. "Efficient Transformer based Method for Remote Sensing Image Change Detection." arXiv preprint arXiv:2103.00208 (2021).
[8] Hou, Bin, et al. "From W-Net to CDGAN: Bitemporal change detection via deep learning techniques." IEEE Transactions on Geoscience and Remote Sensing 58.3 (2019): 1790-1802.
[9] Van Etten, Adam, et al. "The Multi-Temporal Urban Development SpaceNet Dataset." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[10] Van Etten, Adam, and Daniel Hogan. "The SpaceNet Multi-Temporal Urban Development Challenge." arXiv preprint arXiv:2102.11958 (2021).
[11] Laine, Samuli, and Timo Aila. "Temporal ensembling for semi-supervised learning." arXiv preprint arXiv:1610.02242 (2016).
[12] Mustafa, Aamir, and Rafał K. Mantiuk. "Transformation Consistency Regularization–A Semi-supervised Paradigm for Image-to-Image Translation." Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII 16. Springer International Publishing, 2020.
The Horizon 2020 European Union-funded SURPRISE project aims to demonstrate an instrument concept that uses a spatial light modulator to implement a super-resolved, compressive demonstrator of a single-pixel camera. The instrument targets Earth Observation (EO) in the visible and medium infrared spectral regions from a geostationary platform, and features enhanced performance in terms of at-ground spatial resolution, on-board data processing and encryption functionalities.
The at-ground spatial resolution of such instruments can in fact be enhanced, without increasing the number of detector pixels, by adopting a super-resolution approach. Increasing the number of detector pixels can be very expensive, especially in spectral ranges such as the medium infrared. A high pixel count also yields a more complex optical design: it requires either very large Focal Plane Arrays (FPAs), which are noisy and complicate the design of the optics, or pixels of smaller dimensions, which raise diffraction-related issues, particularly in the infrared. A consequence of these technical issues is that, at present, there is a gap in the production of high spatial resolution infrared data with frequent revisit time, for both scientific applications and operational services.
The SURPRISE concept benefits from the use of a spatial light modulator that acts as a coding mask to modulate the image produced by the collection optics. The transmitted signal is then focused by an optical condenser onto the detector. Image reconstruction is performed at the ground segment. Besides an increased number of pixels in the reconstructed image, compressive sensing enables fast on-board processing of the acquired data for information extraction, as well as native data encryption on top of native compression. The data link requirements of the payload should also benefit from this approach, as the data acquisition and compression steps are merged into a single measurement step. With minimal on-sensor processing, compressive sensing provides a natively compressed data stream without requiring an additional compression board, with the twofold advantage of reducing bandwidth requirements and simplifying the design of the on-board processing unit.
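To illustrate the measurement principle (our toy sketch, not SURPRISE flight software), the snippet below emulates the single-pixel compressive acquisition: the spatial light modulator applies a sequence of pseudo-random binary masks and the detector records a single value per mask:

```python
# Toy sketch of single-pixel compressive acquisition; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 64 * 64            # pixels in the scene to be reconstructed
m = n // 4             # number of masks: 4x native compression
scene = rng.random(n)  # stand-in for the image formed by the collection optics

masks = rng.integers(0, 2, size=(m, n))   # SLM coding patterns (0/1)
measurements = masks @ scene              # one detector reading per pattern
# Reconstruction happens at the ground segment, e.g. by sparsity-promoting
# optimisation: minimise ||Psi^T x||_1 subject to masks @ x = measurements,
# for a transform Psi in which the scene is sparse (wavelets, DCT, ...).
print(measurements.shape)  # (1024,) values downlinked instead of 4096 pixels
```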
Based on the instrument concept and its expected performance, this presentation describes the instrument's potential for EO applications.
Spaceborne scatterometers are microwave radars capable of determining the scattering properties of the Earth's surface by providing measurements of the normalized radar backscatter coefficient σ0 over a wide span of off-nadir look angles. These measurements are key in a variety of areas of geoscience, such as studies of climate, weather forecasting and storm monitoring through the indirect calculation of vector winds over the ocean, and studies of sea ice or soil moisture. Examples of spaceborne scatterometers include the European Advanced Scatterometer (ASCAT) sensor on board the Metop series of satellites, and SCA, which is planned as the enhanced version of ASCAT. Real data and simulations have demonstrated a number of performance advances of these instruments with respect to previous missions, such as increased near-global coverage achieved by the use of twin swaths. One of the main products provided by these scatterometers consists of two-dimensional images of σ0 gridded with regular pixel spacings of 25, 12.5 or 6.5 km, and spatial resolutions of the order of 15 to 50 km. A relatively simple re-sampling method based on spatial averaging is used to form these images, which reduces signal noise and computational load. However, two problems arise: on the one hand, the obtained resolutions may not be sufficient for certain geophysical applications; on the other hand, pulses from each antenna are transmitted to the ground at an along-track spacing smaller than the spatial resolution, which blurs point targets. In order to improve the quality of the images, a deconvolution problem may be formulated to estimate the true σ0 field from the noisy, irregularly sampled σ0 scatterometer measurements provided in the full-resolution product. This product contains measurements along every antenna beam, representing footprints determined by the spatial response functions. In this work, inversion methods for deconvolution are explored for the latest series of European scatterometers. The inverse problem is assumed to be linear and a noise model is included. A number of regularization strategies are analyzed, such as the built-in regularization of the scatterometer image restoration (SIR) method, or the use of priors to obtain maximum a posteriori estimates within a Bayesian framework. The results will provide recommendations on algorithms with potential for quality improvement of ASCAT and SCA operational data.
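As a minimal illustration of the kind of regularized inversion discussed (our sketch with toy dimensions, not the operational processing), consider a Tikhonov-regularized solution, which coincides with the maximum a posteriori estimate under Gaussian noise and a Gaussian smoothness prior:

```python
# Toy sketch of regularized deconvolution for y = A x + n, where A holds
# the (here randomly generated) spatial response functions sampling sigma0.
import numpy as np

def map_deconvolve(A, y, lam=1.0):
    """Return argmin ||A x - y||^2 + lam * ||D x||^2 with a first-difference
    smoothness prior: the MAP estimate under Gaussian noise and prior."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)            # first-difference operator
    lhs = A.T @ A + lam * (D.T @ D)
    return np.linalg.solve(lhs, A.T @ y)

rng = np.random.default_rng(0)
A = rng.random((80, 120))        # 80 overlapping footprints over 120 cells
x_true = np.sin(np.linspace(0, 6, 120))
y = A @ x_true + 0.05 * rng.standard_normal(80)
x_hat = map_deconvolve(A, y, lam=1.0)
```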
Supersent combines Earth Observation (EO) and Artificial Intelligence (AI) to increase the level of spatial detail of Sentinel-2 imagery to 2 m, improving mapping and monitoring capabilities. Supersent is an innovation project, co-funded by the German Federal Ministry of Economics and Technology, managed and performed by EOMAP.
We use a Generative Adversarial Network (GAN), a system of two competing neural networks: a Generator and a Discriminator. The generator network is provided with low-resolution images and tries to increase their spatial resolution. These super-resolved, spatially sharpened images are presented to the discriminator network together with truly very-high-resolution images. The discriminator distinguishes between these two categories of imagery and feeds the result back to the generator. The generator uses this feedback to further refine the GAN model and increase its capability to sharpen images. This feedback loop is repeated many times, until the discriminator can no longer distinguish the true very-high-resolution images from the super-resolved, sharpened images.
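Schematically, one adversarial training step of the kind described could be sketched as follows (our hedged illustration; `gen` and `disc` stand for the unspecified generator and discriminator networks):

```python
# Hedged sketch of one GAN super-resolution training step in PyTorch.
import torch
import torch.nn.functional as F

def gan_step(gen, disc, opt_g, opt_d, lr_batch, hr_batch):
    # --- discriminator: learn to separate true HR from generated SR ---
    sr = gen(lr_batch).detach()
    real_logits, fake_logits = disc(hr_batch), disc(sr)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits,
                                                 torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits,
                                                   torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # --- generator: use the discriminator's feedback to sharpen further ---
    fake_logits = disc(gen(lr_batch))
    g_loss = F.binary_cross_entropy_with_logits(fake_logits,
                                                torch.ones_like(fake_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```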
As training data for the AI model, we used digital aerial imagery at a native resolution of 40 cm. Such data are available in many countries and represent an ideal dataset for training and validation routines. The digital aerial images were resampled to 2 m and to 10 m (Sentinel-2) spatial resolution. The GAN derives the best model fit to predict the 2 m resolution images from the 10 m resolution version of the same scene. The trained model and workflow were then applied to Sentinel-2 imagery, resulting in super-resolved Sentinel-2 imagery at 2 m spatial resolution, a 25-fold increase in pixel count. We validated the outcome with manual checks against independent digital aerial images and by means of the Peak Signal-to-Noise Ratio (PSNR), which was about 18 per image. The results showed the high potential of the Supersent approach, and we were able to demonstrate its benefits for mapping and monitoring concepts such as the identification of shorelines and building footprints.
The described process is embedded in an automated workflow and is scalable.
The benefit of the Supersent workflow is that it enables access to very-high-resolution satellite imagery using the Sentinel-2 image archive. Since Sentinel-2 has a free and open data policy and a high recording frequency, it is an ideal source of information for manifold mapping and monitoring tasks. The super-resolution approach does not compete with commercial satellite imagery of native spatial resolution of 2 m or better; rather, it serves as a cost-effective image source for historic periods that were not recorded by those commercial sensors, or where budget constraints do not allow their use, as is the case in many monitoring concepts.
Enhancing the quality of aerial and satellite imagery is one of the most prominent and challenging problems in remote sensing. The availability of low-cost, “fair-resolution” imagery such as Sentinel-2, provided by international agencies such as ESA and, in the future, by commercial space companies, is exploding due to the deployment of new satellite constellations. Recent advances in deep-learning super-resolution methods in computer vision have provided a considerable number of new ideas that EO satellite imaging can leverage.
Based on our previous work on Super-Resolution (SR) methods, in which we implemented a single deep learning model and architecture (SRGAN) to enhance the resolution of the Sentinel-2 10 m bands by a factor of 4, the current methodology is based on the following achieved objectives:
• Overcoming the problem of generating the pairs of low-resolution/high-resolution images needed for training
• Producing a single-band SR image from a multiband input (Sentinel-2 10 m) with deep learning methods
• Extending the application of PANSHARPENING methods to SR Sentinel-2 images
• Implementing potential edge detection on PANCHROMATIC SR images and forest delineation on PANSHARPENED SR images for user needs
• Defining a scientific strategy to assess the validity, utility and reliability of the SR images generated by DL (Deep Learning) methods through XAI (Explainable AI) and EO image quality techniques (standard PSNR and SSIM metrics, supersite metrics, user utility)
• Targeting end-user EO monitoring applications in agriculture, urban areas and forestry
The scientific backbone of the SR DL algorithm performs three tasks: Fusion, Enhance and Transfer:
• Fusion: The pairs of images (LR, HR) used to train and test the SR model come from the same sensor at different resolutions. Therefore, the degradation of the HR images is provided directly by the sensor, without any mathematical hypothesis. WorldView-3 satellite images at 1.2 m resolution (resp. 0.3 m) are employed for the low- (resp. high-) resolution pairs.
• Enhance: The model learns simultaneously to fuse the 4 multispectral bands at 1.2 m low resolution (analogous to an RGBN image) into a single band and to increase the output resolution to 0.3 m (see the sketch after this list).
• Scale-Invariance Transfer: The trained model is used to enhance the resolution of the 4 multispectral bands of Sentinel-2 images at 10 m, yielding single-band SR images at 2.5 m resolution.
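A minimal stand-in for the fusion-plus-enhancement network (our sketch, not the project's SRGAN-based model) shows how the 4-band-to-1-band fusion and the x4 upsampling can be learned jointly; layer widths and kernel sizes are assumptions:

```python
# Hedged sketch: joint 4-band fusion and x4 upsampling via sub-pixel convolution.
import torch.nn as nn

class FuseEnhance(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, scale * scale, 3, padding=1),  # 1 band x (4x4) subpixels
            nn.PixelShuffle(scale),                      # -> 1 band at 4x resolution
        )

    def forward(self, x):  # x: B x 4 x H x W  ->  B x 1 x 4H x 4W
        return self.body(x)
```

Because the network only learns a x4 relative scale, the same weights can then be applied to Sentinel-2 10 m inputs to produce the 2.5 m single-band output described in the Transfer step.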
Preliminary results are shown in the images below.
Image set 1 shows results of the DL model as previously described, trained with pairs of (LR, HR) images from an internal Pléiades dataset. From left to right: LR 4 bands at 2 m; LR 1 band computed with standard computer vision models at 2 m; SR(x4) bicubic, 1 band at 50 cm; SR(x4) from the DL model, 1 band at 50 cm; HR target, 1 band at 50 cm. Standard PSNR and SSIM metrics are provided.
Image set 2 shows the scale-invariant capabilities of the DL model. Indeed, the model is able to generate an SR image at 2.5 m from an initial Sentinel-2 4-band image at 10 m. From left to right: LR 4 bands at 10 m; LR 1 band computed with standard computer vision fusion models at 10 m; SR(x4) bicubic interpolation, 1 band at 2.5 m; SR(x4) from the DL model, 1 band at 2.5 m.
In this set of images, the boundaries in the SR single-band image are well defined and sharp, and the image is less blurred than the bicubic interpolation result. The validity of these generated SR images is assessed through quantitative measures.
The image quality assessment is performed at two levels: a) an innovative reference-based image assessment through a simulation of products that controls the ground truth, in order to deliver metrics in terms of MTF and SNR at equivalent resolution; and b) non-reference-based image quality assessment metrics that can be included in the SR metadata catalogue. XAI (Explainable AI) is addressed by quantifying the uncertainty of each SR pixel for the interpretability of model results, using the PIUnet architecture developed by the Polytechnic University of Turin.
Super-resolution is a common term for a variety of techniques aimed at increasing the spatial resolution of input low-resolution data. The goal of such operations may be either to enhance the attractiveness of visual material or to reconstruct the underlying true high-resolution information. Naturally, the latter is most relevant to remote sensing and Earth observation applications, and it can be achieved by relying on information fusion performed from multiple observations of the same scene. In this talk, we will focus on new data fusion schemes implemented using deep networks for super-resolving Sentinel-2 multispectral images. The Sentinel-2 mission provides a valuable source of multispectral imagery that can be exploited in manifold Earth observation systems. However, the spatial resolution of Sentinel-2 imagery is insufficient for some applications; in such cases, employing super-resolution reconstruction as a preprocessing step may be pivotal in enabling the use of satellite images captured within that mission.
Recently, deep learning has been exploited for fusing multiple images presenting the same area of interest, resulting in several new deep architectures (including the DeepSUM, RAMS and HighRes-net networks) that emerged in the wake of the Proba-V Super-Resolution Challenge organised by the European Space Agency. In our earlier works, we demonstrated that these networks can be exploited in a band-wise manner to super-resolve Sentinel-2 images, trained either on the Proba-V data or on Sentinel-2 images with simulated low-resolution counterparts. On the other hand, attempts have also been reported, including the well-known DSen2 network, aimed at processing a single multispectral image to enhance the bands with ground sampling distances of 20 and 60 metres up to the resolution of the 10-metre bands.
In this talk, we will report our research on combining these two approaches towards data fusion performed in the temporal and spectral dimensions. The former consists of fusing multiple images showing the same area; with the latter, we fuse multiple bands of a single multispectral image. The outcome is evaluated quantitatively on simulated data, taking into account the band-wise reconstruction accuracy and the spectral characteristics of the reconstructed images, to verify their radiometric quality. The obtained results indicate that the proposed architecture outperforms both the multi-image networks (i.e., RAMS and HighRes-net) and the DSen2 network that realises the spectral fusion. In addition, we evaluate our method qualitatively on real data, feeding it with original Sentinel-2 images. Overall, the obtained results clearly indicate that the new approach is competitive and enhances the super-resolution quality for all of the Sentinel-2 spectral bands.
The world population currently exceeds 7 billion and is expected to rise to 11.2 billion by 2100. This exponential increase, together with past unsustainable agricultural practices, has generated significant challenges related to increased food demand, producing an urgent need to develop technological tools and methods aligned with the United Nations Sustainable Development Goals (UN-SDGs). Under the new Common Agricultural Policy (CAP) regulation scheme, the Copernicus Sentinel mission programme offers a tremendous amount of data from which domains such as agricultural monitoring and food security can undoubtedly benefit. As we witness this vast increase in the available Earth Observation (EO) data, the so-called “big EO data challenge”, cloud-based platforms have been proposed as the most prominent solution. However, despite all the abovementioned advantages, limitations such as the coarse spatial resolution of open-access data and dense cloud presence prevent researchers from making accurate estimations in regions dominated by small parcels (< 100 m2).
Thus, the construction of High-Resolution (HR) satellite data from Low-Resolution (LR) images appears to be an essential task that could lead to significant improvements in image quality and in the generated products. Super-Resolution restoration (SR/SRR) technology, enhanced by Deep Convolutional Neural Networks (CNNs), has recently gained much attention, as it has proved capable of generating reconstructed HR images of high quality, with advantages in terms of simplified application and robustness. Considering the aforementioned technological investments and needs, this study introduces a CNN super-resolution scheme that attempts to reconstruct spatially enhanced Sentinel-2 images and to contribute to an improved estimation of small agricultural objects and their properties.
In particular, an automated data harvesting system denoted the “Sentinel-2 downloading system” (S2-DwS) was constructed on top of two state-of-the-art EO cloud platforms: (a) the Software as a Service (SaaS) solution of Sinergise, called Sentinel Hub (SH), and (b) the encrypted and secured data storage of AWS (AWS Simple Storage Service, S3 bucket). With authenticated and authorised access to these environments, we collected atmospherically corrected Sentinel-2 spectral bands at 10 and 20 m in sub-tiled format, according to the defined regions of interest. Afterwards, we applied the appropriate image transformations and tile aggregations based on the common sensing date, generating RGB-stacked datasets (i.e. B4-B3-B2, B5-B6-B7, and B8a-B11-B12).
Subsequently, an effective CNN scheme was employed to produce super-resolved Sentinel-2 datasets from the initial 20 m and 10 m spectral bands. The image patches were divided into training, validation and testing datasets, with training occupying 80% of the total dataset and the remaining 20% split in half between validation and testing. Each of the three datasets contained HR image patches and the corresponding LR patches. The adopted network consisted of three convolutional layers, each followed by batch normalization, with the Rectified Linear Unit (ReLU) as the activation function. The Mean Square Error (MSE) loss function was employed, and the network was trained for 15 epochs with Adam as the optimiser. Regarding the training and validation performance of the CNN, overall accuracies of 87% and 90% were achieved, respectively. Additional evaluation metrics, namely the Root Mean Square Error (RMSE), the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), were also computed on the test set, revealing satisfactory results close to the acceptance range.
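The described network is compact enough to be sketched directly; the following is a hedged PyTorch rendition (kernel sizes and channel widths are our assumptions, since the abstract does not list them):

```python
# Sketch of the described scheme: three conv layers, each followed by batch
# normalisation and ReLU, trained with MSE and Adam for 15 epochs.
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.BatchNorm2d(64), nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.BatchNorm2d(32), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=5, padding=2), nn.BatchNorm2d(3), nn.ReLU(),
)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# for epoch in range(15):                  # 15 epochs, as in the abstract
#     for lr_patch, hr_patch in loader:    # RGB-stacked LR/HR patch pairs
#         loss = criterion(model(lr_patch), hr_patch)
#         optimizer.zero_grad(); loss.backward(); optimizer.step()
```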
Finally, through the third component, denoted the “Sentinel-2 Uploading system” (S2-UpS), an automated processing chain for data transformation and ingestion was formulated, following the requirements of the SH “Bring Your COG” Application Programming Interface (API) and the AWS S3 bucket. Specifically, the super-resolved RGB-stacked images were un-stacked into single-band GeoTIFFs, converted into Cloud Optimised GeoTIFFs (COGs), and eventually ingested into the aforementioned repositories.
Both the S2-DwS and S2-UpS processing schemes were developed to operate automatically via a time scheduler, enabling, under a generalized concept, the upload of any kind of EO data (e.g. tests were performed on drone data). The overall system was tested on agricultural areas of Cyprus and Lithuania. Cloud-free S2 images from the spring and summer months of 2020 were acquired for the training process, and S2 images with less than 10% cloud coverage from March to September 2021 were acquired for the inference process.
Europe’s efforts towards the implementation of the Paris Agreement on Climate Change (CC) are substantial. Earth observation data from the Sentinel programme provide the information required to support CC adaptation and mitigation policies. Despite the abundance of data in GEOSS and their undeniable value, their uptake in CC applications is often limited by inherent spatial resolution constraints. To address this challenge, the EIFFEL Horizon 2020 project will create tools to enhance the spatial resolution of earth observation data available in GEOSS, addressing the needs of five different CC adaptation and mitigation applications.
One of the satellite missions that can support the envisioned CC-related applications is Sentinel 2, which provides environmental monitoring of aspects such as land use change (Phiri et al., 2020) and detailed forest health monitoring (Gupta and Pandey, 2021). Forest fire risk can be estimated through the computation of various spectral indices from Sentinel 2 bands (Sánchez et al., 2018). Additionally, Sentinel 2 data are effective for mapping inland water bodies, which is crucial for the continuous monitoring of water availability and the prediction of floods or droughts (Bhangale et al., 2020). On the other hand, the Sentinel 2 sensor acquires four bands at 10 m resolution, six bands at 20 m resolution, and three bands at 60 m resolution. Still, it is often required to have all bands available at the highest spatial resolution, to support more detailed and accurate information extraction (Lanaras et al., 2018). Given the wide range of CC-related applications that Sentinel 2 data can support, it is essential to develop an efficient tool for upsampling the 20 m and 60 m Sentinel 2 bands to 10 m.
Sentinel 2 bands can be upsampled to 10 m resolution with simple interpolation techniques, such as bicubic and bilinear interpolation. However, these methods return blurry images with no additional high-resolution information. More sophisticated methods, such as deep-learning (DL) based super-resolution, have demonstrated better performance: they can transfer as much as possible of the spatial detail contained in the 10 m fine-resolution bands to the coarser bands. Most existing DL Sentinel 2 super-resolution methods are based on deep residual neural network architectures, which alleviate the vanishing gradient problem and achieve faster convergence during training (Lanaras et al., 2018; Palsson et al., 2018; Wu et al., 2020). These architectures adopt the global residual learning paradigm, predicting the difference between a bicubic-upsampled coarse-resolution image and an original high-resolution reference image instead of the actual pixel values. The lack of high-resolution reference images during training is addressed by reducing the resolution of the images before training by the resolution ratio between the coarse and fine bands. Therefore, the original coarse bands can be used as the reference and the downgraded ones as the input during training. This strategy is inspired by Wald’s protocol (Wald et al., 1997) and is based on the assumption that the relationship between the observed-resolution and reduced-resolution images also applies one resolution level up. Thus, an unlimited amount of training data can be created by downsampling the observed Sentinel 2 images.
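A minimal sketch of this training-pair construction (ours; an operational implementation would use the sensor's modelled point spread function rather than block averaging) is:

```python
# Wald's-protocol-style pair generation: downsample an observed band by the
# coarse/fine resolution ratio so the original can serve as the reference.
import numpy as np

def degrade(band: np.ndarray, ratio: int) -> np.ndarray:
    """Reduce resolution by `ratio` via block-mean averaging (a stand-in
    for a proper sensor point-spread-function model)."""
    h, w = band.shape
    h, w = h - h % ratio, w - w % ratio
    return band[:h, :w].reshape(h // ratio, ratio,
                                w // ratio, ratio).mean(axis=(1, 3))

b11_20m = np.random.rand(512, 512)     # an observed 20 m band (stand-in)
input_40m = degrade(b11_20m, ratio=2)  # network input during training
reference_20m = b11_20m                # network target during training
# The trained network is then assumed to apply one level up: 20 m -> 10 m.
```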
Given the ability to create training datasets without the need for high-resolution reference images, two different training approaches have been established for Sentinel-2 DL super-resolution models. The first is to train the neural network with an extensive training dataset with global coverage, such that it generalizes well across different climate zones and land-cover types and can super-resolve arbitrary Sentinel-2 images without retraining; DSen2 (Lanaras et al., 2018) followed this approach, using a large dataset of Sentinel 2 Level 1C data. The second approach, followed by the remaining models (Palsson et al., 2018; Wu et al., 2020), is to train and test on the same image; training is then quicker than in the first approach. This strategy is used when a single specific image needs to be super-resolved; however, the network must be retrained each time a different Sentinel 2 image is to be super-resolved. Considering that CC mitigation and adaptation applications will be applied to various areas with different climatic conditions and land cover types, the first training approach with the extensive training dataset was considered the more efficient, because the network is trained once and can infer on arbitrary Sentinel-2 images without further retraining.
The performance of DL super-resolution methods can be sensitive to important training hyperparameters. Palsson et al. (2018) trained a deep residual network with a single-image training dataset and found that the optimal number of training patches depends on the complexity of the single image used for training, while a large number of epochs does not improve performance because the network converges quickly. The network also appears to be insensitive to the number of residual blocks and the patch size. Unfortunately, a similar performance evaluation of super-resolution networks trained with large training datasets is missing from the literature. To fill this gap, several deep residual network architectures were trained with a single large training dataset, intended to be generic and able to super-resolve arbitrary Sentinel-2 images. For the training dataset, we collected 45 Sentinel 2 Level 2A images from around the globe, acquired between March and October 2021, taking special care to include heterogeneous areas. To achieve optimum performance, we also focused on the effects of various training hyperparameters (number and size of patches, number of training epochs, batch size and number of residual blocks) on the quantitative and qualitative performance of the super-resolution models. The super-resolved images inferred by the most efficient, fine-tuned architecture were used as input to a CC mitigation and adaptation application, to investigate the contribution of super-resolved images to the enhancement of CC-related applications.
Summarizing our contributions: we have developed a large training dataset with a global distribution of samples, used it to compare the performance of deep residual network architectures and their hyperparameters, and selected the most efficient one, which is globally applicable to Sentinel 2 Level 2A data without retraining. Additionally, the super-resolved images were used in a CC-related application to demonstrate their value in such applications. As future work, this tool will provide super-resolved Sentinel-2 data as input to five different CC-related applications, which will be tested on real-life scenarios. The actual enhancement of the application results will be the evidence of the value of Sentinel-2 DL super-resolution techniques in supporting CC mitigation and adaptation applications.
References
Bhangale, U., More, S., Shaikh, T., Patil, S., More, N., 2020. Analysis of surface water resources using Sentinel-2 imagery, Procedia Computer Science, 171, pp. 2645-2654.
Gupta, S.K., Pandey, A.C., 2021. Spectral aspects for monitoring forest health in extreme season using multispectral imagery, The Egyptian Journal of Remote Sensing and Space Science.
Lanaras, C., Bioucas-Dias, J.M., Galliani, S., Baltsavias, E., Schindler, K., 2018. Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network. ISPRS Journal of Photogrammetry and Remote Sensing, 146, 305-319.
Palsson, F., Sveinsson, J., Ulfarsson, M., 2018. Sentinel-2 image fusion using a deep residual network. Remote Sensing 10 (8).
Phiri, D., Simwanda, M., Salekin, S., Nyirenda, V.R., Murayama, Y., Ranagalage, M., 2020. Sentinel-2 data for land cover/use mapping: A review. Remote Sensing, 12, 2291
Sánchez Sánchez, Y., Martínez-Graña, A., Santos Francés, F., Mateos Picado, M., 2018. Mapping wildfire ignition probability using Sentinel 2 and LiDAR (Jerte Valley, Cáceres, Spain). Sensors 18(3):826.
Wald, L., Ranchin, T., Mangolini, M. 1997. Fusion of satellite images of different spatial resolutions: assessing the quality of resulting images. ASPRS Photogrammetric Engineering & Remote Sensing 63, 691-699.
Wu, J., He, Z., Hu, J., 2020. Sentinel-2 sharpening via parallel residual network. Remote Sensing 12, 279.
Within the Copernicus Marine Environment Monitoring Service (CMEMS), the Italian National Research Council (CNR) – Institute of Marine Sciences (ISMAR) provides operational near-real-time Sea Surface Temperature (SST) products over the Mediterranean Sea. These consist of daily (night-time) merged multi-sensor (L3S) and optimally interpolated (L4) foundation SST fields provided over 1/16° and 1/100° regular latitude-longitude grids (i.e. at nominal high, HR, and ultra-high spatial resolution, UHR), covering the period from 2008 to present. All these products are purely based on satellite observations, provided by a variety of infrared sensors that also include the new generation of satellite radiometers, such as the Sea and Land Surface Temperature Radiometer (SLSTR) onboard the Sentinel-3A and Sentinel-3B satellites.
The CNR-ISMAR SST processing chain includes several modules, from data extraction and preliminary quality control to cloudy-pixel removal and satellite image merging. A two-step algorithm finally interpolates the SST data at high and ultra-high spatial resolution. This two-step process is necessary because the UHR optimal interpolation scheme uses the HR L4 data (properly remapped onto a 1/100° regular grid) as its first guess.
CNR is presently working to improve the MED NRT SST products’ effective resolution and SST gradients’ accuracy. SST gradients are strongly connected with the ocean and the lower atmosphere dynamics. They are often associated with energetic motions at the mesoscale and submesoscale, and their connection with local changes in sea surface roughness, surface wind speed, up to the modulation of storm tracks has been extensively documented in the scientific literature. Intense SST gradient regions also result in enhanced primary production with corresponding higher stocks of phytoplankton. Moreover, the satellite-derived SST gradients turned out to be crucial for practical/operational applications like the improvement of the Altimeter-derived surface geostrophic currents distributed within CMEMS.
In the last decades, it has been widely shown that deep learning-based methods have the ability to obtain high quality results in the field of computer vision. Among the image processing techniques, the impressive performances obtained by using Convolutional Neural Networks (CNN) in the process of reconstructing high-resolution images from low-resolution ones, the so-called single image Super Resolution (SR) problem, have attracted much attention in a wide range of applications. In particular, in terms of processing accuracy, satellite-derived data for ocean remote sensing may significantly benefit from the potential of SR-CNN methods. Here we explore the achievements and the limitations in applying this specific class of artificial intelligence techniques in order to improve the effective resolution (especially the SST gradients) in the MED NRT L4 UHR product.
With the advancement of deep learning techniques and the improvement of computational methods, deep learning-based Super Resolution (SR) for satellite imagery has been actively explored, providing good solutions in computer vision for a wide range of applications.
GEOSAT owns GEOSAT-2, a good candidate for SR because of its large, world-wide archive. In a first approach, the SR calculation method was based on a Random Forest solution; after reviewing the problems inherent in reconstructing a high-resolution image from a single low-resolution image, however, we decided to focus on deep learning methods for SR, using high-resolution images as targets and low-resolution ones as inputs. GEOSAT-2 is a very-high-resolution (75 cm pan-sharpened) multispectral optical satellite based on an agile platform for fast and precise off-nadir imaging; it carries a push-broom very-high-resolution camera with 5 spectral channels (1 panchromatic, 4 multispectral). Our target is to bring the pan-sharpened images to 40 cm, and our methodology relies on a large archive of very-high-resolution imagery below 50 cm.
Based on the preliminary SR results derived from a Random Forest algorithm, and after a review of the state of the art in supervised image super-resolution, an exhaustive road map was designed to improve the initial calculation method. The road map focuses on several aspects. First, the model frameworks were reviewed: pre-upsampling SR, post-upsampling SR, progressive upsampling SR and iterative up-and-down sampling SR. Among the upsampling methods, two families were reviewed: interpolation-based and learning-based methods. A network design also had to be chosen; advanced convolution, wavelet transformation and region-recursive designs were selected because they align closely with the learning strategies we consider most suitable for this kind of solution. These two concepts are linked, and with every calculation the network keeps learning, so the system can be considered a living one.
SR techniques enhance satellite imagery beyond its native resolution. Our method can process GEOSAT-2 imagery to a pixel resolution of 40 cm. The model is still under development and can be improved further; future efforts focus on multi-task learning and network interpolation to achieve better results.
As ever larger amounts of remote sensing imagery become available, the Earth’s surface can be monitored at a large scale and with a high temporal resolution. In this context, an important task is the pixel-wise classification of land cover, i.e. the task of identifying the physical material of the earth’s surface for every pixel in an image. For this purpose, deep learning methods such as Fully Convolutional Neural Networks (FCNs) are successfully used. One challenge arises from the coarse resolution of the satellite data, which leads to difficulties in extracting fine structures such as roads and results in mixed pixels at object boundaries. These pixels naturally contain more than one class, making a correct class assignment impossible. In contrast to the images, the label data are often provided at a higher spatial resolution, which allows assigning one pixel in the input image to several classes in the output. The use of such label data has been shown to result in higher classification accuracies in several works, e.g. by Oehmcke et al. (2019), who use Sentinel-2 images with a ground sampling distance (GSD) of 10 m to predict different kinds of streets at a GSD of 5 m. By using a modified FCN with an additional upsampling layer to obtain the output resolution of 5 m, they significantly improve the classification results for all classes. Ayala et al. (2021) also classify roads at a GSD of 2.5 m despite using an input with a GSD of 10 m. In contrast to Oehmcke et al. (2019), they modify the input of the FCN with an upsampling layer before the first convolutional block. By doing so, they can use a standard FCN structure for training and achieve promising results.
In this work, we investigate the use of higher-resolution training data for pixel-wise land cover classification and the way it is integrated into the training procedure of an FCN. As the F1-score achieved for most land cover classes in previous experiments is already larger than 90%, the scope of this work is the accuracy improvement of fine structures such as roads and of object boundaries. For this purpose, Sentinel-2 images with a resolution of 10 m and training labels with a resolution of 5 m are used. We compare two network architectures based on the approaches of Oehmcke et al. (2019) and Ayala et al. (2021). As a baseline, we use a standard U-Net structure with an input size of 128 x 128 pixels and 10 spectral bands from Sentinel-2. The first architecture has an additional upsampling and convolutional layer at the end of the decoder, which doubles the output resolution compared to the input resolution. In the second approach, the input image is upsampled to 5 m GSD before training. This allows the use of a U-Net architecture with skip connections also at the layer of the output resolution, which is not the case for the first architecture.
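A sketch of the second variant (our illustration following Ayala et al., 2021; `unet` is a placeholder for any standard U-Net) shows how the input is brought to the 5 m label grid before the encoder:

```python
# Hedged sketch: upsample the 10 m input to the 5 m label grid, then apply
# an unmodified U-Net so skip connections exist at the output resolution.
import torch
import torch.nn.functional as F

def forward_upsample_first(unet: torch.nn.Module,
                           s2: torch.Tensor) -> torch.Tensor:
    """
    s2: B x 10 x 128 x 128 tensor (10 Sentinel-2 bands at 10 m GSD).
    Returns class scores at B x 6 x 256 x 256, matching the 5 m labels.
    """
    x = F.interpolate(s2, scale_factor=2, mode="bilinear",
                      align_corners=False)
    return unet(x)
```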
The dataset used in the experiments consists of all Sentinel-2 images from recent years with less than 5% cloud coverage for the whole area of the German federal state of Lower Saxony. We use four spectral bands with a resolution of 10 m and six additional bands with a resolution of 20 m. The class labels are obtained from the official German landscape model ATKIS (AdV, 2008), which contains land use information in vector format. We merge these classes into the following six land cover classes: Settlement, Sealed area, Agriculture, Greenland, Water, and Forest, and rasterize the vector data to generate reliable labels at a GSD of 5 m.
To compare the classification results, we train a baseline model using the standard U-Net architecture and training data with 10 m GSD. This model achieves an Overall Accuracy (OA) of about 86% and a mean F1-score (mF1) of 75% on the test data. With our first approach, i.e. the one with the additional upsampling layer and 5 m GSD training data, we obtain results with similar accuracies and almost no change in the F1-scores of the individual classes. With the second approach, the one with the upsampling operation before the encoder, the results improve by approx. 1% in OA and 2% in mF1. In particular, the class Sealed area improved by 8% in F1-score, and for most of the other classes there is a slight improvement of 1-3% in F1-score. A visual inspection indicated that this was mainly due to better classification results for small objects.
As the first results are promising, we plan to conduct more experiments regarding the network architecture and also to investigate the usage of even finer training data, e.g. with 2 m GSD.
References
OEHMCKE, S.; THRYSOE, C.; BORGSTAD, A.; VAZ SALLES, M.; BRANDT, M.; GIESEKE, F., 2019: Detecting Hardly Visible Roads in Low-Resolution Satellite Time Series Data. In: 2019 IEEE International Conference on Big Data, pp. 2403-2312, DOI: 10.1109/BigData47090.2019.9006251
AYALA, C.; ARANDA, C.; GALAR, M., 2021: Towards Fine-Grained Road Maps Extraction using Sentinel-2 Imagery. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-3-2021 XXIV ISPRS Congress, pp. 9-14, DOI: 10.5194/isprs-annals-V-3-2021-9-2021
ARBEITSGEMEINSCHAFT DER VERMESSUNGSVERWALTUNGEN DER LÄNDER DER BUNDESREPUBLIK DEUTSCHLAND (ADV), 2008: ATKIS – Objektartenkatalog für das Digitale Basis-Landschaftsmodell 6.0. Available online (accesses 10 December 2021): https://www.adv-online.de/GeoInfoDok/GeoInfoDok-6.0/Dokumente/.
The application of satellite data for nature-related purposes began with the launch of the first LANDSAT satellite on July 23, 1972. Since then, many imaging systems have been created, but multispectral data covering the entire planet with high temporal resolution remain the domain of the government programs implemented by NASA (LANDSAT) and ESA (SENTINEL). The great advantage of the data provided by both systems is their universality and lack of cost, while the disadvantage is their spatial resolution.
The cloud-based Sentinel-2: Resolution Enhancer (s2enh) processor is a solution based on machine learning and artificial intelligence methods that improves the spatial resolution of Sentinel-2 Level 2A imagery up to 2.5 m, with full spectral compliance for most of the available spectral channels and simultaneous preservation of radiometric quality. The hybrid method, based on an RDNN and classic photogrammetric solutions, makes it possible to increase the usefulness of the data for continuous nature analyses. The result of the processing is a multilayer GeoTIFF file prepared on the basis of the Sentinel-2 L2A product. The solution gives access to current and historical data acquired by the Sentinel-2 satellite at the hitherto unavailable resolution of 2.5 m GSD.
The s2enh processor is available on the CREODIAS cloud platform and is accessed via the Finder tool. Users can easily and quickly order enhanced Sentinel-2 L2A imagery by indicating the area of interest and the time period of image acquisition. The Sentinel-2: Resolution Enhancer (s2enh) processor is based on serverless processing, which simplifies the form of the service.
The enhanced resolution brings many benefits for further image processing and analysis. For example, the solution allows for the automated identification of objects such as buildings, narrow agricultural plots or roads, as it is much easier to distinguish boundaries between objects in higher-resolution imagery.
4 million USD: the price of a single high-resolution snapshot of Darfur. The price that prevented Amnesty International from monitoring at scale the civilian destruction of the 21st century's first genocide. The price that reserves high-resolution EO applications for high-net-worth customers, out of reach of the organisations fighting our biggest problems. Do we want humanity's orbiting ingenuity to mostly monitor oil companies' assets and drive hedge funds' financial predictions?
Meanwhile, ESA provides Sentinel 2 imagery, globe-wide, every few days, entirely free. But at 10x lower resolution.
We present here our open-source package and models for Sentinel 2 multi-frame super-resolution, funded by the ESA QueryPlanet project. We include our trained neural networks, with a focus on lightweight inference. By merging multiple revisits, we can trade temporal resolution for spatial resolution, enabling many use cases that require higher detail of static structures, at zero imagery cost, for everyone.
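The principle of trading revisits for resolution can be illustrated with a deliberately naive shift-and-add baseline (our toy sketch; the released models replace this with learned fusion networks):

```python
# Toy multi-frame principle: co-registered revisits with sub-pixel offsets
# are placed on a finer grid and averaged.
import numpy as np

def naive_shift_and_add(frames, shifts, scale=2):
    """frames: list of H x W revisits; shifts: per-frame (dy, dx) sub-pixel
    offsets in LR pixels, assumed in [0, 1); returns a merged
    (scale*H) x (scale*W) estimate."""
    H, W = frames[0].shape
    acc = np.zeros((scale * H, scale * W))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        ys, xs = int(round(dy * scale)), int(round(dx * scale))
        acc[ys::scale, xs::scale][:H, :W] += f
        cnt[ys::scale, xs::scale][:H, :W] += 1
    return acc / np.maximum(cnt, 1)

revisits = [np.random.rand(100, 100) for _ in range(4)]
offsets = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
hr_estimate = naive_shift_and_add(revisits, offsets, scale=2)
```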
Super-resolution is a method for artificially increasing an imaging system's resolution by post-processing, without having to collect new datasets. It was mostly developed and used for image and video enhancement by the computer science community, owing to its capacity to add spatial variation to the data and to perform better than conventional interpolation methods, such as the bicubic interpolation used extensively in the geoscience community. Following the advancement of deep learning-based super-resolution methods in the 2010s, it has shown great potential for data-scarce regions where high-resolution geoscientific data are not available and the collection of such data is not possible for financial and technical reasons. Even though super-resolution is applied to geospatial data, the loss functions used to optimise these models are mostly taken as-is from computer vision, and they do not account for the spatial relationship between neighbouring pixels. For example, in the case of elevation data, the relative elevation between two pixels (slope) and its direction (aspect) are more important than the absolute elevation for geoscientific modelling. However, these relations, which must hold in the ground observation, are not well considered in the super-resolution of geospatial data.
Our research aims to develop a loss function that respects the spatial relationship between a pixel and its neighbours, representing the ground reality. In contrast to existing methods, these new loss functions are better at generating geospatial datasets that can be used in geoscientific analysis and modelling rather than mere visualization. The developed loss function is tested with multiple super-resolution models, both generative adversarial networks and end-to-end models trained with geoscientific data. Our research shows that a super-resolution model with a spatial-relation-aware loss function can better reconstruct the ground reality, even though training it is more complex than using a simple mean squared error loss. The use of such a loss function also generates better terrain in the case of Digital Elevation Models, which can be observed in the slope and aspect of the resulting datasets.
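A hedged sketch of such a spatial-relation-aware loss (our illustration of the general idea; the exact formulation used in the research is not given in the abstract) combines an elevation term with a penalty on local gradients, which determine slope and aspect:

```python
# Illustrative gradient-aware loss for DEM super-resolution; kernels and
# the weighting factor alpha are our assumptions.
import torch
import torch.nn.functional as F

def gradient_aware_loss(pred, target, alpha=1.0):
    """pred, target: B x 1 x H x W DEM tensors."""
    kx = torch.tensor([[[[-1., 0., 1.]]]])   # horizontal difference kernel
    ky = kx.transpose(2, 3)                   # vertical difference kernel

    def grads(z):
        return (F.conv2d(z, kx, padding=(0, 1)),
                F.conv2d(z, ky, padding=(1, 0)))

    gx_p, gy_p = grads(pred)
    gx_t, gy_t = grads(target)
    mse = F.mse_loss(pred, target)                            # elevation term
    grad = F.mse_loss(gx_p, gx_t) + F.mse_loss(gy_p, gy_t)    # slope/aspect term
    return mse + alpha * grad
```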
Soil moisture content plays an important social, environmental and economic role. As the population grows and climate change accelerates, planning against water limitation will become an ever-increasing imperative. High-quality spatiotemporal information on soil moisture is therefore required to confront a variety of challenges, including climate change, raising agricultural productivity, improving weather forecasting, drought and flood prediction, and water resources management.
Microwave remote sensing is the most accurate technique for retrieving soil moisture at regional and global scales, owing to the direct relationship between water content and the dielectric properties of the soil, and to its all-weather observation capability. Current satellite missions such as SMOS (Soil Moisture Ocean Salinity) from ESA (European Space Agency) and SMAP (Soil Moisture Active Passive) from NASA (National Aeronautics and Space Administration) use the L-band frequency for soil moisture mapping. However, the passive microwave measurements that underpin these satellites are restricted to the moisture content of the top 5 cm, with a resolution of around 40 km. Accordingly, the SMAP mission proposed using L-band radar data to enhance the spatial resolution to better than 10 km.
Research has recently been underway to demonstrate the enhanced capability of P-band measurements for retrieving soil moisture from a deeper layer, and to show that they are less affected by surface roughness and the overlying vegetation layer. Moreover, research has shown that it is possible to get more accurate soil moisture information over the root zone when using a combination of L- and P-band data than from either band on its own. Accordingly, this paper proposes an approach to obtain accurate, high-resolution root-zone soil moisture maps by combining L- and P-band radar and radiometer data. Three alternative approaches are being explored:
1. Passive-Passive: The P-band radiometer data are downscaled to the same resolution as the L-band radiometer data and are then combined to generate an accurate root zone soil moisture retrieval. This output is then further downscaled with the P- and L-band radar data to increase the spatial resolution.
2. Active-Passive: The radar data are used to downscale the radiometer brightness temperature data at P- and L-band, respectively. The downscaled radiometer brightness temperatures are then combined to generate higher resolution soil moisture maps for the root zone.
3. Active-Active: The L- and P-band radar data are used to estimate high resolution soil moisture for the root zone, which is then constrained by the radiometer data for further accuracy enhancement.
A series of airborne field campaigns are currently underway in Yanco NSW, Australia, simulating this satellite mission concept using the Polarimetric P- and L-band Multibeam Radiometers, and Polarimetric P-and L-band Imaging Synthetic Aperture Radars (SARs). Accordingly, a satellite-radiometer sized footprint is being flown every few days over three- to four-week long periods coincident with intensive ground sampling, including soil moisture monitoring networks, gravimetric soil samples, destructive vegetation sampling, and surface roughness measurement. Preliminary results from this new mission concept will be presented and discussed.
End-to-end mission performance simulators (E2Es) are software tools developed to support satellite mission preparatory activities. For passive optical remote sensing missions, E2Es generate synthetic and realistic scenes simulating the interaction of solar radiation with the atmosphere and the surface, thereby allowing the estimation of mission performance before launch. One future mission in the ecosystem of ESA's Sentinel program is the Copernicus Hyperspectral Imaging Mission, CHIME. The CHIME E2E simulator has been used in the A/B1 phase of the mission to assess the scientific and technical feasibility of the project, consolidate the data processing algorithms, and evaluate the requirements of the actual mission.
One of the key pieces of CHIME's E2E processing chain is the Scene Generation Module (SGM), which provides the ground-truth synthetic images as they would be observed by the satellite instruments. The CHIME SGM is very high throughput software with state-of-the-art parallelization technology: it generates 100 x 200 km hyperspectral scenes in less than an hour, including topographic effects, shadow-projecting clouds, a detailed surface definition and a realistic atmosphere. The CHIME SGM can simulate the largest orbit path with full swath, complying with the future satellite's specifications, in a matter of hours. This high performance is due to its producer-consumer design and a clever use of parallel computing.
The enhanced throughput of the CHIME SGM paves the way for time-series studies with many revisits ahead of the actual mission using simulated data, and also allows whole orbits to be explored, which is computationally very demanding in terms of simulating the natural variability of scenes: surface properties (topography, soil spectra, vegetation types), atmospheric interactions (absorption/scattering by gases and aerosols, clouds and atmospheric properties), and illumination and observation angles.
These time series enable detailed studies, such as varying the observation and illumination angles over a snow scene with realistic topography in order to simulate the behaviour of the sensor under saturation and noise conditions. Since snow is not a Lambertian surface, specular reflections or hot spots could saturate the instrument in some areas; such simulations are therefore important for the ongoing effort to achieve a robust instrument design.
In this paper we present the CHIME SGM as a tool to rapidly generate highly detailed, full-swath, complete synthetic time series. The generated time series consists of one year of daily TOA scenes over a high-altitude site with realistic, snow-covered topography, chosen to observe the variation of the illumination and observation angles. The scene is in the Alps, with a 100 km x 200 km area corresponding to the CHIME full swath. The natural variability of the surface is generated using snow spectra from spectral libraries and available snow BRDF models, the topography is generated using the ASTER digital elevation model for the scene coordinates, the atmospheric functions are generated by MODTRAN, and the atmospheric parameters are extracted from ECMWF global meteorological maps for each date and location. Each scene takes an hour and a half to generate and occupies 146 GB of disk space. The whole-year daily time series is dense (365 scenes) compared with the proposed revisit time of 10-12 days for the CHIME mission (30-36 scenes per year).
With this time series, we can deliver full-swath realistic snow scenes to assess the performance of the sensor under extreme conditions, where the reflectance of the snow combined with the illumination angles could cause TOA radiances larger than the maximum of the instrument's design. In this way, instrument saturation under maximum radiance levels can be properly investigated under realistic conditions, allowing a quantitative characterization of the mission requirements and a proper evaluation of the performance.
SATURN (Synthetic AperTure radar cUbesat foRmation flyiNg) is a fully Italian technology demonstration mission consisting of a train of three 16U CubeSats equipped with miniaturized SAR instruments.
The main objective of the SATURN mission is to demonstrate the key technology “Cooperative Multiple-Input-Multiple-Output (MIMO) Swarms of SAR CubeSats” for innovative, low-cost and versatile Earth Observation capabilities. As a “first step”, SATURN would enable the demonstration of the key technology with 3 satellites, to be then easily scaled up to full performance capability. The demonstrative swarm of 3 CubeSats will achieve an imaging resolution of 5 x 5 m with a swath of 30 km.
SATURN aims at becoming the first ever spaceborne SAR MIMO mission, introducing a new paradigm in SAR Earth Observation. Leveraging the MIMO concept, SATURN allows: 1) low-cost, scalable SAR missions offering a quick approach to space for private and public entities; 2) distribution of the key resources, normally concentrated in a single (large and complex) satellite, among small and simpler systems, thanks to the proper combination of the signals from each node of the swarm, which is the basic element of a distributed and reconfigurable SAR antenna; 3) elimination of the single point of failure represented by one single large satellite.
The proposed technology innovation will provide the following new observation capabilities in the full-sized version:
• Single Pass Interferometric Observation: 3D imaging with near real-time provision, no impact of scene de-correlation.
• Modular Performance: the achieved image quality depends on the number of satellites; the imaging performance is improved by increasing the swarm size rather than increasing the single node complexity.
• A constellation of swarms deployed on different orbital planes allows short revisit times.
The SATURN industrial organization is the following:
• OHB ITALIA: prime contractor, mission prime, provider of the CubeSat 3M platform and ground segment, responsible for the development of the SAR payload antenna deployment mechanism
• ARESYS: subcontractor of OHB ITALIA and responsible for the SAR payload, MIMO SAR data processing and the payload Ground Segment
• Politecnico di Milano: subcontractor of OHB ITALIA and responsible for the mission requirements and the scientific exploitation of the data
• AIRBUS ITALIA: subcontractor of OHB ITALIA and responsible for the payload antenna design and manufacturing.
The following outcomes are expected from the implementation of the SATURN mission:
• Science: enabling in-flight experiments and applications that would be hardly obtainable with today’s missions; referring to the Earth Explorer concepts, the SATURN mission would be a “super Earth Explorer”, as it would enable several experimental campaigns thanks to its great versatility and reconfigurability. For example, several swarm architectures (along-track train, cloud, across-track stack, etc.) may be defined in different mission phases to privilege a certain application (e.g., high resolution, single-pass interferometry, multi-static SAR observation).
• Technology: demonstration of the MIMO paradigm, advances in miniaturization and development of novel SAR instruments.
• Applications: enhanced real-time 3D imaging and shorter revisit times with respect to “big satellites” (e.g., Sentinel-1), making it particularly suitable for all-weather search-and-rescue services.
Coastal and inland aquatic ecosystems are of fundamental interest to society and the economy, given their tight link to urbanization and economic value creation. They play a significant role in the carbon cycle, and they comprise critical habitats for biodiversity. Aquatic ecosystems are continuously impacted by natural processes and human activities. Many of these impacts are becoming more frequent and severe, particularly with increasing population and climate change. Hence, there is a need (i) to generate reliable, robust and timely evidence of how these environments are changing, (ii) to understand the processes causing these changes and their societal, health, and economic consequences, and (iii) to identify steps towards conservation, restoration and sustainable use of water and dependent ecosystems and resources.
Systematic, high-quality and global observations, such as those provided by satellite remote sensing techniques, are key to understand complex aquatic systems. While multitudes of remote sensing missions have been specifically designed for studying ocean biology and biogeochemistry as well as for evaluating terrestrial environments, remote sensing missions dedicated to studying critical coastal and inland aquatic ecosystems at global scale are non-existent. Thus, these ecosystems remain among the most understudied habitats on the Earth’s surface. Specific reasons for such an observational gap lie in the dynamic and optical complexity of water ecosystems, in combination with technological challenges to optimize the relevant spatial, spectral, radiometric, and temporal characteristics. Current and forthcoming missions are either not suited to provide a global coverage (e.g., PRISMA, EnMAP) or to obtain reliable data over dark waters (e.g., carbon-rich lakes) due to inadequate radiometric sensitivity (e.g., Sentinel-2/MSI). They also fall short of requirements for characterizing biodiversity variables such as benthic habitat structure and phytoplankton assemblages due to their inadequate spatial and spectral resolution, respectively (e.g., Sentinel-2/MSI, Sentinel-3/OLCI). Similar limitations exist for wetland ecosystems, which compromises their management and protection.
A future satellite mission, the so-called Global Assessment of Limnological, Estuarine and Neritic Ecosystems (GALENE), was proposed in response to ESA’s Earth Explorer 11 call to address current and future challenges linked to coastal and inland ecosystems. GALENE will provide optimized measurements of these aquatic ecosystems and enable adaptive sampling of dynamic properties and processes in water columns, benthic habitats and associated wetlands. GALENE will thus fill a major gap by comprehensively quantifying the state of Earth’s water bodies and aquatic ecosystems. It will substantially contribute to solving global water challenges, including combating water pollution, ensuring a clean drinking water supply for all, and protecting coastal environments and populations. The GALENE mission concept consists of a synergy of three innovative instruments, namely a hyperspectral sensor, a panchromatic camera and a polarimeter. The GALENE science objectives and main technological features will be presented.
EarthScanner (Jilin-1KF01B), the optical sensor with the world's largest swath at sub-meter resolution, was successfully launched on July 3rd, 2021, replacing its namesake as the world's largest-swath sub-meter optical remote sensing satellite. It inherits the mature technology of the Jilin-1KF01 satellite, featuring high resolution, a large swath, high-speed storage, high-speed data transmission and other capabilities. Its image acquisition width (swath) is greater than 150 km, and it can acquire more than 2 million km² of high-definition imagery every day. The EarthScanner (JL1-KF01 & Jilin-1KF01B) satellite constellation is uniquely equipped with highly capable sensors designed to capture 0.5 m panchromatic and 2 m multispectral imagery with an extra-wide swath of 150 km (Jilin-1KF01B) and 136 km (Jilin-1KF01), compared to traditional Earth Observation (EO) sensors with a swath of 12 to 20 km.
The continuous imaging capacity of the sensor is remarkable, reaching 4,200 km in a single strip. Traditional satellites generally produce images of hundreds of millions of pixels, while a single EarthScanner image reaches 18.2 billion pixels.
It is well known that it is difficult to expand the swath width without degrading the resolution; in other words, when the field of view increases, the detail-viewing capability decreases. Therefore, the camera of these wide-swath satellites uses a completely different optical system, the "Three-Mirror Anastigmat", also known as the off-axis three-mirror system. The focal length of the optical system reaches 4.85 meters, the field of view reaches 16.1°, and the imaging quality reaches the diffraction limit across the full field of view.
The development of such an optical system is probably one of the most challenging undertakings in every element of space optical system development. The technical team broke through four key technologies:
- Large, high-stability and ultra-light camera structure
- High-precision support of special-shaped aspheric mirror
- Ultra-large-scale high signal-to-noise ratio photoelectric signal processing
- Large-scale off-axis three-mirror system testing and alignment
These high-performance sensors each have a distinct imaging capacity of 1.3 million km² per day, nearly double the capacity of traditional 50 cm resolution sensors such as Pléiades (50 cm resolution) with 700,000 km² and KOMPSAT-3A (55 cm resolution) with 277,992 km² per day.
These sensors are capable of fulfilling emergency tasking to capture very large areas within a few minutes, providing significantly reduced acquisition times thanks to the wide swath and large imaging capacity. Some countries, such as Qatar and Lebanon, can be fully captured in a single strip.
The Daedalus mission has been proposed to the European Space Agency in response to the call for ideas for the Earth Observation programme’s 10th Earth Explorer (EE10). It has completed a Phase-0 Science and Requirements Consolidation study. The overarching goal of Daedalus is to fundamentally advance our understanding of the energetics, dynamics, and chemistry of the atmosphere-space transition region, and of the neutral-plasma interactions that shape it. The Daedalus mission design targets in-situ measurements of plasma density and temperature, ion drift, neutral density and wind, ion and neutral composition, electric and magnetic fields, and energetic particles in the Lower Thermosphere-Ionosphere (LTI) region, targeting in particular the 100-200 km altitude range. In this presentation the science case for Daedalus is laid out and the science and mission objectives are outlined, summarising key findings from the Phase-0 study.
The Monitoring Nitrous Oxide Sources (MIN2OS) satellite project aims at monitoring global-scale nitrous oxide (N2O) sources by retrieving N2O surface fluxes from the inversion of space-borne N2O measurements that are sensitive to the lowermost atmospheric layers under favorable conditions. MIN2OS will provide emission estimates of N2O at a horizontal resolution of 1° x 1° on the global scale and 10 x 10 km2 on the regional scale, on a weekly to monthly basis depending on the application (e.g., agriculture, national inventories, policy, scientific research). Our novel approach is based on the development of: 1) a space-borne instrument operating in the Thermal InfraRed (TIR) domain providing, in clear-sky conditions, the N2O mixing ratio in the lowermost atmosphere (900 hPa) under favorable conditions (summer daytime) over land and under both favorable and unfavorable (winter nighttime) conditions over the ocean, and 2) an atmospheric inversion framework to estimate N2O surface fluxes from the atmospheric satellite observations. After studying three N2O spectral bands (B1 at 1240-1350 cm-1, B2 at 2150-2260 cm-1 and B3 at 2400-2600 cm-1), a new TIR instrument will be developed, centered at 1250-1330 cm-1, with a resolution of 0.125 cm-1, a Full Width at Half Maximum of 0.25 cm-1 and a swath of 300 km. To optimally constrain the retrieval of N2O vertical profiles, the instrument will be on board a platform at ~830 km altitude in a sun-synchronous orbit crossing the Equator in descending node at 09:30 local time, in synergy with two other platforms (Metop-SG and Sentinel-2 NG) expected to fly in 2031-32, aimed at detecting surface properties, agricultural information on the field scale, and vertical profiles of atmospheric constituents and temperature. The lifetime of the MIN2OS project would be 4-5 years, to study the interannual variability of N2O surface fluxes. The spectral noise can be decreased by at least a factor of 5 compared to the lowest noise accessible to date with the Infrared Atmospheric Sounding Interferometer-New Generation (IASI-NG) mission. The N2O total error is expected to be less than ~1% (~3 ppbv) along the vertical. The preliminary design of the MIN2OS project results in a small instrument (payload of 90 kg, volume of 1200 x 600 x 300 mm3) with, in addition to the spectrometer, a wide-field, 1-km resolution imager for cloud detection. The instruments could be hosted on a small platform, the whole satellite being largely compatible with a dual launch on VEGA-C. The MIN2OS project has been submitted in response to the European Space Agency's Earth Explorer 11 call for mission ideas.
Synthetic Aperture Radar Tomography (TomoSAR) provides an unprecedented opportunity to characterize volumetric environments such as forested areas using 3-D electromagnetic reflectivity maps. Classical 2-D SAR imaging capabilities can be extended to 3-D using acquisitions performed from slightly shifted trajectories and a coherent synthesis along an additional aperture in elevation. As shown by experiments based on the use of airborne SAR sensors, TomoSAR and its multi-polarization version, PolTomoSAR, are able to characterize various kinds of forests (tropical, temperate, boreal) and may be used to estimate forest height, above-ground biomass, underlying ground topography, canopy structure, etc.
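For readers unfamiliar with tomographic focusing, the following single-pixel sketch illustrates the coherent synthesis along the elevation aperture; all parameters are assumed values chosen for the example (using the repeat-pass monostatic phase convention; a factor of two applies to bistatic acquisitions), not the configuration of any campaign cited here.

```python
import numpy as np

wavelength = 0.23                         # L-band wavelength [m] (assumed)
r, theta = 5000.0, np.deg2rad(35.0)       # slant range [m], incidence angle (assumed)
baselines = np.linspace(-60.0, 60.0, 17)  # perpendicular baselines [m] (assumed)
kz = 4.0 * np.pi * baselines / (wavelength * r * np.sin(theta))  # elevation wavenumbers

# Simulated pixel: ground response at 0 m plus a weaker canopy layer at 18 m
z_true, amps = np.array([0.0, 18.0]), np.array([1.0, 0.7])
y = (amps * np.exp(1j * np.outer(kz, z_true))).sum(axis=1)

# Coherent synthesis along elevation (matched-filter beamforming)
z = np.linspace(-20.0, 40.0, 601)
steering = np.exp(1j * np.outer(kz, z))             # n_baselines x n_heights
profile = np.abs(steering.conj().T @ y) ** 2 / len(kz) ** 2

print("strongest response at z =", z[np.argmax(profile)], "m")  # ground peak near 0 m
```

The vertical resolution and the unambiguous height span follow from the extent and sampling of the baseline distribution, which is why irregular or incomplete baseline sets are an operational challenge.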
However, the application of TomoSAR from spaceborne platforms is hindered by the time lag separating successive SAR acquisitions, whose value, on the order of a few days, depends on orbital considerations and the laws of physics. For radars operating at higher carrier frequencies, i.e. at L, S, C, X, Ku bands and above, the correlation time over vegetated environments rarely exceeds minutes to hours, limiting 3-D analysis through repeat-pass TomoSAR to temporally stable targets, such as those encountered in urban scenarios. A possible solution to this limitation consists in using single-pass interferometers, i.e. two or more SAR sensors measuring the observed scene at the same time from different positions, in a bistatic configuration. Simultaneous SAR acquisitions make it possible to overcome the highly limiting problem of temporal decorrelation, whereas slight modifications of the relative trajectory between the sensors allow an aperture in elevation to be described and SAR tomographic focusing to be applied successfully. Another advantage of this operating mode is that the acquisition of a tomographic stack may be spread over a long period of time, provided that the structure of the observed medium does not change drastically, i.e. generally months.
This paper illustrates the principles of incoherent bistatic tomography and shows the different processing steps of this technique, which differ significantly from those employed in repeat-pass TomoSAR. Relevant solutions to operational challenges, linked to imperfect knowledge of the scene geometry, irregular baseline sampling, and even missing data, are presented and validated on airborne data sets using state-of-the-art geophysical parameter estimation procedures.
Theoretical aspects will be complemented by an analysis of real data from the ESA TomoSense campaign, in which bistatic data were collected at L- and C-band over the forest site of the Eifel Park, North-West Germany, by flying two airplanes in close formation. Preliminary analyses at L-band have already been carried out and gave promising results. A new model-based approach for 3-D reconstruction was developed and implemented; the outcome was then compared to the tomographic profiles produced by standard repeat-pass tomography. The structure of the forest was properly recovered in both cases. Quantitative analyses of the visibility of the ground and of the forest height were also carried out; the discrepancy in forest height with respect to the LiDAR map was about 2.3 m (1 sigma).
This paper presents the analyses performed for a novel formation flying study in Low Earth Orbit (LEO), in the direction of future Earth Observation (EO) missions. The Formation Flying L-band Aperture Synthesis (FFLAS) mission concept is proposed by the European Space Agency; the system design is carried out by Airbus Defence and Space and the orbit design and formation flying aspects by Politecnico di Milano. The scientific motivation of the FFLAS mission concept comes from the outcomes of the Soil Moisture and Ocean Salinity (SMOS) satellite mission and the current capabilities of distributed space systems. The study addresses the future needs of a range of applications over land and oceans, proposing potential ways to significantly improve the spatial resolution of L-band interferometers. In this context, it is known that global maps of soil moisture and sea surface salinity are required to improve meteorological and climate prediction, as demonstrated by SMOS. In view of future L-band missions for land and ocean applications, the spatial resolution should improve from 40 km, as for SMOS, to 1-10 km. High-resolution measurements are vital for current and future EO data processing, improving the scientific monitoring of the Earth for weather forecasting and the study of climate change effects. The geophysical parameters, soil moisture for hydrology studies and salinity for an enhanced understanding of ocean circulation, are both vital for climate change models.
Potential ways to increase the aperture size are, for example, a larger physical aperture of the instrument itself, or the concept of multiple L-band antennas working as a network of sensors. The FFLAS mission concept focuses on the latter strategy and proposes a formation of satellites mounting L-band antennas, which can work as nodes of a network of sensors. This provides a combined interferometric solution and, by selecting proper geometrical configurations among the satellites, can improve the spatial resolution. Moreover, the geometry of the antenna influences the side-lobe levels of the interferometric solution, and the hexagonal array geometry is selected to improve on the SMOS single-satellite performance.
The FFLAS study envisions the deployment of a formation of three identical satellites mounting hexagonal L-band antenna arrays with a diameter of about 8 m. Starting from the scientific requirements of L-band interferometers, analyses are carried out on the combined interferometric solution to select the possible formation flying configurations. The baseline geometry is selected for the two main operative phases, scientific observation and payload calibration. The former requires that the satellites be positioned at the vertices of an equilateral triangle with a 12.4 m side, improving the spatial resolution to 9.8 km. This configuration is maintained along the orbit throughout the scientific phase; maintenance of the tight, rigid formation during interferometric activities is achieved by continuous control thrust with electric propulsion engines. The latter requires a reconfiguration manoeuvre to move the satellites into a new rigid formation in a Cold Sky Pointing configuration. This operation is required once per month, and continuous control thrust is applied to perform the manoeuvres and to maintain the formation geometry for about 15 minutes of payload calibration.
The main technology challenge is the design of an accurate and precise control and navigation system for the formation, given that the inter-satellite distance is on the order of 10 meters. This raises critical safety and collision-avoidance considerations for the study. Robust strategies to guarantee formation maintenance and, if needed, safe-mode transition are vital for the overall safety and feasibility of the mission. Moreover, an error in the relative position of the satellites could degrade the combined interferometric solution, and the study includes considerations showing the coupled effect between control accuracy and spatial resolution.
This work shows the possibility of controlling the relative position of the satellites with an accuracy on the order of 1 to 10 centimetres. Moreover, the control accuracy is tied to the absolute and relative navigation solutions: controlling the satellites at the centimetre level requires an even more accurate navigation solution. This is a major challenge for the FFLAS mission study, and the analyses performed result in the selection of GNSS-based navigation, which could be improved by adding optical sensors to the navigation filter solution. A preliminary analysis shows the possibility of achieving an accuracy on the order of 1 to 2 centimetres (1σ) for the real-time navigation solution, and millimetre-level knowledge of the baselines in the ground processing.
The study aims at providing a simulation environment for the FFLAS mission, to verify the main operative phases and the potential from both the guidance, navigation and control and the payload points of view. The relative motion is analysed through high-fidelity simulations to assess the realistic performance of the nominal FFLAS mission operations. The methodology could apply to other multi-satellite missions in similar orbital scenarios relying on GNSS navigation and carrying L-band interferometer payloads. Moreover, the use of continuous control thrust from electric engines could support different types of synthetic aperture applications based on tightly flying formations of satellites in non-Keplerian orbits. In conclusion, the FFLAS study could open the path to new mission concepts in the LEO region for soil moisture and ocean salinity studies with high-resolution measurements, exploiting L-band synthetic aperture arrays.
The Sentinel-1 Next Generation system shall be characterized by two main observation payloads: a C-band SAR and an AIS, jointly operated to provide continuity and enhancements to Copernicus services. The mission will deliver global coverage of all accessible land every 3 days with an image resolution better than 25 m² and a swath width of at least 400 km. On top of these key features, challenging image quality, radiometric performance and pointing capabilities are required, as well as highly accurate vessel detection. Furthermore, compatibility with the VEGA-C launcher is considered a driver for the definition of the system.
The requirements for the main imaging mode are considered drivers in defining the range of possible solutions for the overall SAR system design. In particular, the requirements on resolution, swath width, NESZ and DTAR combined pose a major challenge and lead to considering a range of solutions within the framework of the so-called “High Resolution Wide-Swath” techniques.
During the Phase A Study, different SAR Instrument concepts have been investigated to cope with the aforementioned requirements. The following candidates were selected as the most promising solutions to support the Mission and System requirements: a SAR Instrument equipped with a Directly Radiating Antenna (DRA), and a SAR Instrument equipped with a Large Deployable Reflector (LDR) and an active feed array.
The DRA solution foresees a planar active phased-array antenna with multiple channels, which allows the required swath width and resolution to be obtained by flying at an orbit height similar to that of Sentinel-1 First Generation. Several mission scenarios around a similar (or the same) orbit as Sentinel-1 have been studied. The LDR solution is based on the use of a Large Deployable Reflector antenna in combination with a multichannel active feed array. This solution is optimised when flying at high altitudes: in fact, exploiting the high gain ensured by the LDR, for a fixed size of the reflector, the length of the feed array can be reduced at high orbits, since a smaller range of look angles is needed for the beams to illuminate the required swath.
These two classes of solutions will be described at system level, highlighting the implications of the different orbit altitudes and taking into account possible opportunities for synchronization with the Sentinel-1 First Generation satellites and the ROSE-L constellation.
Microwave sensors, both active and passive, are particularly suitable for observing polar regions because of their insensitivity to solar illumination and cloud coverage. However, most microwave sensors are sensitive to surface or near-surface properties because of their frequency of operation. Beginning in 2009, measurements from L-band radiometers (ESA SMOS, NASA Aquarius and SMAP) have made it possible to derive deeper internal properties of ice sheets and sea ice as a result of the improved penetration capability at 1.4 GHz. It is estimated that such sensors are sensitive to about 30-40 cm of first-year sea ice and to the upper 500-750 m of ice sheets, allowing the estimation of sea ice thickness (SMOS Sea Ice Thickness product, 2021) and ice sheet internal temperature profiles (Macelloni et al., 2019). The development of improved techniques for mitigating radio frequency interference in L-band radiometric missions has further led to the idea of using lower frequencies for monitoring the polar regions. A first airborne prototype (the Ultra-WideBand software-defined RADiometer, UWBRAD) was developed in the US under a NASA-ESTO project led by The Ohio State University to observe brightness temperature spectra in the range 0.5-2 GHz (Andrews et al., 2017). Successful airborne campaigns in Greenland and Antarctica demonstrated the potential of this technique for inferring information on sea ice and ice sheet interiors (Andrews et al., 2017; Yardim et al., 2020; Jezek et al., 2018). Based on these promising results and the capabilities of the space industry, the CryoRad mission was proposed to ESA’s EE11 call. CryoRad consists of a single satellite hosting a single payload: a wideband, low-frequency microwave radiometer exploring the frequency range 0.4-2 GHz with continuous frequency sampling, specifically designed to address scientific challenges in polar regions. The capability of CryoRad’s low frequencies to explore greater depths in ice sheets and sea ice, as well as their enhanced sensitivity to ocean salinity, allows the measurement of specific key parameters that contribute to the three mission objectives: (i) improve understanding of the processes controlling the mass balance and stability of ice sheets and ice shelves, their current contributions to global sea-level rise, and their impact on future sea-level rise; (ii) improve sea surface salinity retrievals in cold waters to provide new insights into the freshwater cycle and water mass formation at high latitudes; and (iii) characterize sea ice growth and salinity exchange processes in the Arctic and Antarctic. In particular, the primary science products of CryoRad are: the ice sheet and ice shelf temperature profiles of Antarctica and Greenland from surface to base; the presence of englacial or basal water; estimates of sea surface salinity (SSS) aimed at reducing the uncertainties in cold waters of current L-band spaceborne radiometers; and improved estimates of sea ice freshwater and salinity fluxes, obtained by improving current capabilities in estimating sea ice thickness, in particular in the range 0.5-2 m, and by providing new information on sea ice salinity that has never previously been available from spaceborne remote sensing. CryoRad’s science products include Essential Climate Variables that can be directly assimilated into weather, climate and Earth System models.
Special attention in the instrument design is devoted to addressing the radio frequency interference (RFI) expected at these frequencies, using knowledge developed in previous missions and the results obtained from the airborne campaigns. The instrument observes at nadir using circular polarization in order to avoid Faraday rotation effects. The CryoRad swath is 120 km, with a spatial resolution on the ground that varies from 45 km at 0.4 GHz to 8 km at 2 GHz. The average revisit time is 3 days at latitudes higher than 60° and about 10 days at the equator. The mission benefits from synergies with other sensors operating in the same frequency range in both passive (i.e. SMAP, CIMR, SMOS and any SMOS follow-on, COSSM) and active (Biomass, ROSE-L) modes, as well as with other complementary missions such as CRISTAL, Sentinel-1, Sentinel-2, and higher-frequency microwave radiometers (e.g. MetOp-SG). CryoRad will open a new era in microwave radiometry and will provide new insights and capabilities to address multiple high-priority science questions beyond its main science objectives. While the mission was not recommended for implementation under the EE11 programme (mainly for cost reasons), it was recognized as being of great interest and high scientific maturity and was indicated as a commended mission. Initiatives are ongoing to improve its scientific and technological readiness both in the US and in Europe. The mission concept and recent results obtained from airborne campaigns will be presented at the meeting.
The technological trends of recent and future spaceborne Synthetic Aperture Radar (SAR) missions are characterized by the implementation of advanced acquisition modes and/or optimization strategies mainly aimed at achieving wider illumination swaths without significantly impairing the azimuth resolution [1].
For instance, in the forthcoming ROSE-L mission [2],[3], the simultaneous exploitation of different advanced acquisition techniques, namely ScanSAR [4], Scan On Receive (SCORE) [1] and Digital Beam Forming (DBF) on receive [5], will be employed to achieve an L-band SAR system with a very wide range swath. Note, in particular, that the implementation of the ScanSAR and SCORE techniques requires the re-configurability of the radar antenna pattern along the elevation angle. Moreover, the DBF technique requires that the radar echo be simultaneously received by separate antennas, each of them representing the terminal of a different receiving channel. To achieve the latter goal, according to the current ROSE-L design, the radar antenna consists of 5 aperture panels deployed along the flight direction. These 5 panels cooperate in TX mode, behaving as an array antenna, whereas they work separately in RX mode. As a matter of fact, the effective implementation of the ScanSAR, SCORE and DBF techniques does not involve particularly demanding constraints on the shape of the azimuth pattern of the TX radar antenna.
In this work, we show that shaping the azimuth pattern of the TX radar antenna can provide additional system capabilities with respect to those obtained through the above-mentioned ScanSAR, SCORE and DBF techniques. We investigate how to benefit from the availability of the 5 azimuth panels of the ROSE-L antenna array to shape the illuminated azimuth beam in such a way as to enable advanced ScanSAR capabilities. More specifically, we show that by properly acting only on the distribution of the input excitations of the 5 array panels, i.e., without changing the current antenna architecture and while retaining the same geometric resolution of the current ROSE-L system design, we can obtain a significant mitigation of the scalloping effect typically affecting ScanSAR acquisitions [6], or we can move from a single-azimuth-look configuration to a two-look one, thus opening the possibility of profitably exploiting these new scenarios in different applications; a sketch of the underlying array-factor computation is given below. The paper will present the results of the study as well as the performance trade-offs resulting from shaping the azimuth beams.
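As an illustration of the kind of trade explored in the paper, the sketch below computes the one-way TX power pattern of a 5-panel azimuth array for two excitation sets; the panel size, wavelength and weights are assumed values for the example, not the ROSE-L design, and the general case also includes phase excitations.

```python
import numpy as np

wavelength = 0.24                    # L-band [m] (assumed)
panel_len = 2.0                      # azimuth length of one panel [m] (assumed)
x = (np.arange(5) - 2) * panel_len   # panel phase centres [m]

def tx_gain(theta_deg, weights):
    """One-way TX power pattern of the 5-panel array at azimuth angle theta."""
    u = np.sin(np.deg2rad(theta_deg))
    element = np.sinc(panel_len * u / wavelength)   # single-panel aperture pattern
    af = np.sum(weights * np.exp(2j * np.pi * u * x / wavelength))  # array factor
    return np.abs(element * af) ** 2

for w in (np.ones(5), np.array([0.35, 0.8, 1.0, 0.8, 0.35])):
    g0, g_edge = tx_gain(0.0, w), tx_gain(0.5, w)
    print(f"weights {w}: gain drop at 0.5 deg = {10 * np.log10(g_edge / g0):.2f} dB")
```

A flatter gain across the angular span scanned within a ScanSAR burst reduces the azimuth gain modulation that produces scalloping; amplitude tapering, shown here, is only the simplest instance of such excitation shaping.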
It is finally highlighted that the results of the presented analysis, although tailored to the ROSE-L case study, can be easily extended to other systems, thus representing a valuable tool for the design of future SAR missions. In this regard, further validation of the presented analysis through in-orbit experiments with the ROSE-L system would be helpful.
References
[1] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, and K. P. Papathanassiou, “A tutorial on Synthetic Aperture Radar,” IEEE Geoscience and Remote Sensing Magazine, pp. 6-43, March 2013.
[2] M. Davidson, N. Gebert, and L. Giulicchi, “ROSE-L – The L-band SAR Mission for Copernicus,” EUSAR 2021; 13th European Conference on Synthetic Aperture Radar, 2021, pp. 1-2.
[3] M. W. J. Davidson and R. Furnell, “ROSE-L: Copernicus L-Band SAR Mission,” 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2021, pp. 872-873, doi: 10.1109/IGARSS47720.2021.9554018.
[4] K. Tomiyasu, “Conceptual Performance of a Satellite Borne, Wide Swath Synthetic Aperture Radar,” IEEE Trans. Geosci. Remote Sens., no. 2, pp. 108-116, 1981.
[5] N. Gebert, G. Krieger, and A. Moreira, “Digital beamforming on receive: Techniques and optimization strategies for high-resolution wide-swath SAR imaging,” IEEE Trans. Aerosp. Electron. Syst., vol. 45, pp. 564-592, 2009.
[6] G. Franceschetti and R. Lanari, Synthetic Aperture Radar Processing, CRC Press, New York, NY, USA, 1999.
Radar altimetry is a remote sensing technique that measures the distance between the satellite and the target surface. These data allow surface height to be measured, which can be used in a large number of applications, namely the study of sea level, ocean-surface topography, ocean dynamics, sea ice, and small inland water bodies (e.g. rivers or lakes), among others.
One of the challenges of altimeter missions is that the beam of the radar is very narrow and, as a consequence, the measurements are sparse both in space and in time. A solution to this problem is to launch more altimeters that observe different areas at different times. Technological advances in radar altimeters as well as in spacecraft platforms open the possibility of designing smaller and more compact instruments, which, in turn, would lead to a lower cost per unit and enable the creation of medium-sized altimeter constellations. An alternative is a smaller number of spacecraft carrying swath altimeters, a novel instrument concept that relies on radar interferometry at near-nadir incidence to provide high spatial resolution over a wide swath. This instrument is more complex and costly than a conventional altimeter; however, to achieve the same revisit and coverage, fewer spacecraft are needed than in the case of nadir radar altimeters. Moreover, it provides high spatial resolution over a relatively wide swath, which can enable new applications.
This work addresses the selection of the optimal phasing of a constellation of 2 to 12 satellites in a Sun-synchronous orbit. The optimal phasing depends on the spatial and temporal observation requirements of the ocean features to be observed. This study is dedicated to mesoscale ocean features and, consequently, we target a revisit time of 5 days with a spatial sampling of 50 km. For each resulting constellation configuration, the main characteristics are derived: intra-track distance and global coverage ratio, in 5 days and in 10 days. Operational constraints are also discussed and taken into account to derive the most advantageous orbital phasing. Several orbits can be selected for such an altimeter constellation, each with its advantages and disadvantages. In this context, placing the constellation in a Sun-synchronous (SS) orbit has clear advantages from the point of view of maximizing on-board power and simplifying the spacecraft design. For the radar altimeter constellation, an SS repeat ground-track orbit with 385 revolutions per 27 days emerges as a suitable choice. This orbit provides measurements up to 81 deg latitude, with about 100 km sampling at the equator. With more satellites, it is possible to reduce the sampling distance at the equator and increase the temporal sampling; for instance, with 12 satellites at different phasings, it is possible to achieve a revisit between 4 and 5 days and a spatial sampling of about 50 km at the equator. Moreover, since the repeat cycle and cycle length of this orbit are the same as those of the Sentinel-3 mission, the same ground-track can be selected, allowing for continuity of the hydrology measurements and an increase of the sampling frequency. For the swath altimeter constellation, previous studies have identified an SS repeat ground-track orbit with 245 revolutions per 17 days as a good candidate.
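As a back-of-the-envelope cross-check of the quoted sampling figures (a sketch that ignores the details of pass geometry and ascending/descending interleaving):

```python
EARTH_CIRCUMFERENCE_KM = 40075.0

revs_per_cycle = 385  # SS repeat ground-track orbit with a 27-day cycle
spacing_1sat = EARTH_CIRCUMFERENCE_KM / revs_per_cycle
print(f"1 satellite : ~{spacing_1sat:.0f} km between equator crossings")  # ~104 km

# N phased satellites can split the gain between space and time. For example,
# using 12 satellites as a factor of 2 in track spacing and a factor of 6 in
# revisit gives ~52 km sampling and a 27/6 = 4.5-day revisit, consistent with
# the ~50 km / 4-5 day figures quoted above.
n_sats, spatial_factor = 12, 2
print(f"{n_sats} satellites: ~{spacing_1sat / spatial_factor:.0f} km spacing, "
      f"~{27 / (n_sats / spatial_factor):.1f}-day revisit")
```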
To continue the long-term altimeter time series initiated by TOPEX/Poseidon in 1992, the new constellation needs to be judiciously cross-calibrated with the existing missions, namely Sentinel-3 and Sentinel-6. This work discusses multiple strategies for calibration and validation of the new constellations. The mean local solar time (MLST) of the Sentinel-3 mission (22:00) was selected primarily because of its optical instruments; a constellation of altimeters does not have to be operated in the same orbital plane. In fact, a dawn-dusk/dusk-dawn orbit has a number of advantages in terms of spacecraft design, which can be key to reducing the cost per unit. The drawback of orbiting in a totally different orbital plane from those of the Sentinel-3 and Sentinel-6 missions is that a tandem phase between the new constellation of altimeters and the existing spacecraft is not possible. However, the phasing can be selected such that two of the spacecraft of the constellation over-fly the same area as Sentinel-3A/B with a few seconds' difference at the high latitudes where the two orbital planes intersect. This will occur once per orbit (about 101 min) for the constellation of radar altimeters, providing opportunities for cross-calibration. Moreover, the first altimeters in the constellation can be cross-calibrated using transponders and corner reflectors. In particular, if the new radar altimeter constellation is placed on the Sentinel-3 mission ground-track, the transponder infrastructure of ESA's Permanent Facility for Altimetry Calibration in Gavdos/Crete can be leveraged for its calibration activities.
Simulations of very-low-baseline stereoscopic products in the frame of the Sentinel-HR mission
Jonathan Guinet, Julie Brossard, Julien Michel, Renaud Binet
The French space agency CNES is conducting a phase 0 study to explore the Sentinel-HR concept: an optical satellite in the metric spatial resolution class providing global, repetitive, systematic nadir observations with an open access policy. The minimum configuration would have four bands (blue, green, red, near-infrared) and the ability to make systematic single-pass stereoscopic observations. The foreseen applications are very wide, including land cover monitoring, change detection, etc. Moreover, the stereoscopic capabilities will provide Digital Surface Model (DSM) time series with an intra-annual revisit, which will be very interesting for numerous applications, including glacier monitoring. One of the explored instrumental concepts relies on Venµs, a French-Israeli optical micro-satellite delivering 5-meter imagery in visible and near-infrared bands, which is equipped with a redundant row of detectors in the focal plane allowing a DSM to be estimated for each acquisition. Venµs therefore has a very narrow stereoscopic angle, which is known to be challenging for stereoscopic reconstruction. In [1], A. Rolland et al. present a stereoscopic pipeline which gives very interesting results, leading to an estimated DSM noise of approximately 5 m RMSE.
In the frame of the phase 0 study, we investigated the Sentinel-HR stereoscopic capabilities of this instrumental concept, using simulated image acquisitions with the foreseen mission specifications.
This study relies on the CNES 3D simulator, which generates aliasing-free [8] satellite acquisitions given a 3D textured mesh and a geometric point of view. These simulations help us demonstrate the Sentinel-HR stereoscopic capability and its limits. Our simulation pipeline is composed of the following steps:
• Geometric model simulation: we generate a physical geometric model with Sentinel-HR specifications: low Earth orbit (LEO), Sentinel-2-like, low pointing, 2 m GSD, B/H of 0.035 North/South.
• Stereo pair rendering: we use the CNES 3D simulator to render Sentinel-HR images. At this step we can use raw geometric images (ideal Sentinel-HR images without radiometric or geometric perturbations) or radiometric simulations, which consist of conversion to TOA radiance, noise, and an instrument PSF approximation derived from Venµs and provided by CESBIO [9].
• Stereo reconstruction: CARS [2][3][4], a CNES open-source tool dedicated to producing DSMs from satellite imagery by photogrammetry, is then used to produce the DSM. A very precise correlator is needed to deal with the very low B/H; in our context, a 1-pixel disparity corresponds to a shift of approximately 60 meters in the altitude estimate (see the sketch after this list). Thus, the underlying CARS correlator, PANDORA [5][6], has been tuned in this study to achieve the best results.
• Analysis: absolute errors have been estimated by comparing the DSM generated with CARS against the 3D simulator DSM ground truth (generated using a 2.5D rasterisation of the DSM). The open-source CNES tool demcompare [7] has been used for this purpose.
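The sketch below, using the mission figures quoted in the pipeline description, shows why sub-pixel correlation accuracy is decisive at such a low B/H; the disparity values fed to it are illustrative.

```python
# Height sensitivity of a stereo pair: dh = GSD * dp / (B/H)
gsd = 2.0         # ground sampling distance [m]
b_over_h = 0.035  # base-to-height ratio

def height_error(disparity_px: float) -> float:
    """Altitude shift [m] produced by a given disparity error [pixels]."""
    return gsd * disparity_px / b_over_h

print(height_error(1.0))   # ~57 m for a full pixel, matching the ~60 m above
print(height_error(0.05))  # ~2.9 m for 1/20 pixel, the accuracy regime reported below
```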
This pipeline has been developed on the CNES high-performance computing platform using distributed processing (Dask for CARS and OpenMP/MPI for the 3D simulator).
Varied landscapes have been selected for test purposes; for each one, a set of stereoscopic pairs has been simulated, one in raw geometric mode and another with radiometric effects.
Examples of qualitative and quantitative evaluations are given below for 3 sites in France:
• Nanterre: dense cityscape with tall buildings
Good urban area reconstruction is observed. The estimated DSM noise is 3.3 m RMSE and 5.3 m at the last decile of mean errors. Illustration 1 gives an example of stereo reconstruction; from left to right: the ground truth, the DSM generated using raw geometric images, and the DSM generated using radiometric simulations.
• Canari (Corsica): two areas have been selected, scattered urban and a pit
Slope discontinuities are reconstructed. Very small buildings are lost. The estimated DSM noise is 2.8 m RMSE and 4.6 m at the last decile of mean errors.
• Argentière Glacier:
Preliminary results give an estimated DSM noise of 3.5 m RMSE and 5.6 m at the last decile of mean errors.
References:
[1] A. Rolland, R. Binet, and N. Bernardini, “DEM generation from native stereo Venµs acquisitions,” COSPAR, Tel Aviv, Israel, November 2019.
[2] Michel, J., Sarrazin, E., Youssefi, D., Cournet, M., Buffe, F., Delvit, J., Emilien, A., Bosman, J., Melet, O., L’Helguen, C., 2020. A new satellite imagery stereo pipeline designed for scalability, robustness and performance. ISPRS – International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
[3] Youssefi D., Michel, J., Sarrazin, E., Buffe, F., Cournet, M., Delvit, J., L’Helguen, C.,Melet, O., Emilien, A., Bosman, J., 2020. Cars: A photogrammetry pipeline using dask graphs to construct a global 3d model. IGARSS - IEEE International Geoscience and Remote Sensing Symposium.
[4] https://github.com/CNES/cars
[5] M. Cournet, E. Sarrazin, L. Dumas, J. Michel, J. Guinet, D. Youssefi, V. Defonte, and Q. Fardet, “Ground-truth generation and disparity estimation for optical satellite imagery,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020.
[6] https://github.com/cnes/pandora
[7] https://github.com/CNES/demcompare
[8] L. Moisan, CNES R&T: simulator of synthetic stereoscopic image pairs, Université Paris Descartes, 2010.
[9] https://labo.obs-mip.fr/multitemp/
ESA’s Soil Moisture and Ocean Salinity mission, SMOS, in operation since November 2009, is producing global maps of soil moisture and sea surface salinity with an average resolution of 40 km. In the context of a future L-band mission, it is necessary to address the future needs of a range of applications over land and ocean that call for much enhanced spatial resolution, down to 1-10 km. With today’s knowledge, the spatial resolution of a radiometer can be improved only by increasing its aperture size. In this context, the Formation Flying L-band Aperture Synthesis (FFLAS) mission focuses on the study of aperture synthesis at L-band using formation flying as a potential way to increase the spatial resolution significantly. The FFLAS mission concept consists of 3 hexagonal antenna arrays, about 7 m in diameter (slightly smaller than SMOS), each hexagon with 24 receivers per side, flying with their centres at the vertices of an equilateral triangle of about 13 m side. Such a rigid formation would be equivalent to an aperture of 21 m diameter, achieving 9 km nadir resolution with an effective sensitivity better than that of SMOS.
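As a rough cross-check of these figures (a sketch assuming a SMOS-like altitude of about 760 km and λ = 21 cm; the exact footprint also depends on the synthesized u-v coverage, the apodization and the incidence angle), the nadir footprint of an aperture of diameter D scales as

\[
\Delta x \sim \frac{\lambda\, h}{D} \approx \frac{0.21\,\mathrm{m} \times 760\,\mathrm{km}}{21\,\mathrm{m}} \approx 7.6\,\mathrm{km},
\]

of the same order as the 9 km nadir resolution quoted above for the 21 m equivalent aperture.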
This paper presents the FFLAS mission concept, comprising a novel formation flying approach for a remote sensing mission in low Earth orbit. The FFLAS formation flying strategy and mission phases are presented from launcher separation up to deorbiting, including formation flying acquisition and control, formation geometry reconfigurations for payload calibration, and autonomous passively safe formation acquisitions in case of an on-board collision-risk alert. Additionally, the paper demonstrates the technical feasibility and the possibility of deploying a formation of three identical satellites, each mounting an L-band payload for land and ocean applications, in a single shared launch configuration. A preliminary S/C design is shown, focusing on a) the design of a deployable mechanical structure able to accommodate the hexagonal L-band antenna array and b) the selection of the sensors and actuators needed to maintain the rigid formation control during science observations.
The main design decisions are presented. In particular, the overall mechanical architecture has undergone several design iterations so as to provide the required performance with reduced mass. Equipment accommodation has been performed so as to comply with the launcher requirements in terms of centre of gravity (CoG) in stowed configuration, and to ease the on-board guidance and relative navigation functions in terms of inertia and CoG variations. The primary structure is formed by twelve segments joined together by means of hinges. Each satellite is clamped to the launcher dispenser by means of two launch-locking devices that restrain the twelve segments so as to protect their deployment mechanisms from unwanted loads during launch. The launch-locking devices also initiate the deployment of the structure once the last device disengages it from the launcher. Additionally, an original deployment system, consisting of six sliding-axis mechanisms with a deployment angle of 180° and six conic rolling mechanisms composed of two single-axis hinges with a deployment angle of 90°, has been designed from the kinematic and structural points of view. This FFLAS design configuration allows the accommodation of three FFLAS satellites inside the Ariane 62 launcher in terms of volume, mass and performance.
Concerning relative navigation sensors, the baseline from the formation flying analysis assumes relative navigation to be performed by on-board differentiation of GNSS carrier data. GNSS raw data will be exchanged between the satellites through a dedicated RF inter-satellite link (additional to the optical link required for payload raw data exchange). Other dedicated relative navigation sensors, such as laser-based positioning sensors, could be introduced to improve accuracy, but current estimations assume that the required relative navigation and control performance can be achieved by GNSS raw data exchange. Electric propulsion has been selected as the main actuator for autonomous formation flying operations in order to cope with the required delta-V of above 2 km/s for rigid formation control over 10 years. With the aim of optimizing the satellite design in terms of mass, the electric propulsion has been studied to suit all aspects of the FFLAS mission (i.e. not only formation orbital control but also launcher injection error correction, station-keeping and collision avoidance, as well as deorbiting manoeuvring). A baseline electric propulsion architecture has been defined to cover the mission needs. The electric propulsion drives the electrical power system design, which has been sized accordingly, with appropriate margins in terms of accommodation in the spacecraft and of electrical performance.
The presentation of the preliminary S/C design is accompanied by the main estimated key performance figures for the satellite and the mission.
The SMOS HR mission: a high resolution L-band interferometric radiometer
Authors:
Asma Kallel(1), Thibaut Decoopman(1), Laurent Costes(1), Jean-Claude Orlhac(1), Nicolas Jeannin(1),
Thierry Amiot(2), Cécile Cheymol(2), Louise Yu(2), Raquel Rodriguez-Suquet(2), Patrice Gonzalez(2),
Nemesio Rodriguez-Fernandez (3), Eric Anterrieu (3), Yann H. Kerr (3)
(1) Airbus Defence and Space / ADS, 31 rue des Cosmonautes, 31402 Toulouse, France
(2) CNES, 18 Avenue Edouard Belin, 31400 Toulouse, France
(3) CESBIO, 18 avenue Edouard Belin, 31400 Toulouse, France
Abstract
SMOS is the first mission using interferometric radiometry to measure soil moisture and ocean salinity from space. The SMOS synthetic aperture is Y-shaped and composed of 69 antennas providing, since 2009, 2D maps with a spatial resolution of 30-55 km and a snapshot accuracy of 2-5 K. SMOS-HR (High Resolution) is a follow-on mission proposed to ensure the continuity of the scientific and operational products. It also addresses the scientific needs by improving the spatial resolution to 15 km while keeping the radiometric sensitivity required for geophysical applications below 1 K, with the same revisit.
The SMOS-HR instrument is a cross-shaped interferometer composed of four 8 m arms. The arms include the structural elements, the deployment mechanisms and the actuators, and are designed to meet the stiffness required for accurate knowledge of the antenna locations. This is mandatory to control the phase of the acquired signals and to minimize errors in image reconstruction. This antenna array size has been selected to achieve an improved spatial resolution (15 km at nadir at most) while keeping or improving the radiometric sensitivity (~1 K). Indeed, the increase of the synthetic aperture size (which would otherwise degrade the sensitivity) is compensated by the increased redundancy of the visibilities afforded by this novel geometry. In addition, the sun-synchronous orbit altitude (679.4 km), the interferometer tilt (~20°) and the antenna element spacing (~0.95λ) have been selected to meet the scientific needs of a 3-day revisit, a ground swath > 900 km and a maximum alias-free incidence angle of about 60°.
Another improvement in this instrument is the RF Interference (RFI) mitigation. SMOS and other L-band scientific missions (such as SMAP) detect numerous emitters on Earth that disturb the measurements, leading to corrupted or unusable data. In order to avoid discontinuities in the observations, a new RFI detection and mitigation technique has been proposed for on-board processing.
Following these requirements, the SMOS-HR instrument consists of 171 elements distributed over the four arms (about 40 elements on each) and the hub (the cross centre). Each element is an L-band antenna receiving H and V polarizations at the same time (contrary to SMOS), followed by 2 receiver chains, one per polarization, that amplify and filter the signals and then digitize them as complex samples. It is proposed for SMOS-HR to take advantage of the whole ITU band by slightly enlarging the filter bandwidth (w.r.t. 19 MHz for SMOS) in order to improve the radiometric sensitivity. At the same time, its Q factor has to be very good (at least the same as for SMOS) in order to properly reject out-of-band RFIs. The digital signals at each receiver output are transmitted to a central calculator in which all the on-board processing is performed. This calculator, located in the hub, is in charge of computing the complex correlation of each pair of antenna signals over the instrument integration time, and of the detection and filtering of RFIs. Breadboarding of a part of the calculator is ongoing at Airbus.
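An order-of-magnitude sketch of the correlation load implied by these figures (a rough count only, not the actual SMOS-HR processing budget):

```python
from math import comb

n_elements = 171
baselines_per_pol = comb(n_elements, 2)  # antenna pairs correlated per snapshot
print(baselines_per_pol)                 # 14535 complex correlations

# With H and V acquired simultaneously, the hub correlates each pair in both
# polarizations (plus possible cross-polar products for full polarimetry).
print(2 * baselines_per_pol)             # 29070 co-polar correlations
```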
In order to provide the necessary signals to each receiver, a preliminary architecture distributing a common local oscillator, sampling clock and calibration signals to all of them is under study (instead of several, as in the SMOS design, which led to rapid phase variations between antennas). The signals will mainly be distributed through optical links to save mass and to provide robustness in terms of electromagnetic compatibility.
The definition of the on-board interferometric and radiometric calibration strategy is ongoing, taking into account the lessons learnt from SMOS and the new architecture parameters of SMOS-HR.
The presentation will provide an overview of the SMOS-HR instrument objectives and design, covering the instrument architecture and the different trade-offs, and will present the performance prediction assessment.
The Arctic region, often viewed as an early indicator of many aspects of climate change, has recently been undergoing alarmingly increasing temperatures, retreating sea ice cover and record-low winter ozone concentrations. At the same time, the availability of data, even simple meteorological measurements, is severely limited in the Arctic. Observations from satellites operating in highly elliptical orbits (HEO) are particularly well suited for Arctic monitoring and can address the current sparsity in the spatial and temporal coverage of the polar regions by geostationary and polar-orbiting satellites. The Arctic Observing Mission (AOM) is a satellite mission concept currently under study by the Canadian Space Agency in partnership with Environment and Climate Change Canada. AOM would use a HEO to enable frequent observations of meteorological variables, greenhouse gases (GHGs), air quality and space weather over northern regions. These observations are important for operational weather forecasts, environmental monitoring and scientific research aligned with key Government of Canada priorities.
A meteorological imager in HEO would support global Numerical Weather and Environmental Prediction (NWEP) by providing key information over the Arctic such as atmospheric motion vectors and brightness temperature observations. Northern GHG observations would improve our ability to detect and monitor changes in the Arctic and boreal carbon cycles, including CO2 and CH4 emissions from permafrost thaw. Air quality observations would enhance our ability to monitor anthropogenic emissions and mid-latitude pollution transport, which will improve air quality forecasts. Space weather observations would support operational space weather forecasting to protect valuable space-based assets and improve our scientific understanding of solar-terrestrial interactions.
International collaboration and partnership are vital to AOM’s success. Improved meteorological and space weather observations of the North are of interest to the US and Europe, with NOAA, NASA and EUMETSAT participating in early mission development activities with Canada.
The AOM is currently undergoing a 2-year pre-formulation study (PFS), with several important activities scheduled to be completed by early 2024. The PFS will refine the options for the mission architecture, such as the number of satellites, the orbits and other technical and design aspects. In parallel to the technical studies, the socio-economic benefits of the mission will be investigated, and the roles and contributions of potential partners such as NOAA, NASA and other organizations will be refined. This presentation will provide an update on the plans and progress of the Canadian-led AOM mission in the ongoing effort to produce high-quality quasi-geostationary northern Earth observation and space weather data for free and open use by the international community.
One of the major limitations in the use of optical remote sensing is cloud contamination. This is especially critical in time-sensitive applications such as agriculture, which require data at specific observation dates or the implementation of regular time series. In recent years, one mitigation strategy has been to increase the available data through the use of multiple satellites in a constellation, such as the Copernicus programme's Sentinel-2 mission or the fleets of small satellites provided by Planet and other companies. Still, in many regions cloud-free observations are difficult to obtain due to very frequent cloud cover throughout the year (e.g. in the Amazon region) or in specific (often crucial) portions of the season (e.g. monsoon regions) [1]. This is even more true for older systems such as the Landsat missions, often rendering them basically unusable in these areas.
For the successful development of processing methods, particularly those based on machine learning and artificial intelligence, even small areas of undetected cloud cover can cause issues and confuse the model learning process. The topic of cloud detection has therefore been addressed for many decades now, most famously through established software such as the Fmask cloud and shadow detection algorithm for Landsat and Sentinel-2 products [2].
The topic of cloud removal, however, has gained interest only rather recently. A major reason for this is that many traditional approaches are relatively simple and rely on replacing missing parts of the image based on adjacent regions or previous observations. This often leads to sub-optimal results that have limited value for subsequent analyses. More advanced and more powerful approaches are being explored but are often limited by excessive processing requirements. Traditional processing can reach its limits here, even when run on a high-performance compute cluster.
The most promising candidate for addressing very hard computational problems is quantum computing. Quantum computing exploits quantum mechanical phenomena for information processing by encoding data in quantum bits (qubits). This offers great computational power but also requires a different kind of thinking than in the domain of digital computing. The latter may explain why quantum computing has not yet received widespread attention in the area of Earth observation. As of this writing, two major obstacles to a broader engagement with the topic are likely the following: 1) quantum mechanical phenomena such as superposition, entanglement, tunneling, decoherence, or the uncertainty principle appear unintuitive; 2) the mathematical tools and notation used to model these phenomena differ from those required for classical analysis methods. More precisely, the computational problem at hand has to be phrased as a unitary or Hamiltonian evolution of quantum states [3]. Moreover, it is far from obvious if and how a quantum speedup can be achieved; this crucially depends on the actual encoding used to map the problem into the quantum system. Acknowledging these difficulties, we explain how to mathematically phrase the cloud removal problem such that state-of-the-art quantum computing devices can process it. We follow the Probabilistic Graphical Model (PGM) methodology described in [4]. To this end, we explain the following three concepts:
1. How EO image series are interpreted as variables in discrete, high-dimensional, statistical models.
2. Why this type of model can benefit from quantum computation.
3. How the model is mapped into the state space of a quantum system.
Sentinel-2 data consist of measured intensities I at different locations S and times T. Thus, for each pair (s,t) ∈ S × T, the variable x(s,t) represents the observations, e.g., pixel data, at that specific location and point in time. Here, T is the length of the image series. Clearly, image series consist of millions of pixels. Due to limited resources, algorithms usually process a subset of pixels at a time. In the PGM approach, pixel subsets are selected from the image series via a fixed graphical structure. The overall task is then to estimate the full joint probability mass function over all vertices of the graphical structure. This model can then be applied to fill gaps and to identify noisy or cloudy regions in the data. In the simplest case, the graph is chain-structured; that is, each pixel subset contains data from one pixel at the very same location at different points in time. In a slightly more sophisticated cross-structured model, a fixed neighborhood of pixels is added at each point in time. Moreover, the stochastic dependencies considered between pixels are deduced from the edges of the graphical structure. Restricting the conditional independence structure is a computational simplification. Statistically, more complicated graphical structures, e.g., with cycles or larger cliques, result in models with higher accuracy. However, those models would be infeasible on classical digital computers, since the computational complexity grows exponentially with the size of the largest clique of the graph. Today’s Noisy Intermediate-Scale Quantum (NISQ) devices do not yet deliver enough resources, in terms of qubits, to realize very large models. Nevertheless, they already allow for high-order dependencies between variables.
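As an illustration of how the graphical structure selects pixel subsets, the sketch below extracts a chain (one location across time) and a cross (location plus 4-neighbourhood per time step) from an image series; the shapes and names are ours, not the authors' implementation.

```python
# Illustrative extraction of chain- and cross-structured pixel subsets
# from an image series of shape (T, H, W).
import numpy as np

series = np.zeros((10, 256, 256))   # placeholder image series, T = 10

def chain_subset(series, r, c):
    """One pixel location across all T time steps (chain model)."""
    return series[:, r, c]

def cross_subset(series, r, c):
    """The pixel plus its 4-neighbourhood at each time step (cross model)."""
    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    return np.stack([series[:, r + dr, c + dc] for dr, dc in offsets], axis=1)
```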
While qubits exhibit quantum mechanical effects, the data that we can read out from them is still binary (0 or 1). Each pixel’s data is high-dimensional and consists of measurements from several spectral bands. This is not an issue for digital computers, since it is known how to create large memories for storing millions of bits. However, the number of qubits is highly limited. Thus, for a proof of concept, we propose a rough discretization of each pixel’s values into 16 colors. Hence, 4 qubits suffice to encode the color of a single pixel. Assuming a chain model with T time steps, n = 4T qubits are required to encode the joint state of the probabilistic model. The corresponding quantum system is then fully described by the Hamiltonian $H_{Q,q} = \sum_{ij} Q_{ij}\,\sigma_z^{(i)}\sigma_z^{(j)} + \sum_i q_i\,\sigma_z^{(i)}$, where $\sigma_z^{(i)}$ is the Pauli-Z operator acting on qubit $i$, and $Q_{ij}$ as well as $q_i$ are weights determined via quantum maximum likelihood estimation [5,6]. Moreover, the weights $Q_{ij}$ and $Q_{ji}$ are fixed to 0 when the edge $(i,j)$ is not contained in the graphical structure. During learning, the likelihood $L(Q,q) = \prod_{x \in \mathcal{D}} \operatorname{Tr}\big(\tfrac{1}{Z(\beta)}\exp(-\beta H_{Q,q})\,\Pi_x\big)$ is maximized with respect to $(Q,q)$ for some fixed $\beta$. Here, $Z(\beta) = \operatorname{Tr}(\exp(-\beta H_{Q,q}))$ is a normalizing constant, also known as the partition function, $\Pi_x$ is the element of a positive operator-valued measure (POVM) that corresponds to the data point $x$, and $\operatorname{Tr}$ is the trace. Moreover, $\mathcal{D}$ is the training set that contains N fully observed pixel subsets as described above. Estimation with partially unobserved data is possible, based on the expectation maximization principle [7].
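Since the Hamiltonian above contains only Pauli-Z terms, it is diagonal in the computational basis, so for small n its energies and the partition function can be enumerated classically. The sketch below does exactly that with placeholder weights Q and q; in the actual method these weights come from quantum maximum likelihood estimation.

```python
# Classical enumeration of the diagonal Pauli-Z Hamiltonian for small n.
import itertools
import numpy as np

n = 8                                       # e.g. a chain with T = 2 time steps, 4 qubits each
rng = np.random.default_rng(0)
Q = np.triu(rng.normal(size=(n, n)), k=1)   # placeholder couplings on graph edges
q = rng.normal(size=n)                      # placeholder local fields

def energy(bits):
    z = 1 - 2 * np.asarray(bits)            # map bit 0/1 to sigma_z eigenvalue +1/-1
    return z @ Q @ z + q @ z

# Boltzmann weights exp(-beta * E) over all 2^n basis states, and Z(beta)
beta = 1.0
E = np.array([energy(b) for b in itertools.product((0, 1), repeat=n)])
Z = np.exp(-beta * E).sum()                 # the partition function, by brute force
```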
Due to extensive computational requirements, quantum computation can lead to a benefit for the following three sub-tasks:
1. The partition function Z(β) is required for parameter learning. Its computation is #P-hard; nevertheless, quantum gate approaches for that problem exist [8]. In addition, quantum annealing can be applied to solve the maximum a posteriori (MAP) inference problem (see below). Classical algorithms for estimating the partition function based on O(n log n) MAP queries are also available [9].
2. The NP-hard MAP problem corresponds to computing the pixel values that achieve the largest probability. This task is used for the actual cloud removal: if specific pixel values are unknown, e.g., those covered by a cloud, the model will predict their most likely values. Formally, the problem is $\max_x \operatorname{Tr}(\exp(-\beta H_{Q,q})\,\Pi_x)$; i.e., computing Z(β) is not required. It can be addressed via quantum annealing as shown in [10]. An alternative for solving the optimization problem on quantum gate devices is the Quantum Alternating Operator Ansatz and related approaches [11,12].
3. The third sub-task that is amenable to quantum computation is the computation of the gradient of the loss function. It has been shown in [13] how this problem can be addressed via quantum annealing. General methods for quantum gate devices are not known at the time of writing.
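For intuition, the MAP query on this diagonal model reduces to finding the minimum-energy basis state consistent with the observed (clear-sky) qubits. The brute-force sketch below stands in for the quantum annealing step; it takes an energy function like the one sketched earlier, and all names are illustrative.

```python
# Brute-force MAP completion: clamp observed qubits, minimize over the free ones.
import itertools
import numpy as np

def map_completion(energy, n, observed):
    """energy: callable on a bit list; observed: dict {qubit index: bit}."""
    free = [i for i in range(n) if i not in observed]
    best, best_E = None, np.inf
    for assignment in itertools.product((0, 1), repeat=len(free)):
        bits = [0] * n
        for i, b in observed.items():
            bits[i] = b                      # clamp clear-sky pixels
        for i, b in zip(free, assignment):
            bits[i] = b                      # try a completion for cloudy pixels
        if (E := energy(bits)) < best_E:
            best, best_E = bits, E
    return best
```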
Our results on an IBM Falcon quantum processor and a D-Wave Advantage quantum annealer suggest that quantum-based cloud removal can already deliver reasonable results today, and may outpace classical methods within the next 5 years.
References
[1] doi:10.1080/01431160010006926
[2] doi:10.1016/j.rse.2019.05.024
[3] isbn:978-1-10-700217-3
[4] doi:10.1109/DSAA49011.2020.00069
[5] doi:10.1103/physreva.64.024102
[6] doi:10.1063/1.1381908
[7] url:http://www.jstor.org/stable/2984875
[8] url:https://arxiv.org/abs/2110.15466
[9] url:http://proceedings.mlr.press/v28/ermon13.html
[10] doi:10.1007/978-3-030-43823-4_29
[11] url:https://arxiv.org/abs/2012.13453
[12] doi:10.3390/a12020034
[13] url:https://arxiv.org/abs/1510.06356
Like humans, spacecraft in future crowded environments must respond quickly and accurately to important stimuli under severe energy constraints. Vision systems for space will need to become orders of magnitude faster, while maintaining low edge power consumption. Neuromorphic engineering principles hold significant promise for providing speed improvements while also enabling low, “on-demand” system-level energy consumption.
Neuromorphic vision sensors, also known as Dynamic Vision Sensors (DVS), operate differently from conventional sensors. Instead of capturing full (or partial) frames with a single exposure time, each pixel operates independently from the others (continuous exposure), continuously comparing the incident light intensity against the level at which it last generated an event. This continuous comparison occurs in an analog circuit, consuming very little continuous power. When the incident light intensity changes beyond a certain threshold, an asynchronous event is generated containing the address of the pixel, a timestamp, and a 1-bit up/down (brighter/darker) polarity indicator. The sensor output is thus a very fast (< 1 ms latency), sparse stream of pixel intensity change events, hence the generic term “event-based vision” for this technology.
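A toy model of this event generation rule (our illustration, not the actual DVS circuit): a pixel emits an event each time its log intensity drifts one contrast threshold away from the level of its last event.

```python
# Toy per-pixel event generator for a sampled intensity trace.
import numpy as np

def events_from_intensity(t, intensity, threshold=0.15):
    """t: timestamps; intensity: per-sample light intensity at one pixel."""
    logI = np.log(np.asarray(intensity, float))
    ref = logI[0]
    events = []                       # (timestamp, polarity) tuples
    for ti, li in zip(t, logI):
        while li - ref >= threshold:  # brighter -> ON (+1) event
            ref += threshold
            events.append((ti, +1))
        while ref - li >= threshold:  # darker -> OFF (-1) event
            ref -= threshold
            events.append((ti, -1))
    return events
```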
Interest in DVS for space-based edge sensing applications has been increasing. In March 2021, the first DVS was launched to space, an iniVation DAVIS240C sensor. The payload was included on a cubesat built in a collaboration between Western Sydney University, the University of Zurich, UNSW Canberra Space, and the Royal Australian Air Force on a Rocket Lab M2 satellite mission. A subsequent launch to the ISS is expected to occur in late 2021 or 2022.
The most appropriate processing algorithms and architectures for handling DVS data depend on the particular use case. Here we consider a few examples: star tracking, relative spacecraft positioning, and atmospheric observation.
Visual star tracking is characterized by a need for high-resolution sensing to maximize accuracy, combined with a sufficiently high sampling rate to detect fast rotations and a low response latency. Using a DVS in this situation requires high resolution and low noise, while speed is of only moderate importance relative to the extremely high inherent DVS speed. Beyond LEO, the vision sensor views a mostly black background with small active patches, leading to relatively low data rates. In this situation, the most suitable DVS processing architectures have event-based characteristics, typically using a conventional CPU. Spiking neural architectures can also work well with this type of very sparse data, although neural visual positioning algorithms are still at an early stage of development. A related application is the detection and tracking of space junk, which places even higher demands on sensor resolution and noise.
Relative spacecraft positioning is important for situations such as docking and controlled formation flight. In these situations, very high detection robustness and accuracy are critical. Here, high-frequency flashing LEDs (in infrared or visible light) can be used to identify key points on spacecraft extremely robustly. With three or more known LEDs on a spacecraft visible at a time, the 3D pose of the spacecraft can also be calculated in real time. Our initial tests, using an IR LED modulation frequency of 1250 Hz with a wide-view 90 degree lens, indicate that we can reliably track a single 5 mm 20 mW LED at a range of over 20 m, using very little CPU. Adaptations of this system will work at arbitrary distances ranging out to multiple km, and it is also possible to encode address information into the pattern of LED flickering.
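A minimal sketch of how such flashing LEDs might be isolated in the event stream, assuming events arrive as coordinate and timestamp arrays and using the 1250 Hz modulation period from our tests; the tolerances and data layout are illustrative.

```python
# Keep only pixels whose ON-event timestamps recur at the LED modulation period.
import numpy as np

def led_pixels(xs, ys, ts, period=1 / 1250.0, tol=0.1, min_events=10):
    """xs, ys: per-event pixel coordinates; ts: ON-event timestamps (s)."""
    keypoints = []
    for x, y in set(zip(xs, ys)):
        t = np.sort(ts[(xs == x) & (ys == y)])
        if len(t) < min_events:
            continue
        dt = np.diff(t)
        # pixel belongs to an LED if most inter-event intervals match the period
        if np.mean(np.abs(dt - period) < tol * period) > 0.8:
            keypoints.append((x, y))
    return keypoints
```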
Atmospheric observation is generally characterized by a need for high spatial resolution, and less need for temporal resolution. One notable exception to this rule is in the observation of space lightning (“sprites”), which are fast, rare events. Work on this topic is planned by the University of Western Sydney. Observing sprites using DVS requires continuous processing to detect the lightning events hidden in a large amount of other data, caused by the continuous motion of the spacecraft relative to the Earth. An ideal sensor for this application would capture both frames and events to enable the sprites to be placed in an overall context. Most of the events will be due to predictable relative motion of the Earth and the spacecraft, and can thus be filtered out, while sprites will stand out as anomalous events. Our Dynamic and Active-Pixel Image Sensor (DAVIS) captures both events and frames in parallel, enabling this filtering to occur on-board and thus greatly reducing the amount of bandwidth required for off-board transmission.
While DVS technology holds significant promise for space edge applications, a number of challenges remain before widespread deployment can occur. What is clear is that DAVIS technology, combining frames and events, is essential for multi-purpose sensing in space. In addition, sensor resolutions need to be improved from the current state of the art of around 1 megapixel. Sensor noise must also be reduced, to minimize the amount of “useless” processing of noise events. This is particularly important in space, where the objects being detected may be very small or near the detection threshold. Our ongoing next-generation sensor and processor developments are addressing these challenges, as well as including a number of other features. These features will enable the step from the current proof-of-concept studies to real-world deployments of DVS/DAVIS technology in space.
(Image: University of Western Sydney, UNSW Canberra Space)
The volume and resolution of data coming from Earth Observation missions are becoming so large that new Artificial Intelligence techniques are needed to classify these data. Quantum computing is a novel technology for developing such techniques. Quantum computers are becoming available for development and testing purposes, and there is a race among vendors to deliver the computer with the most qubits; IBM, for instance, had just announced a 127-qubit system at the time of submission of this abstract. The vendors in the field promise hundreds or even thousands of qubits over the next few years. One of the advantages that quantum computing offers over classical computing is that the number of qubits necessary to encode the data grows as O(log N), while the number of classical bits needed grows as O(N). This means that above a certain amount of EO data, a classical solution is no longer possible and only a quantum solution is feasible.
Sebastianelli et al. have shown the use of hybrid classical-quantum convolutional neural networks for remote sensing imagery classification. Cong et al. have proposed a novel type of quantum convolutional neural network with a quantum convolutional and a quantum pooling layer. This convolutional neural network is fully quantum and offers a doubly exponential reduction in the number of needed qubits and in the number of parameters. We will describe the use of this quantum convolutional neural network for remote sensing imagery classification and show results on experimental Earth observation datasets.
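To give a flavour of such circuits, the hedged Qiskit sketch below builds a parameterized two-qubit "convolution" block applied across neighbouring qubits; the gate choices are illustrative and not the published ansatz of Cong et al.

```python
# Toy parameterized "quantum convolution" layer in Qiskit.
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def conv_layer(n_qubits, params):
    qc = QuantumCircuit(n_qubits)
    k = 0
    for i in range(0, n_qubits - 1, 2):    # apply the same block to qubit pairs
        qc.ry(params[k], i)
        qc.ry(params[k + 1], i + 1)
        qc.cx(i, i + 1)                    # entangle the pair
        k += 2
    return qc

theta = ParameterVector('theta', 4)
circuit = conv_layer(4, theta)             # 4-qubit toy feature-map block
```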
We use a development workflow based on the following sequence:
Atos myQLM -> IBM Qiskit -> IBM or IonQ quantum machines
By using the Atos myQLM front end, we are able to develop quantum code that is not tied to a particular architecture. This front end allows us to generate code for the IBM, IonQ, Google and Rigetti architectures.
The advantage of quantum computing for Earth Observation is its promise to tackle larger problems than is feasible with classical computers. In addition to the exponential reduction in time and resources, it also promises to avoid getting stuck in local minima, thanks to quantum tunneling. Our workflow allows developers to write applications that are not bound to one hardware vendor.
In recent years, Artificial Intelligence (AI), and in particular Deep Neural Networks (DNNs), have been applied onboard satellites because of the possibility of extracting actionable information with low latency. Indeed, researchers have been investigating onboard AI for quasi-real-time detection of catastrophic events, such as wildfires, floods and oil spills, and for applications requiring minimal response time, such as ship detection and others.
Given the significant computing requirements of DNNs, AI algorithms have been implemented on onboard AI accelerators in technology demonstrator missions, such as the Intel Movidius Myriad 2 Vision Processing Unit in Φ-Sat-1. Despite that, one of the main challenges in performing classification through AI algorithms on board satellites for multi-spectral and hyper-spectral images concerns the L0 to L2A pre-processing chain, which is still more demanding in terms of computation and power than the inference itself. For instance, pre-processing of multi-spectral and hyper-spectral images on board satellites can take tens of watts and minutes to tens of minutes, compared to the 2-10 W and fractions of a second to seconds demanded by DNN inference.
Because of that, we investigate an END2END model that allows inference directly on L0/L1 images and thus avoids the energy- and time-demanding pre-processing steps, enabling the use of DNNs on board satellites with higher duty cycles. This would enable applications requiring high duty cycles, and for which an onboard implementation is beneficial (e.g., fire detection, oil-spill detection, cyclone detection, and others), to be ported onboard satellites with small form factors.
To this aim, our targeted dataset is composed of L0/L1 and L2A multispectral and hyperspectral images, with image classification as the target application. In addition, we design the END2END model, processing L0/L1 images, to minimize the loss in accuracy compared to a baseline performing the same classification task on L2A images. The goal is to make the model robust to all the errors that the pre-processing is supposed to remove. Finally, we will compare the performance of our END2END model to the standard approach, inclusive of the pre-processing chain and the inference of DNN models on L2A images, to assess the advantages in terms of power consumption and latency.
Ship detection is of key importance in maritime surveillance for military applications, vessel traffic services, fisheries, and other commercial uses. Hence, it is a hot topic that can benefit from remote sensing tools, since they provide continuous and non-cooperative monitoring over both open-sea navigation routes and coastal areas characterized by dense human activities. The synthetic aperture radar (SAR) is an effective tool that routinely provides meter-resolution images of the Earth’s surface day and night, under almost all weather conditions. Such imagery can be used to detect, characterize and classify ships.
Among the different SAR imaging modes, polarimetry has shown great potential in providing valuable information for ship monitoring [1, 2]. Polarimetric SAR data are widely used to detect ships in a robust and effective way and to identify the type of ship, i.e., cargo, tanker, cruise ship, etc. Most of the approaches based on polarimetric SAR data rely on the different scattering behavior between ships and the background sea, as well as among ships of different structure. Accordingly, fully polarimetric SAR has a unique potential for the characterization and classification of target radar backscatter [3]. Nonetheless, the polarimetric scattering of ships depends on the SAR imaging parameters, including incident wavelength, spatial resolution, angle of incidence and sea state, and on ship features such as structure and heading [4]. Among the different polarimetric analysis tools based on fully polarimetric SAR imagery, polarization signatures can provide a better understanding of, and insights into, the behavior of ship and ocean scattering [5, 6].
In this study, a fully polarimetric L-band UAVSAR airborne high-quality (extremely low noise floor) SAR dataset is exploited to analyze both the polarized and unpolarized backscattering of a small ship observed under different incidence angles. This is a unique opportunity to close a gap we found in the literature, where similar analyses are performed but on heterogeneous datasets, i.e., different ships (often stationary) observed at sparse incidence angles under various sea state conditions [6]. The SAR dataset includes 7 SAR scenes collected over the Gulf of Mexico on 17 November 2016 during an approximately 2-hour-long acquisition, in which an about 21 m x 7 m ship traveling at an average speed of 16 km/h is observed over a wide range of incidence angles, i.e., about 35°-55°. Preliminary results show that useful information can be extracted from the polarization signatures of polSAR features, which can support the design of optimized and robust polarimetric ship detectors.
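For reference, a co-polarized polarization signature can be sampled directly from a 2x2 scattering matrix S as P(ψ, χ) = |pᵀ S p|², with p the transmit Jones vector of orientation ψ and ellipticity χ; the sketch below follows this standard construction (cf. [5]) and is not the authors' processing code.

```python
# Co-polarized polarization signature from a 2x2 complex scattering matrix S.
import numpy as np

def copol_signature(S, n=64):
    """Sample P(psi, chi) = |p^T S p|^2 over orientation and ellipticity angles."""
    psi = np.linspace(0, np.pi, n)            # orientation angle
    chi = np.linspace(-np.pi / 4, np.pi / 4, n)  # ellipticity angle
    P = np.zeros((n, n))
    for i, ps in enumerate(psi):
        R = np.array([[np.cos(ps), -np.sin(ps)],
                      [np.sin(ps),  np.cos(ps)]])
        for j, ch in enumerate(chi):
            p = R @ np.array([np.cos(ch), 1j * np.sin(ch)])  # transmit Jones vector
            P[i, j] = np.abs(p @ S @ p) ** 2
    return P / P.max()                        # normalized signature
```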
[1] G. Margarit, J. J. Mallorqui, J. Fortuny-Guasch and C. Lopez-Martinez, “Exploitation of ship scattering in polarimetric SAR for an improved classification under high clutter conditions”, IEEE Trans. Geosci. Remote Sens., vol. 47, no. 4, pp. 1224-1235, 2009.
[2] D. Velotto, F. Nunziata, M. Migliaccio, and S. Lehner, “Dual-polarimetric TerraSAR-X SAR data for target at sea observation,” IEEE Geosci. Remote Sens. Lett., vol. 10, no. 5, pp. 1114–1118, 2013.
[3] R. Touzi, W. M. Boerner, J. S. Lee, and E. Luneberg, “A review of polarimetry in the context of synthetic aperture radar: Concepts and information extraction,” Can. J. Remote Sens., vol. 30, no. 3, pp. 380–407, 2004.
[4] A. Marino, D. Velotto and F. Nunziata, “Offshore Metallic Platforms Observation Using Dual-Polarimetric TS-X/TD-X Satellite Imagery: A Case Study in the Gulf of Mexico,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 10, no. 10, pp. 4376 - 4386, 2017.
[5] J. J. Van Zyl, H. A. Zebker, and C. Elachi, “Imaging radar polarization signatures: Theory and observation,” Radio Sci., vol. 22, no. 4, pp. 529–543, Jul./Aug. 1987.
[6] R. Touzi, J. Hurley, and P. W. Vachon, “Optimization of the degree of polarization for enhanced ship detection using polarimetric RADARSAT-2,” IEEE Trans. Geosci. Remote Sens., vol. 53, no. 10, pp. 5403–5424, 2015.
The Mekong Delta, inhabited by more than 20 million people, is among the most biologically diverse and agriculturally productive waterscapes in the world, but sea-level rise, land subsidence, upstream hydropower dams and extensive delta-based water infrastructure have raised concern due to their potential impacts on the hydrology of the region. Furthermore, most of the Delta lies less than 2 m above sea level and hence is highly vulnerable to the additive effects of regional pumping-induced land subsidence and sea-level rise due to global climate change. Critically, these two factors directly impact a variety of hazards, including subsurface saline intrusion, increases in the depth and duration of annual flooding, and naturally occurring arsenic contamination.
Large-scale land subsidence can be measured with high accuracy from space using satellite-based SAR imagery processed by interferometry (InSAR). Since 2015, the Sentinel-1 mission has provided open-access, systematic data with a 6-day revisit and 20 m spatial resolution. Sentinel-1 data thus offer the best opportunity for land subsidence monitoring. This is also a challenge, however, because of the massive data volumes that need to be handled, and in particular the need to ensure that the result is consistent over a full wide area. Here we measure the average subsidence trend over the past 5 years for the entire Mekong Delta region in a single Sentinel-1 frame.
The recently advanced PSDS technique (permanent scatterers and distributed scatterers) has become the main InSAR tool for many deformation applications due to its superior performance. In this paper, we exploit the state-of-the-art PSDS approach to gain a better understanding of the capabilities of Sentinel-1 C-band data in estimating subsidence phenomena in the Mekong Delta.
To benchmark our performance, we used the SBAS processor from LiCSBAS, while the combined PSDS technique was carried out on the TomoSAR platform. We found that the more sophisticated algorithm of the PSDS TomoSAR platform yields a denser distribution of measurement points, and more measurement points mean better performance. The combined technique outperforms SBAS thanks to a better phase correction.
For validation, we compared our results with those of a small-region analysis in Ho Chi Minh City. The cross-comparison indicates an essentially identical spatial distribution, with less than 1 mm/year difference in velocity.
To our knowledge, this is the first consistent demonstration of Mekong Delta subsidence monitoring, made possible by our coherent PSDS InSAR processing. Finally, the result will serve as a model for subsidence investigations in other areas similar to the Mekong Delta that are also threatened by the combined impacts of subsidence and sea-level rise and the resulting increase in inundation hazard.
The coastal zones are challenging to study with satellite data because of extreme differences in land versus sea characteristics and shallow-water response to the observed signal. The study is further complicated by dynamically complex and highly variable weather systems (e.g., monsoons), numerous boundary currents (e.g., Gulf stream, Agulhas, etc.), oceanic dipoles, upwelling and downwelling, and dust/aerosol effects. However, these are also the areas that are critical to our survival. Despite the known limitations in satellite capabilities for coastal applications, the improving spatial, temporal and radiometric resolutions and stable sensor characteristics onboard both polar and geostationary imagers have made them attractive for such studies. As a result, there is an ever-growing thrust on coastal applications using satellite data in conjunction with other available information to get the best value from an integrated approach. In this context, much information about the coastal areas is readily available. However, an integrated and timely visualization of these datasets is still non-trivial as visualization technology is an evolving field. The Satellite Oceanography and Climate Division (SOCD) of NOAA STAR is actively pursuing an effort to ease such visualization and provide a knowledge base for coastal events and processes in a friendly and less resource-consuming web interface. We are conceptualizing the CEOS COAST Application Knowledge Hub (AKH) to enable simultaneous displaying of:
[a] satellite-based ocean parameters (biological, physical), [b] social data (population density, human base, vulnerability), [c] shoreline characteristics, [d] seabed properties (bathymetry, etc.), [e] station measurements (precipitation gauge, etc.), [f] a set of base maps to provide context, [g] waterways, [h] elevation, and [i] a set of curated major coastal events that caused significant damage (storms, HABs, etc.).
What separates this from other resources is the simultaneous use of satellite and non-satellite data and a lightweight application that focuses on different coasts globally. A range of map interactions and controls will ease the use of this tool in both conventional monitors and mobile displays. We will leverage existing tools and technology, e.g., the NOAA OceanView (https://www.star.nesdis.noaa.gov/socd/ov/) and the NASA WorldView (https://worldview.earthdata.nasa.gov/). The COAST AKH will be publicly released in 2023 and presented in detail. This preliminary presentation aims at sharing our vision, interactively soliciting suggestions and feedback, and gauging different features of interest.
Coastal Erosion from Space is a project funded by ESA whose main objective is to determine the feasibility of using a range of satellite images (both optical and SAR) to monitor coastal change, collecting Coastal State Indicators (CSI) that describe the dynamic state and evolutionary trends of coastal systems. The project aims to develop a global service for monitoring coastal erosion, environmental risk assessment, and research on the potential impact of climate change on the coast.
Within this activity, isardSAT has developed a processing chain to generate the coastal change products using C-band Synthetic Aperture Radar (SAR) data from Sentinel-1, although it is extendable to other SAR missions operating at different bands. Working in all weather conditions and independently of sunlight, Sentinel-1 provides very high spatial (10 m x 10 m) and temporal (6-day revisit time) resolution, allowing the coastal evolution to be monitored with hundreds of freely available satellite scenes under the Copernicus programme. This represents the main strength of SAR technologies in contrast with optical images, which are unusable for this type of application when the Area of Interest (AOI) is even partially covered by clouds.
The methodology consists of four main processes:
Firstly, each available S1 scene, separately for ascending and descending tracks, is processed to output a georeferenced image. This process, also called “pre-processing”, is composed of several sub-steps that have been implemented in the SNAP toolbox provided by ESA.
The second process (composed of Enhancement, Segmentation, Healing, and Vectorisation) produces a vector line, named the waterline (WL), that corresponds to the separation between land and water. These sub-steps can be configured with a range of options, specified as parameters in an input Configuration File. The main aim of this process is to improve the quality of the output, reducing as much as possible the erroneous features that might appear in the initial estimation of the waterline.
Then, in the process called “Quality control”, three parameters are computed for each WL:
- The distances x_i between the WL and a reference line.
- The angle between the coast and the satellite ground track.
- The density of lines.
Finally, in the last process, considering all the WLs and their distances from the reference line, the change rate product is calculated, showing the evolution in time of the coast under analysis (erosion or accretion). To do that, a series of polygons along the reference line is created. Each polygon is defined by a width w and a length l across the reference line, and for each of them a change rate product is calculated, considering only the WLs and their distances included in that polygon. A second filtering is then applied to discard possible outliers: from the distance values x_i included in each polygon, their mean μ and standard deviation σ are calculated, and only the distance values that satisfy |x_i − μ| ≤ σ are used to calculate the change rate product for that polygon. If the WLs are widely dispersed and σ > 50 m, the WL distances are instead modelled statistically as a Gaussian Mixture Distribution (GMD) with k components, and the mean and standard deviation used for the second filtering are derived from the component with the largest population. Once the second filtering is completed, the remaining WL distances are used to perform a linear regression analysis, defining the change rate product as the slope of the linear relationship fitting the available data. The sign of the slope indicates whether that polygon has undergone erosion (negative slope) or accretion (positive slope). Moreover, the method provides a quality flag depending on the number of available data and on the R-squared parameter, which evaluates the goodness of fit between the calculated slope and the distances used (a sketch of this filtering and regression step is given after the list below). Depending on the end user’s request, the tool presented in this poster is tunable and can provide the change rate with different:
- time sampling (weekly, monthly, annually, etc.).
- space sampling (by appropriately defining the width and number of polygons).
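The sketch below illustrates the per-polygon filtering and regression step described above, assuming distances and acquisition dates are available per polygon; the two-component mixture and the variable names are our assumptions.

```python
# Per-polygon change-rate estimation: outlier filtering, optional GMD fallback,
# then linear regression of WL distances against time.
import numpy as np
from scipy.stats import linregress
from sklearn.mixture import GaussianMixture

def change_rate(dates_yr, x):
    """dates_yr: acquisition times in decimal years; x: WL-to-reference distances (m)."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    if sigma > 50.0:
        # Large dispersion: model distances as a Gaussian mixture and keep the
        # component with the largest population for the outlier filter.
        gmm = GaussianMixture(n_components=2).fit(x.reshape(-1, 1))
        k = np.argmax(gmm.weights_)
        mu, sigma = gmm.means_[k, 0], np.sqrt(gmm.covariances_[k, 0, 0])
    keep = np.abs(x - mu) <= sigma            # second filtering: |x_i - mu| <= sigma
    fit = linregress(np.asarray(dates_yr)[keep], x[keep])
    # slope < 0 -> erosion, slope > 0 -> accretion; rvalue**2 feeds the quality flag
    return fit.slope, fit.rvalue ** 2
```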
The capacity to generate waterline time series globally at such temporal and spatial resolutions opens the door to analysing the specific external forcings that drive the observed changes at each specific beach. With this goal, waterline time series will be correlated with historical time series of meteorological variables relevant to coastal erosion, such as wave conditions (height, direction and period), wind and sea level pressure. A data-driven approach will be implemented with the goal of training a machine learning model to relate the external forcings to their corresponding impacts on the shoreline. Once trained, this model will provide highly valuable information on which variables and meteorological conditions lead to the most dramatic coastal erosion events, which could later be used to adopt specific and effective adaptation measures.
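A hedged sketch of this planned data-driven step; the regressor choice, the feature set and the placeholder data are our assumptions rather than the project's final design.

```python
# Learn shoreline change from meteorological forcings (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# rows: observation epochs; columns: wave height, wave direction, wave period,
# wind speed, sea level pressure (placeholder values)
X = rng.random((200, 5))
y = rng.standard_normal(200)          # waterline displacement (m), placeholder

model = RandomForestRegressor(n_estimators=200).fit(X, y)
print(model.feature_importances_)     # which forcings drive erosion the most
```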
Sea Level Rise (SLR) is one of the biggest socio-economic consequences of global climate change in the 21st century. Advances in Earth Observation and its ability to monitor the Earth’s Essential Climate Variables (ECVs) have enabled closure of the sea level budget within uncertainties since the start of satellite altimetry in 1992, as demonstrated by the ESA CCI Sea Level Budget Closure programme. As regional sea level change can deviate from the global mean by multiple factors, understanding and monitoring the processes that drive these non-uniform changes is key for policy makers and for assessing local-scale impacts.
Regional sea level is in part influenced by terrestrial mass redistribution and loading changes, which have gravitational, rotational and deformation (GRD) impacts on the Earth. The spatio-temporal pattern of these GRD effects is known as ‘Sea Level Fingerprints’ or Barystatic-GRD fingerprints. The magnitude and relative contribution of these fingerprints to regional sea level changes are expected to increase over the coming decades, based on current trajectories of land ice mass change. Accurate quantification of this phenomenon is therefore crucial for closing the regional sea level budget, in addition to other applications such as improving GRACE estimates of ice sheet mass balance, sea level reconstructions at tide gauges, and studies of inter-basin ocean mass transport. A comprehensive Barystatic-GRD fingerprint product that spans the whole altimetry era is therefore essential for a wide variety of Earth system studies.
Previous Barystatic-GRD fingerprint studies have mainly either focused on a single observation approach (e.g., GRACE) or been limited to annual temporal resolution. Here, we present monthly Barystatic-GRD fingerprints from 1992 until present, resolving both the total fingerprint and the contributions of individual terrestrial components (Antarctica, Greenland, glaciers, and hydrological mass changes). Our approach utilises multiple observations from the ESA CCI programmes, in addition to other contemporary products, in an ensemble modelling scheme. This ensures that both the uncertainties of the observations and potential biases between products are incorporated into the resulting fingerprint. This work is part of the Fingerprinting Approach to close Regional Sea Level Budgets using ESA-CCI (FACTORS) ESA CCI Research Fellowship.
Satellite sensors are used to monitor land and water on a large scale. One of the key variables affecting the water-leaving signal is suspended particulate matter (SPM), and it is thus important to understand its properties to improve remote sensing algorithms. However, only a few studies investigating the variability of SPM properties such as concentration, nature and size under different seasonal, weather and geographical conditions have been carried out in the Baltic Sea. The study area is located in the relatively shallow Pärnu Bay (mean depth of 4.7 m and maximum depth of 10 m), where sediments are transported by rivers, resuspended by wave action and advected by currents. Four field campaigns were conducted using a set of instruments measuring inherent optical properties (e.g. absorption, scattering, backscattering), auxiliary data (e.g. temperature, depth, salinity) as well as particle size distributions. The results show that the SPM concentration, particulate absorption at 442 nm, mass-specific particulate scattering at 660 nm, and mass-specific particulate backscattering at 660 nm vary with weather conditions and location within 6.75–19.6 mg L-1, 0.13–4.00 m-1, 0.17–0.74 m-1, and 0.003–0.013 m-1, respectively. These particle properties are described during a phytoplankton bloom, strong wind, and calm conditions. The spectral backscattering ratio, which is generally considered constant in bio-optical remote sensing algorithms, is wavelength-dependent and varies depending on the origin of the particles (organic and mineral matter), particle size distribution, weather conditions, and location (0.017–0.08 m-1). In situ particle size measurements in the coastal waters of Pärnu Bay and in the laboratory, with calculated Junge exponents (2.5–3.5 in situ and 3.9–4.9 after sonication) and median particle diameters (6.4–21.0 µm in situ and 2.8–5.2 µm after sonication), also show that resuspended fine clay particles agglomerate into flocs of > 30 µm with random shapes and different sizes in the brackish waters of the Baltic Sea.
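For readers unfamiliar with the Junge exponent quoted above, it is the negative log-log slope of the particle size distribution, N(D) ∝ D^(−j); a minimal fit might look as follows (bin values are placeholders, not measured data).

```python
# Fit the Junge exponent j from a measured particle size distribution.
import numpy as np

def junge_exponent(diam_um, counts):
    """diam_um: size-bin centres (e.g. from a LISST); counts: number concentration."""
    slope, _ = np.polyfit(np.log(diam_um), np.log(counts), 1)
    return -slope    # j, the Junge exponent

bins = np.array([2.0, 5.0, 10.0, 20.0, 50.0])       # placeholder bin centres (um)
psd = np.array([800.0, 120.0, 25.0, 5.0, 0.6])      # placeholder concentrations
print(junge_exponent(bins, psd))                    # j in the range quoted above
```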
Coastal environments are a transition zone between the land and the open sea that gathers different habitat types and supports human activities. Total Suspended Matter (TSM), which reaches the coastal zones via rivers and streams, is one of the main water quality indicators. It comprises organic and inorganic constituents, such as phytoplankton, Colored Dissolved Organic Matter (CDOM), and sediment particles. While TSM is vital for the preservation of river deltas and provides habitat for microorganisms, it can also accumulate and transport pollutants from the upper reaches of rivers to their estuaries, with negative effects on the physical and human environment. The objective of this study is to detect, measure, and map TSM in the estuaries of the transboundary rivers Axios, Strymonas, and Evros in Greece, utilizing Copernicus Sentinel-2 MSI and Sentinel-3 OLCI multispectral optical satellite data as well as Sentinel-1 OCN data (2020-2021). The three rivers are shared between Greece and its neighboring countries, and their estuaries make up important wetlands protected by national and international conventions. According to CORINE Land Cover 2018, the drainage basins of the rivers are mainly covered by agricultural, forest and seminatural areas. Although several conventions and agreements have been signed, the environmental legislation regarding the use of transboundary water resources has not been fully observed. This results in the contamination of the Axios, Strymonas, and Evros rivers with phosphorus, nitrates, and ammonia from the industries, aquaculture, intensive livestock farming, and sewage treatment plants located in the basins. These pollutants can then be carried by TSM to the estuaries. Additionally, economic activities such as ports, fish farms, and tourist facilities are located in the wider estuarine areas and can be negatively affected by TSM. Thus, measuring and monitoring the TSM concentration in the aforementioned estuaries and their broader coastal areas is important for monitoring the condition of the area.
The detection, measurement and mapping of TSM were carried out after heavy precipitation events that took place in the transboundary river basins of Axios, Strymonas, and Evros, according to meteorological data acquired from the stations of the National Observatory of Athens (NOA). TSM concentration in each estuary was also monitored for one month after the events. First, Copernicus Sentinel-3 OLCI EFR open satellite data were downloaded from the Copernicus Open Access Hub platform, based on the dates on which the extreme precipitation events occurred, in order to determine the dates on which the highest TSM concentration was observed in each estuary. This was implemented by composing natural color RGB images using the open-source ESA SNAP software. Then, Copernicus Sentinel-2 MSI Level-1C high-resolution satellite data were obtained, according to the Sentinel-3 images selected for each event. In total, 7 Sentinel-2 images for the Axios and Strymonas estuaries and 6 for the Evros estuary were downloaded, including one summer image for each study area. After the Sentinel-2 images were acquired, they were preprocessed, which included resampling the images to 10 m/pixel resolution so that all bands have the same spatial resolution, applying cloud masks, and subsetting them using SNAP. Because of the complexity of the coastal areas being studied, the main processing of the Sentinel-2 images was carried out using the Case-2 Regional CoastColour (C2RCC) processor available in SNAP.
The processor was originally developed by Doerffer and Schiller (2007) and was improved through the ESA CoastColour project (http://www.coastcolour.org/). The C2RCC uses Neural Networks (NNs) as its basic technology and an extensive database of radiative transfer simulations of water-leaving radiances and top-of-atmosphere radiances, as well as different models, such as a 5-component bio-optical model. It relies on the Inherent Optical Properties (IOPs) of water, i.e., the processes of absorption and scattering that depend on the Optically Active Constituents (phytoplankton, CDOM, and sediments) present in the water, to calculate the TSM concentration. Regional parameters, such as sea surface temperature, salinity, and atmospheric pressure, were set for each estuary in the processor's user interface; these parameters were obtained from respective datasets. After performing atmospheric correction, the C2RCC uses the water-leaving reflectances to determine the IOPs and finally calculates the TSM concentration (g/m3) using arithmetic conversion factors. The result is a new product that contains the TSM concentration (“conc_tsm”). For the visualization of each “conc_tsm” product, a color palette was applied, and the file was exported in .kmz format. The mapping of TSM concentration was then performed in ArcGIS Pro software. To study the events more effectively, Copernicus Sentinel-1 Level-2 OCN satellite data, as well as wind direction and speed data from the meteorological stations of NOA, were utilized to qualitatively determine the relationship between wind direction and speed and the observed TSM direction.
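For illustration, a C2RCC run like the one described can be scripted through SNAP's Python bindings; the sketch below assumes the snappy bindings are installed and that the operator alias 'c2rcc.msi', the parameter names, the input path and the parameter values match the SNAP setup used (all of these are our assumptions, not details from the study).

```python
# Hedged sketch: batch C2RCC processing of a Sentinel-2 L1C scene via snappy.
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

product = ProductIO.readProduct('S2A_MSIL1C_scene.SAFE')  # hypothetical input path
params = HashMap()
params.put('salinity', '35.0')      # regional parameter, illustrative value
params.put('temperature', '15.0')   # sea surface temperature (degC), illustrative

tsm = GPF.createProduct('c2rcc.msi', params, product)     # output includes 'conc_tsm'
ProductIO.writeProduct(tsm, 'c2rcc_output', 'BEAM-DIMAP')
```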
The C2RCC processor effectively calculated the TSM concentration in the river estuaries from the Sentinel-2 Level-1C images. TSM concentration in the wider estuarine areas exceeded the limit of 25 g/m3 set by Directive 2006/44/EC of the European Parliament and of the Council, and was higher than 30 g/m3, especially right after the heavy precipitation events. The results were also compared with an image from a summer month for each estuary, and it was observed that in the case of Axios, TSM concentration reverted to normal limits one month after the event, whereas in the cases of Strymonas and Evros the concentration remained higher. It was also found that the direction of TSM observed in the images matched the wind direction, except in the case of the Strymonas estuary. It is important to mention, though, that the presence of clouds over the Evros study area did not allow accurate mapping of the TSM concentration. Finally, the high TSM concentration in the three estuaries is suspected to have had negative ramifications for living organisms, as well as for the economic activities that take place near the study areas. In conclusion, Copernicus open-access satellite data and open-source algorithms provided by the European Space Agency (ESA) can effectively contribute to the study of extreme natural events that affect both the physical and the anthropogenic environment. In particular, detection and monitoring of TSM can contribute to preventing the negative effects of high TSM concentrations on the living organisms and human populations located near the estuaries and coastal areas, as well as on economic activities.
Water is vital for the functioning of the Earth system. Waterbodies are important sources of drinking water, food supplies, leisure activities, and trading routes for humankind. Therefore, it is important to monitor water quality in seas, lakes and rivers. In situ measurements are usually scarce and expensive to conduct, but the Copernicus programme satellite series has added a supplementary high-frequency monitoring tool. Optically active substances (e.g. suspended particulate matter, phytoplankton, coloured dissolved organic matter) that can be monitored by optical satellite instruments are part of the material that impacts water quality.
This study focuses mostly on suspended particulate matter and its properties, such as backscattering and size distribution. The particle size distribution may span a wide range, from small single particles to flocs of various sizes and shapes. The size and nature of the particles impact their sedimentation rate: for example, the smallest particles remain in the water column longer, increasing the turbidity of the water, whereas larger particles settle faster.
The main aim of the study is to create a link between the suspended particulate matter size distribution and the reflectance spectra, both measured in situ. This is the first step towards the monitoring of particle size distributions by satellites. Afterwards, satellite monitoring of particle size distributions could serve as an input for modelling the behaviour of a waterbody during dredging, or how construction or other land use changes in nearby areas could affect the waterbody of interest.
For that purpose, two optically very different study areas, both highly impacted by suspended particles, were selected. The first site is the region of freshwater influence of the Rhone River (maximum depth around 100 m). The Rhone River is one of the biggest rivers in Europe and is dominated by suspended particles of mineral origin; it discharges high quantities of suspended particulate matter into the Gulf of Lions, Mediterranean Sea (9.6 Mt/yr). The second site is Pärnu Bay, which is also highly influenced by suspended particulate matter, especially during wind episodes. The particles suspended in the water column originate from several sources: bottom sediment resuspension induced by the wind due to the shallowness of the bay (maximum depth 10 m), the transport of particles by the Pärnu River, and advection by currents. In contrast to the Rhone River, high quantities of dissolved organic matter are discharged by the Pärnu River into the bay, making it an optically complex waterbody.
The fieldwork was conducted in the Rhone River area in January 2020 and in Pärnu Bay in August 2018 (Figure 1).
Figure 1: Station locations. A – Sentinel-3 OLCI L1 image on 14.08.2021 over the Rhone River mouth; B – Sentinel-2 MSI L1 image on 31.10.2021 over Pärnu Bay.
The suspended particle concentrations varied from 0.81 to 20.8 mg L-1 and from 5.5 to 19.6 mg L-1 in the Rhone River mouth and Pärnu Bay, respectively. The water optical properties (e.g. absorption, scattering, backscattering coefficients), reflectance spectra and particle size distributions were measured in both areas with Wet Labs AC-S, ECO-VSF3, TriOS RAMSES, and LISST-DEEP/LISST-100X instruments, respectively. During the fieldwork, the total absorption coefficients varied from 0.08 to 0.8 m-1 and from 0.7 to 14.6 m-1, and the backscattering coefficients from 0.003 to 0.1 m-1 and from 0.02 to 0.2 m-1, in the Rhone River region of freshwater influence and Pärnu Bay, respectively.
Firstly, the Junge parameters and backscattering slopes at the surface are computed from the in situ particle size distribution and backscattering data. Then, the backscattering coefficients are modelled by feeding the in situ reflectance spectra and absorption data into the QAA algorithm, and the modelled backscattering coefficients are compared with the in situ backscattering coefficients. The aim of this step is to simulate the retrieval of the backscattering coefficient from the remote sensing reflectance. Next, the backscattering slopes computed from the modelled backscattering coefficients are linked with the Junge exponents from the in situ data through a correlation and a linear equation, so that Junge exponents can be obtained from in situ reflectance spectra. Finally, the results obtained from the two areas are compared.
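A minimal sketch of this final linkage step, with the power-law form of bbp(λ), the reference wavelength and the placeholder station values all being our assumptions.

```python
# Estimate the spectral slope of (modelled) particulate backscattering, then
# link it linearly to the in situ Junge exponent.
import numpy as np

def bbp_slope(wavelengths_nm, bbp, ref_nm=532.0):
    """Fit bbp(lam) = bbp(ref) * (ref_nm / lam)**eta; return eta, the slope."""
    lam = np.asarray(wavelengths_nm, float)
    eta, _ = np.polyfit(np.log(ref_nm / lam), np.log(bbp), 1)
    return eta

eta = np.array([0.8, 1.1, 1.4])     # slopes from QAA-modelled bbp (placeholder values)
junge = np.array([3.0, 3.4, 3.8])   # in situ Junge exponents (placeholder values)
a, b = np.polyfit(eta, junge, 1)    # linear link: junge ~ a * eta + b
```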
The study of the coastal zone is of crucial importance due to both anthropogenic activities (e.g., mining of seashore sand for building purposes) and natural phenomena (e.g., coastal erosion), which threaten the stability of the land and the safety of people. However, the monitoring of coastal areas is very challenging due to the presence of different kinds of habitats, including coastal plains, beaches, rocky shorelines, and sand bars.
Within this context, remote sensing plays an important role, and Synthetic Aperture Radar (SAR), with its day-and-night and almost all-weather capabilities together with wide area coverage, is a key instrument.
This study analyzes the time variability of the coastline of the Basilicata region, located in the southern part of Italy, using a time series of Sentinel-1 C-band SAR scenes collected in the Interferometric Wide (IW) HH+HV mode. The main goal is to develop multi-polarization [1] and multi-temporal methods to effectively monitor the coastline and to characterize the level of erosion that is threatening the coastal region of Basilicata. The test site was selected since it is severely affected by coastal erosion, which makes its monitoring a very important issue.
The experimental part consists of extracting coastlines from the time series of S1 imagery collected from 2015 up to 2020 and inter-comparing them to infer their time variability. The SAR-based results will be contrasted with optical ones and with in situ information provided by local authorities.
Preliminary results show that: a) the proposed methodology effectively detects the changes that occurred due to coastal erosion; b) the SAR-detected changes fit well with those obtained from optical measurements.
[1] F. Nunziata, A. Buono, M. Migliaccio, and G. Benassai, “Dual-polarimetric C- and X-band SAR data for coastline extraction,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (J-STARS), vol. 9, no. 11, pp. 4921–4928, 2016.
Global mean sea level rise (MSLR) will persist along all land-sea borders around the world, whether or not anthropogenic pressures on the environment decrease. Recent studies indicate global warming approaching 2 degrees by 2030 and an MSLR of 9 mm per year in the worst IPCC scenarios. Thus, understanding the consequences of the increase in MSLR will help to generate mitigation plans at all levels of population settlement, from communities to countries. MSLR hazards have been intensely discussed in conferences (e.g., COP26), reports (e.g., the IPCC 6th Assessment Report, IPCC-AR6), and other initiatives such as the United Nations Ocean Decade. However, many coastal cities in Brazil have no monitoring systems or plans to deal with climate change, and there is no guideline for policy makers in their communities, mainly concerning the MSLR. This study forecasts the area and population impacted by the MSLR in the year 2050 on an island: the municipality of Ilha Comprida, in the state of São Paulo, on the southeastern Brazilian coast. The island has 74 kilometers of uninterrupted beaches, and its total area of 192 km2 is fully included in an environmental protection area. Except for a small mountain ~40 meters high, most of the island lies at altitudes between 3 and 5 meters. This case study could represent a methodological framework to be reproduced for other Brazilian coastal zones thanks to the use of freely available remote sensing data sources. The study used urban area (UA) limits from a Brazilian annual land use and land cover mapping project (MapBiomas), based on annual composites of Landsat imagery for 2020. Sentinel-2 true colour scenes for the same year were used to validate and adjust the MapBiomas land use and land cover classes. The population projection for 2050 was estimated by the Brazilian Institute of Geography and Statistics (IBGE). This population projection is based on the latest census of 2010, and it was spatially associated with the UA classification map using a dasymetric approach. The mean sea level in 2050 was estimated as the mean sea level in 1990 plus the annual MSLR rate and the highest historically observed tide. The initial 1990 mean sea level was obtained from a tide gauge located in Cananéia, a city neighbouring Ilha Comprida. The highest historical tide value was taken from a tide table available online (https://tabuademares.com/br/so-paulo/ilha-comprida). The MSLR projection for 2050 was acquired from the latest IPCC-AR6 report, available through the NASA Sea Level Projection Tool web viewer. The flooded regions in the study area were estimated by applying the MSLR projection to the digital elevation model (DEM) obtained from the Shuttle Radar Topography Mission (SRTM). Finally, the maps of population distribution over the UAs and of the flooded regions were combined in a geographical information system to derive an MSLR risk map for Ilha Comprida in 2050. A 2050 population of 12,332 inhabitants was estimated and spatially redistributed over the UAs, resulting in a mean population density of 543 inhab/km2, a more accurate population distribution than one based on the total area of the municipality (~60 inhab/km2). This difference occurs because Ilha Comprida has large extents of protected areas associated with a “demographic void” in the region.
Moreover, since UAs cannot extend over protected areas, the population must be allocated within the existing urban infrastructure. The mean sea level projection for 2050 was 1.99 meters above the 1990 level, which after adding the maximum historical tide value reached 3.38 meters. Of the approximately 17 km2 of UAs, 18% will be impacted by the MSLR. In total, 11% of the projected 2050 population, about 1,350 inhabitants, requires attention in the study area. The largest flooded regions were located in the northern part of the island, where erosion also contributes to the loss of territory to the sea. This sediment transport was reported by previous studies based on long-term satellite monitoring, a further example of useful information derived from remote sensing data. Although the predicted flooded areas are not very large, they occur in some of the most densely populated UAs (> 1,000 inhab/km2). These flooded areas will especially impact tourism, the main source of local income, by shrinking the sandy beach and invading the commercial infrastructure on the shore. Although the projection of future scenarios for the area can be refined, our results already show that Ilha Comprida needs careful attention from government planning agencies and the full involvement of the local community in the construction of a new local urban land use plan. This strategy can certainly be extended to many other Brazilian coastal communities where MSLR will imply changes in land use and land occupation.
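As a rough illustration of the overlay step described above (projected flood level applied to the SRTM DEM, combined with dasymetrically redistributed population), the following minimal Python sketch shows the core computation. The file names are hypothetical, the rasters are assumed co-registered in a metric projection, and the 1.39 m tide value is simply the difference between the reported 3.38 m flood level and the 1.99 m sea level projection; the actual workflow was carried out in a GIS.

```python
import numpy as np
import rasterio  # any raster I/O library would do

# Hypothetical file names, for illustration only.
DEM_PATH = "srtm_ilha_comprida.tif"      # SRTM elevation, metres
UA_DENSITY_PATH = "ua_pop_density.tif"   # dasymetric density, inhab/km2 (0 outside UAs)

MSL_RISE_2050 = 1.99        # m above the 1990 mean sea level (from the abstract)
MAX_HISTORICAL_TIDE = 1.39  # m, so the flood level reaches 3.38 m
FLOOD_LEVEL = MSL_RISE_2050 + MAX_HISTORICAL_TIDE

with rasterio.open(DEM_PATH) as dem_src, rasterio.open(UA_DENSITY_PATH) as ua_src:
    dem = dem_src.read(1).astype(float)
    density = ua_src.read(1).astype(float)
    # Pixel area in km2; assumes a projected CRS with metre units.
    pixel_area_km2 = abs(dem_src.res[0] * dem_src.res[1]) / 1e6

flooded = dem <= FLOOD_LEVEL                                # boolean flood mask
flooded_ua_km2 = (flooded & (density > 0)).sum() * pixel_area_km2
exposed_pop = (density * flooded * pixel_area_km2).sum()    # inhabitants per pixel, summed

print(f"Flooded urban area: {flooded_ua_km2:.1f} km2, "
      f"exposed population: {exposed_pop:.0f} inhabitants")
```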
Coastal instabilities are widespread along the central coastline of the Principality of Asturias (N Spain). In this study, the A-DInSAR technique, based on the Sentinel-1 constellation, is applied to assess cliff activity in the Peñas Cape area. This area, with an extension of 187 km2, has a total population of 393,364 inhabitants and comprises five administrative areas. The methodology consisted of the following steps: i) processing of 113 Sentinel-1 images (in descending trajectory) using the Persistent Scatterer Interferometry software of the Geomatics Division (PSIG) of the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) and the Geohazard Exploitation Platform (GEP) of the European Space Agency (ESA); ii) selection of local areas according to the landslide database of the Principality of Asturias (BAPA - Base de datos de Argayos del Principado de Asturias); iii) estimation of VLOS and deformation time series; iv) validation of the A-DInSAR results by means of a field campaign; and v) analysis and interpretation of the A-DInSAR results at regional and local scales. Regarding the results, the estimated VLOS rates ranged from 1.7 to 3.7 cm year-1 and from -2.3 to 3.8 cm year-1 for the PSIG software and the GEP platform, respectively. These ground motions could be related to coastal instabilities, both at regional and local scales. In addition, the BAPA database has allowed us to interpret some local areas with significant coastal-instability activity. The combination of the two A-DInSAR approaches and the BAPA database has demonstrated the potential of A-DInSAR techniques to evaluate coastal instabilities, and has improved the knowledge of coastal instabilities along the rocky coast of central Asturias. Finally, different limitations associated with the application of A-DInSAR techniques on cliffed coasts are discussed. Future research will involve improving the interpretation of the A-DInSAR results through: i) re-processing in ascending trajectory for the estimation of the horizontal and vertical components and ii) estimation of VSLOPE from the VLOS measurements.
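The planned derivation of VSLOPE from VLOS is not detailed in the abstract; a common approach in the A-DInSAR literature is to project the line-of-sight velocity onto the local steepest-slope direction and guard against near-perpendicular geometries. The sketch below illustrates that generic projection, under stated assumptions, not the authors' implementation.

```python
import numpy as np

def vslope_from_vlos(v_los, los_unit, slope_aspect_deg, slope_angle_deg, c_min=0.3):
    """Project a LOS velocity onto the local downslope direction.

    v_los            : LOS velocity (cm/yr), positive towards the satellite
    los_unit         : LOS unit vector (east, north, up) of the SAR geometry
    slope_aspect_deg : downslope azimuth, degrees clockwise from north
    slope_angle_deg  : slope inclination from the horizontal, degrees
    c_min            : floor on |C| to avoid blow-up when LOS is
                       nearly perpendicular to the slope direction
    """
    a = np.radians(slope_aspect_deg)
    s = np.radians(slope_angle_deg)
    # Unit vector pointing downslope (east, north, up components).
    slope_unit = np.array([np.sin(a) * np.cos(s),
                           np.cos(a) * np.cos(s),
                           -np.sin(s)])
    c = float(np.dot(np.asarray(los_unit, dtype=float), slope_unit))
    if abs(c) < c_min:                 # common safeguard in the literature
        c = c_min if c >= 0 else -c_min
    return v_los / c
```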
The impact of storms on the coastal zone produces a series of coastal hazards such as beach erosion, overwash and inundation. The consequences of these processes are extensive damage to existing infrastructure, disruption of coastal uses and disturbance of coastal ecosystem services. Furthermore, these impacts and consequences are expected to grow with increasing storm activity and sea-level rise due to climate change.
One important process associated with storms is the resulting Extreme Coastal Flooding (ECF), which is the sum of the storm surge, the astronomical tide, and the wave runup. Within this context, the main aim of this work is to present a remote sensing-based Extreme Coastal Flood Potential Indicator (ECFPI) and its application to the St. Louis region in Senegal, Western Africa.
The methodology to obtain the ECFPI consisted of characterizing the recipient (the beach) and the driving forces (the ECF). The recipient was typified by obtaining the maximum sub-aerial beach elevation from digital elevation models derived from Pléiades satellite imagery using stereo-photogrammetry (Taveneau et al., 2021), and by deriving the beach slope from coastlines extracted from Sentinel-2 and Landsat satellite images in combination with tide data from the FES2014b ocean tide model (Vos et al., 2020). The driving forces (ECF) were characterized through a three-step procedure. The first step consisted of identifying and ranking the most energetic storms based on their energy content (Mendoza et al., 2011) using the Wave Watch III and ERA-Interim wave databases. The second step consisted of characterizing the tide and the storm surge associated with the identified extreme storms: the tide was obtained from the FES2014b ocean tide model (Carrere et al., 2016), while the storm surge was obtained through the Dynamic Atmospheric Correction (Carrere et al., 2016). The third step consisted of using an empirical parameterization of extreme runup (Stockdon et al., 2006) associated with these storms, combining wave data and the extracted beach slope. The sum of these three components was used to estimate the ECF induced by the storms.
The ECFPI was obtained by dividing the ECF (storm surge plus tide plus runup) by the maximum sub-aerial beach elevation. ECFPI values < 0.7 were ranked as Low, values between 0.7 and 1.0 as Moderate, and values > 1.0 as High.
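As a concrete reading of the indicator, the sketch below combines the Stockdon et al. (2006) runup parameterization with the surge and tide to form the ECF, and then applies the ranking thresholds above. The numeric inputs in the example are illustrative only, not values from the study.

```python
import numpy as np

G = 9.81  # m s-2

def runup_r2(h0, t0, beta_f):
    """Stockdon et al. (2006) 2% exceedance runup (m).
    h0: deep-water significant wave height (m); t0: peak period (s);
    beta_f: foreshore beach slope (dimensionless)."""
    l0 = G * t0**2 / (2 * np.pi)                       # deep-water wavelength
    setup = 0.35 * beta_f * np.sqrt(h0 * l0)
    swash = np.sqrt(h0 * l0 * (0.563 * beta_f**2 + 0.004))
    return 1.1 * (setup + swash / 2)

def ecfpi(surge, tide, h0, t0, beta_f, z_max):
    """Extreme Coastal Flood Potential Indicator: ECF / max beach elevation."""
    ecf = surge + tide + runup_r2(h0, t0, beta_f)      # extreme coastal flood level
    ratio = ecf / z_max
    if ratio < 0.7:
        return ratio, "Low"
    elif ratio <= 1.0:
        return ratio, "Moderate"
    return ratio, "High"

# Illustrative values only (not from the study):
print(ecfpi(surge=0.3, tide=1.0, h0=3.5, t0=14.0, beta_f=0.08, z_max=4.0))
```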
The methodology was applied to a nine-kilometer stretch of the Langue de Barbarie region, Senegal, through a series of transects placed every twenty meters (478 transects in total). This example assumed that a 25-year return period storm (which occurred in November 2018 and ranked as one of the most energetic storms in the 1979-2020 wave data) reached the coast in its present-day beach configuration (using a 2021 digital elevation model of the region).
The beach elevation along these transects varied from 0.8 m to 6.19 m and the beach slope ranged from 0.01 to 0.2. The results show that, for this type of event under the same tide and storm surge conditions and the present-day coastal configuration, 68% of the coast presented High potential flood values, 20% Moderate values and 12% Low values.
The anthropogenic pressure and the effects of climate change (e.g., sea level rise) on coastal regions drive a greater need for regular, accurate and up-to-date information about intertidal environments. Identification of areas of increased vulnerability is vital to ensuring that communities and habitats are protected from further erosion and damage. The detection, extraction and monitoring of the land/water interface is a necessary step towards building reliable coastal indicators. The latter are fundamental for determining the parameters for modelling the morpho-hydrodynamics of coastal areas, for flood forecasting, and for coastal management.
Spaceborne monitoring of intertidal environments is essentially performed through the analysis of waterlines. Large areas of coastline can be imaged simultaneously by different sensors, allowing the generation of waterline data up to the resolution of the sensor. The diurnal cycle and weather conditions such as cloud cover strongly limit the number of optical observations, so generating waterlines from optical sensors is challenging. Synthetic Aperture Radar (SAR) observations are an excellent option for coastal monitoring in such areas, and are particularly useful for waterline monitoring.
In the framework of the Climate Change Impact and Ecosystem Resilience Project, we have developed a processor for the detection of coastal waterlines based on C-band SAR products from the Copernicus Sentinel-1 mission.
In this presentation we propose an automatic approach for the determination and post-processing of SAR waterlines from Sentinel-1 GRD images. Time series of waterlines successfully capture the areas of greatest variance over the available Sentinel-1 products.
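The abstract does not disclose the detection algorithm itself; purely as an illustration of a generic land/water separation on SAR backscatter, a minimal sketch might look as follows (speckle smoothing, Otsu thresholding and sub-pixel contouring). The operational processor may differ substantially.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu
from skimage import measure

def extract_waterline(sigma0_db, smooth_px=5):
    """Illustrative land/water separation on a calibrated SAR backscatter
    image (dB): crude speckle smoothing, Otsu threshold, sub-pixel contouring.
    A generic sketch, not the project's operational processor."""
    smoothed = uniform_filter(sigma0_db, size=smooth_px)  # speckle reduction
    thr = threshold_otsu(smoothed)                        # water is usually darker
    water = smoothed < thr
    # Iso-contours of the water mask give candidate waterlines.
    contours = measure.find_contours(water.astype(float), 0.5)
    # Keep the longest contour as the main waterline (assumes one coastline).
    return max(contours, key=len)
```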
A sensitivity study has been undertaken to assess the processor performance and the impact of polarization, topographic shadow, etc., on waterline detection.
The comparison with optical waterlines and with in-situ measurements showed an error of about 1-2 pixels (10-30 m) over stable coastal areas.
Furthermore, we provide some insights into the potential of this technique using data from new and future satellite missions.
Currently, more than 40% of the world's population lives in coastal regions, a quarter of them less than 10 meters above sea level. Floods, saltwater intrusion, tsunamis, subsidence and erosion are among the natural risks to which coastal zones are exposed. The coastal region of Vietnam is particularly vulnerable to these risks because urbanization, agri- and aquaculture, tourism, infrastructure and industry compete for the low-lying, narrow and attractive areas close to the coast. Therefore, a profound understanding and monitoring of coastal processes is crucial to protect the environment, infrastructure and people.
Coastline change analysis for Vietnam was conducted for a time series from 1984 to 2021, employing a cloud-based processing strategy on Google Earth Engine. The analysis is based on Landsat-derived annual 75th-percentile Modified Normalized Difference Water Index (MNDWI) composites, representing the mean high-water level, and was executed for the entire shoreline of Vietnam. Contours were extracted at sub-pixel level. Linear regressions were calculated along shore-normal transects to quantify coastline change rates. A hotspot analysis identified the coastal segments with the highest erosion and accretion rates. Coastal segments are considered hotspots when five neighboring transects are either erosional or accretional with a mean change rate of at least 5 m/yr. The transects were created at an interval of 200 m, which means that a hotspot covers at least 1,000 m of coastline. A validation of the automatic Landsat-based coastline detection yielded a sub-pixel accuracy of 8 m on a single Landsat acquisition. The results of the country-wide analysis show that the accumulated sum of accretion and erosion led to a slight increase in land area. Yet at the local level, the erosion and accretion patterns vary to a large extent. Erosion hotspots are located mainly at the coasts of the Mekong Delta (e.g. Ca Mau, Tien Giang, Bac Lieu) and Nam Dinh province. The longest accretion segment, of 39 km, is located close to Hai Phong city.
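For readers unfamiliar with the compositing step, the following Google Earth Engine (Python API) fragment sketches a 75th-percentile MNDWI composite for a single year and a single sensor. Band names are those of Landsat 8 Collection 2 Level 2; the bounding box is a rough illustration, and cloud masking and the multi-sensor harmonization used in the full 1984-2021 analysis are omitted.

```python
import ee
ee.Initialize()  # assumes an authenticated Earth Engine environment

# Rough bounding box around the Vietnamese coast, for illustration only.
aoi = ee.Geometry.Rectangle([105.0, 8.5, 109.5, 21.5])

def mndwi(img):
    # MNDWI = (green - SWIR1) / (green + SWIR1); L8 SR bands SR_B3 / SR_B6.
    return img.normalizedDifference(["SR_B3", "SR_B6"]).rename("MNDWI")

col = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
       .filterBounds(aoi)
       .filterDate("2020-01-01", "2020-12-31")
       .map(mndwi))

# Per-pixel 75th percentile over the year approximates the mean high-water stage.
composite_p75 = col.reduce(ee.Reducer.percentile([75]))
```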
The central Vietnamese Thua Thien Hue province is not among the major hotspots of coastline dynamics. Yet it is a good example of why the understanding and monitoring of coastline dynamics is of high relevance. The province is home to the largest lagoon in Southeast Asia, the Tam Giang-Cau Hai lagoon. The lagoon is separated from the sea by an elongated, narrow sandy barrier that protects the lagoon and its hinterland from waves, coastal flooding and storm surges. In addition, several settlements (communes) are located on the barrier, agri- and aquaculture are practiced, and recreational sites exist.
The coastline change quantification for Hue province showed that more than half of the coast can be classified as predominantly stable, with annual change rates of around 0.5 m/yr. However, 26% of Hue's coast was found to be erosional, with average change rates beyond -0.5 m/yr, while 20% mainly accreted at more than 0.5 m/yr. Hence, Hue's coast as a whole eroded slightly, by -0.13 m/yr on average. Yet, along the narrow barrier section south of the Thuan An inlet, intense average erosion rates were identified for the past 35 years.
Five local hotspots with strong coastline change rates were identified for Hue province using locally adapted hotspot identification parameters (transect interval of 100 m; minimum change rate of 3 m/yr). Three of them are erosional and two accretional, with maximum erosion rates of -15 m/yr and maximum accretion rates of +18 m/yr; several are located at the lagoon inlets. One of the erosion hotspots is located at the Thuan An inlet. It shows an average erosion rate of -4 m/yr and a length of 900 m. The erosional process has not been constant over time: while strong erosion started only in the 2000s, accretion predominated at the hotspot from 1988 to 1999, and after 2014 the coastline dynamics stabilized. On the opposite side of the lagoon inlet, the reverse pattern could be observed. The headland has been classified as an accretion hotspot with average accretion rates of +3 m/yr and a length of 1,700 m. Phases of erosion and accretion also alternated here. The two processes are most likely linked through sediment redistribution. Severe erosion hotspots have further been identified north and south of the lagoon inlet at Tu Hien in the south of Hue province. The average erosion rate is between -3 and -4 m/yr and stretches over more than 5 km, interrupted only by the lagoon inlet. The coastal erosion has been rather constant and is not accompanied by equivalent accretion in the neighboring segments, indicating a deficit in the local sediment budget. A severe accretion hotspot is located north of Binh An, with an average change rate of +9 m/yr. According to visual interpretation, this accretion has been caused by the construction of a new harbor.
Overall, even though most parts of Hue's coastline remained stable over the past 35 years, strong coastline changes were observed locally. The causes of coastline change are manifold and interrelated, but it is reasonable to conclude that the erosion hotspots in Hue were caused by dynamic sediment redistribution as well as reduced coastal sediment availability. Accretion emerged both through natural sediment displacement and as a result of direct human intervention through stabilization, construction and land reclamation.
Most of the Dutch, German and Danish North Sea coast is commonly referred to as the 'Wadden Sea'. Its main characteristics are intertidal flats, i.e., coastal areas that fall dry once during each tidal cycle. Such intertidal flats can also be found on the UK east and west coasts, along the French Atlantic coast, in South Korea and in northwest Africa. Since the German Wadden Sea became a UNESCO World Natural Heritage site, frequent surveillance of the entire area has been mandatory. As the areas of interest are often difficult to access, remote sensing techniques are a useful tool for observing them.
Morphological changes such as erosion or even the displacement of whole sandbanks are a common phenomenon on intertidal flats, as they are constantly exposed to strong tidal currents. Synthetic Aperture Radar (SAR) sensing techniques have many advantages for the surveillance of these dynamics: in particular, SAR sensors do not depend on cloud-free weather conditions, which is especially important given the usually high cloud coverage on the German North Sea coast. In addition, since the launch of the two Sentinel-1 satellites in 2014 and 2016, very good temporal coverage of several SAR acquisitions per week has been available.
Waterline detection on SAR data faces some challenges, because the intensity of the signal backscattered from water and from wet sediment may be very similar. Also, the local wind conditions have a strong influence on the usefulness of the SAR data. Following Wiehle and Lehner (2015), an algorithm was developed to extract waterlines from SAR data using the so-called "Edge Drawing" method. By combining the obtained waterlines with tide gauge data recorded at the time of each SAR acquisition, contour lines of the intertidal flats can be obtained, resulting in a three-dimensional elevation model of the area of interest. Based on elevation models for different time periods, morphological changes can be monitored, and this knowledge will lead to a better understanding of the morphological dynamics on intertidal flats.
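Conceptually, each waterline acts as an elevation contour at the concurrent tide-gauge water level, and interpolating between many such contours yields the elevation model. A minimal sketch of that step, assuming the waterlines have already been extracted and height-assigned, could look as follows (illustrative only, not the authors' implementation):

```python
import numpy as np
from scipy.interpolate import griddata

def intertidal_dem(waterlines, grid_x, grid_y):
    """Build a DEM of an intertidal flat from height-assigned waterlines.

    waterlines : list of (points, water_level) tuples, where `points` is an
                 (N, 2) array of x/y coordinates of one detected waterline and
                 `water_level` the concurrent tide-gauge reading (m).
    grid_x/y   : 2-D arrays defining the output grid.
    """
    xy = np.vstack([pts for pts, _ in waterlines])
    z = np.concatenate([np.full(len(pts), lvl) for pts, lvl in waterlines])
    # Each waterline is treated as a contour at its tidal water level;
    # linear interpolation between contours yields the elevation surface.
    return griddata(xy, z, (grid_x, grid_y), method="linear")
```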
Our test site is an intertidal area on the German North Sea coast, north of the mouth of the river Elbe. We used five years of Sentinel-1A/B dual-polarization SAR data and generated timeseries of elevation models of that area. Our method allowed the identification of areas of strong sediment loss (erosion) and gain (accretion), the former even resulting in a cut through an elongated sand flat, which led to a separation of a small island from the coast. In the future, our method might also contribute to a better understanding of morphological processes in the Wadden Sea related to sea level rise.
Wiehle, S., Lehner, S. (2015), Automated Waterline Detection in the Wadden Sea Using High-Resolution TerraSAR-X Images, Journal of Sensors, 2015, 450857.
Coastal zones are increasingly exposed to human exploitation and environmental forcing from sea level rise caused by climate change, with additional threats posed by altered storm frequency, magnitude and trajectories, depending on location. Coastal ecosystems and landforms are critical parts of resilient coastlines, buffering environmental hazards, sequestering carbon, forming natural habitats that support global biodiversity, and providing recreational spaces and livelihoods. While we, as a society, have been good at developing 'solutions' that allow us to 'adapt' to these types of challenges over the short term, the prediction (on the basis of first principles) of the evolution of engineering interventions at the coast over decades has thus far eluded us. The fact that we cannot accurately predict coastal evolution over decadal time scales makes it difficult to plan ahead and to avoid 'boxing ourselves in' to a state in which coastal flood and erosion risks escalate.
To address the complex and uncertain challenges faced by coastal zones, national and regional authorities in a number of countries have begun to deploy large-scale nature-based adaptation schemes and strategies that incorporate the restoration or re-creation of dynamic sedimentary coastal landforms. A relatively new approach, the sand engine (or sand motor), has been the subject of an experiment in the management of dynamic coastlines in the Netherlands, where a large sandbar-shaped 'beach peninsula' covering about 1 km² was artificially created in 2011 using approximately 21.5 million m3 of material. As expected, waves, wind and currents have moved the sand over the years and realigned the coast, providing a range of societal and economic benefits. While beaches in this area had previously been artificially replenished every five years, the sand engine is expected to make replenishment unnecessary for the next 20 years. Over time this method is thus likely to be more cost-effective than current practices; it also helps nature by reducing the repeated disruption of sensitive habitats caused by replenishment.
To assess the success of such adaptation schemes and policy responses, and to plan future interventions in light of the dynamic response of mobile sediment placements to wave and tidal forcing, effective monitoring is needed to draw reliable conclusions about their efficacy and to guide future actions that minimise economic, social and ecological impacts. Conventional coastal monitoring may be considered somewhat fragmented across space and time, with a number of approaches being employed that do not adequately address the fully dynamic nature of these features. For instance, infrequent airborne surveys capture a synoptic view over reasonably large areas, but at a single point in time, and therefore miss subtle changes and the behaviour of shorter-term processes. Point measurements can provide temporal detail, but lack the ability to capture the spatially specific and interrelated changes within the coastal zone system. Until quite recently Earth Observation had also not been able to acquire data with the required spatial and temporal specifications, but the establishment of constellations of smallsats now offers an opportunity to better address coastal zone monitoring.
In 2019 the United Kingdom LaunchPad Initiative, based on the European Space Agency (ESA) Business Applications Small Advanced Research in Telecommunications Systems (ARTES) Apps process and supported by the Satellite Applications Catapult (SAC), which hosts the UK Ambassador Platform for ESA Business Applications, funded four projects looking at coastal zone issues. One of these, Coastal Dynamics Monitor (CoDyMon), tested the feasibility of a service exploiting high temporal frequency and fine spatial resolution optical Earth Observation data to capture the changing extent and character of intertidal sedimentary features associated with coastal adaptation schemes for climate change and sea level rise.
The proposed service was based on data from Planet's constellation of Dove cubesats, known as PlanetScope, which provides daily revisit, 3 m spatial resolution and 4 bands (now becoming 8 bands) of spectral data in the visible/near-infrared parts of the spectrum. Each acquisition was processed to map water, bare sediment and vegetated surface extents over the cloud-free areas. These products, when combined with local tide gauge data, allowed the extraction of multiple height-assigned waterline contours, which were compiled over short time periods depending on the number of viable observations. The core offering of the service was therefore three-dimensional morphological change information, in terms of elevation and extent, for each period. This information would support the management and monitoring of adaptation schemes, providing evidence to guide management triggers and to illustrate the changes to the general public.
The service was demonstrated before, during and after the delivery of the Bacton to Walcott Coastal Management Scheme, the placement of a large volume of sediment in the intertidal zone to extend the width and height of the beach. The scheme was designed to protect the Bacton Gas Terminal and the villages of Bacton and Walcott in North Norfolk, UK, and, due to its reduced scale compared to the Dutch sand engine, is more accurately referred to as a sandscaping scheme. It involved 1.5-1.8 million m3 of sediment being deposited on the coast during a 4-week period in July 2019. The demonstration used 22 images from early June 2019 through to January 2020, which were grouped into 4 periods for which morphological models were produced.
The proof of concept showed the potential of the service to deliver beach elevation data comparable with conventional approaches, while also being superior in terms of systematic and regular updates, information content and cost-effective implementation for the end user. It also raised a number of issues that need to be addressed before a robust, commercial and fully operational service can be provided. Positive interest has been cultivated across the stakeholder community and evaluation of the results is continuing, although more slowly than expected given the Covid-19 situation at the time of completing the work. The project team have prepared a business road map covering further engagement with the stakeholder community, progress towards a commercial and operational service, and the potential submission of bids for future support.
Most of the world's coastal environments are experiencing the negative effects of climate change combined with anthropogenic activities. In order to limit those effects, policy makers and managers need to understand the processes taking place on the coast, where the safety of goods and persons, but also the tourism economy linked to the preservation of beaches and the natural heritage of these interface environments, rich in biodiversity (mangroves, coral reefs, seagrass beds), are the primary concerns. CARIB-COAST is an international project led by the BRGM (https://www.carib-coast.com/en/). Its ambition is to pool, co-construct and disseminate approaches to coastal monitoring, coastal risk prevention and adaptation to climate change in the Caribbean. An analysis of existing monitoring practices made it possible to better constrain the needs for coastal monitoring in the Caribbean territories and to propose a strategy based on the experience gained in the different participating countries. In this respect, pilot sites have been identified in Guadeloupe, Martinique, Saint-Martin, Jamaica, Puerto Rico and Trinidad and Tobago to strengthen or initiate coastal monitoring approaches and to assess the role of coastal ecosystems such as mangroves, seagrasses and coral reefs in the protection against coastal erosion. Many studies focus on the Coastal Vulnerability Index (CVI), one of the predictive approaches to coastal classification, which incorporates various coastal variables. This index-based method simplifies a number of complex and interacting parameters and is widely used to measure the vulnerability of coastlines. Given the extent of the area of interest, the diversity of pilot sites and of the variables to be integrated in the CVI calculation, and the intent to develop a common approach for all areas, this study has taken advantage of currently available remote sensing data to construct a vulnerability index. The proposed Ecosystem-Based Coastal Vulnerability Index (EBCVI) involves a wide range of variables, mainly focused on ecosystem characteristics, represented by diverse data types, and on their contribution to coastal protection against erosion. In particular, we used remotely sensed data to contribute to (i) current mangrove restoration actions, (ii) the evaluation of coral reef evolution and its role in erosion, and (iii) the promotion of seagrass protection.
The calculation of the EBCVI relies on:
● the production of mangrove distribution maps, using high-resolution Sentinel-2 data in a supervised classification approach;
● the use of seagrass and coral reef distribution and type maps provided by the Allen Coral Atlas;
● the calculation of surface metrics (texture, density, fragmentation…) from very high resolution Pléiades and PlanetScope images, describing the ecosystems at a fine scale;
● the application of a multi-criteria analysis to these metrics combined with other variables (geomorphology, socio-economics...) in order to map coastal vulnerability by assessing EBCVI values for each pilot site; the knowledge of local partners will allow a weighting of the importance of each criterion in the index (see the sketch below).
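As a sketch of how such a weighted multi-criteria aggregation might be computed (the actual EBCVI formulation and the partner-elicited weights are not given in the abstract, so all names and values below are hypothetical), consider:

```python
import numpy as np

def ebcvi(variables, weights):
    """Weighted multi-criteria aggregation sketch for an Ecosystem-Based
    Coastal Vulnerability Index. `variables` maps criterion names to 1-D
    arrays of values per coastal segment; `weights` maps the same names to
    expert weights (e.g. elicited from local partners)."""
    names = list(variables)
    w = np.array([weights[n] for n in names], dtype=float)
    w /= w.sum()                                  # normalise the weights
    scores = []
    for n in names:
        v = np.asarray(variables[n], dtype=float)
        rng = (v.max() - v.min()) or 1.0          # guard against constant criteria
        scores.append((v - v.min()) / rng)        # min-max rescaling to 0-1
    return np.average(np.vstack(scores), axis=0, weights=w)

# Hypothetical inputs for three coastal segments:
index = ebcvi(
    {"mangrove_fragmentation": [0.2, 0.7, 0.9],
     "reef_cover": [0.8, 0.3, 0.1],
     "slope": [0.05, 0.02, 0.01]},
    {"mangrove_fragmentation": 3, "reef_cover": 2, "slope": 1},
)
```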
This index has the potential to be used as a management tool to assess the impacts of climate change on Caribbean coasts through the development of existing observatories and good practices around a common protocol, and by (iv) triggering new initiatives in the Caribbean region, enabling an appropriate implementation of short- and long-term policies dedicated to coastline protection.
The CATDS (Centre Aval de Traitement des Données SMOS) is the French ground-segment facility in charge of the generation, calibration, archiving and dissemination of the SMOS level 3 and level 4 science products. It processes Sea Surface Salinity (SSS, also named Ocean Salinity, OS) and Soil Moisture (SM). CATDS also provides services to users and scientists.
More specifically, it is in charge of:
- Processing of the SMOS level 3 and 4 science data from the level 1B data received from the ESA Data Processing Ground Segment (DPGS),
- Re-processing of the SMOS level 3 and 4 data,
- Cataloguing and archiving of the level 3 and 4 products, including the auxiliary data, the calibration data, and the level 1B data used for the processing,
- Calibration and validation of the SMOS L3 and L4 products,
- Dissemination of the SMOS level 3 and level 4 products to the users,
- Assistance and support to SMOS L3 and L4 users,
- Potential feedback towards the DPGS (new algorithms, calibration functions, and software).
CATDS involves two main components:
- A production centre (called C-PDC) which routinely produces and disseminates L3 and L4 data.
- Two expertise centres (called C-EC) which host the definition of algorithms, assess the quality of the products and provide specific information to users.
The C-PDC processing chain that generates SM and SSS/OS L3 data from L1B products is composed of several processors, some of which are based on the ESA DPGS prototypes. The main difference with respect to the level 2 products is that DPGS products are organized by half orbits, whereas C-PDC processing is multi-orbit (from daily to monthly products).
One of the goals of the C-PDC L3 processors is to select correct input data and to reject dubious data. These L3 processors perform temporal analyses, time aggregations and spatial and temporal averaging.
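As an illustration of this kind of multi-orbit temporal aggregation, the fragment below builds 10-day and monthly means from hypothetical daily L3 files; the file pattern and the variable names (including the quality flag) are assumptions, not the actual CATDS product layout.

```python
import xarray as xr

# Hypothetical daily L3 SSS files; the real CATDS products are also NetCDF,
# but with their own naming and variable conventions.
ds = xr.open_mfdataset("SMOS_L3_SSS_daily_*.nc", combine="by_coords")

# Reject dubious inputs first (here via a hypothetical quality-flag variable),
# then aggregate the daily maps into 10-day and monthly means.
good = ds["sss"].where(ds["quality_flag"] == 0)
sss_10d = good.resample(time="10D").mean()
sss_monthly = good.resample(time="MS").mean()
```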
The Ocean products generated and distributed are:
• L3 debiased daily SSS maps
• L3 10 day and monthly averaged debiased SSS maps
• L4 SMOS/SMAP/ISAS (In Situ) Optimal Interpolation SSS maps
In the debiased products, the SSS is corrected for land-sea contamination and latitudinal seasonal biases. An additional salinity field provides the SSS corrected for rain freshening. NRT products are delivered within one day.
The Land products generated and distributed are:
• L3 daily SM maps
• L3 3-day, 10-day and monthly aggregated SM maps
• L4 root zone soil moisture maps
• L4 3-day SM maps disaggregated at 1 km (work in progress)
Each product is referenced with a DOI.
A web site (www.catds.fr) presents the products and gives information related to the operational status. It includes an online catalogue which lists, describes and gives access to the products. The website also includes a tool (maps.catds.fr) to search, browse and visualize the main fields of the generated products.
Users can access the products either through FTP or through the Sipad, an interactive Web-based tool which allows aggregation and sub-setting.
A single point of contact (support@catds.fr) operated by the production center allows users:
- To obtain technical or scientific support on the products and their usage
- To give feedback about the products
We present here the CATDS production centre with a focus on the products available after the latest reprocessing campaign (2020-2021). We also describe the L3 data dissemination process and the way to retrieve CATDS L3/L4 products.
The Soil Moisture and Ocean Salinity (SMOS) retrieved sea surface salinities (SSS) represent the longest satellite salinity time series; this has led to the setting up of the ESA Climate Change Initiative (CCI) SSS project, which aims to merge SMOS, Aquarius and Soil Moisture Active Passive (SMAP) salinities. The main results obtained with SMOS salinities concern: the detection of interannual variations with a better synopticity than that obtained from in situ measurements and over a longer time period than covered by the Aquarius and SMAP missions; the detection of large mesoscale structures with a spatial resolution (~45 km) two to three times better than that of Aquarius measurements and comparable to that of SMAP measurements; the identification of density-compensated structures; and the detection of the influence of freshwater fluxes (rainfall, river plumes) on salinity and, by extension, on the density of sea water.
This paper will review some of the main achievements concerning the small-scale features detected with SMOS SSS related to freshwater inputs to the ocean and their redistribution by the ocean circulation. It will show the scientific value of increasing the spatial resolution of satellite salinity measurements, supporting the development of the new SMOS high-resolution (SMOS-HR) concept. The SMOS-HR interferometric system aims at providing a spatial resolution of 10 km while maintaining an uncertainty on the individual measurement at this resolution of the same order as that of the SMOS measurement at ~45 km resolution (~0.5 to 1 pss on daily measurements).
When comparing satellite SSS with Argo upper salinities, 20% of the standard deviation of the difference between CCI SSS (at 50 km resolution) and Argo SSS can be explained by the SSS variability between ~10 km and 50 km, as estimated by the GLORYS Mercator reanalysis. At the local scale, this variability can represent more than twice the standard deviation of the difference between satellite SSS and Argo SSS.
SMOS and SMAP SSS have been used to assess ocean circulation and the salt transport by eddies [Delcroix et al., 2019; Hasson et al., 2019; Melnichenko et al., 2021]. Indeed, this transport is one of the processes contributing to salinity variations that needs to be taken into account to link salinity and freshwater fluxes, and to better characterize water exchanges between the upper and deep ocean and between coastal and open-ocean regions. Improved spatial resolution would allow better characterization of mid-latitude mesoscale and sub-mesoscale variability, improve the signal-to-noise ratio and extend these capabilities to the polar oceans.
The assimilation of SMOS and SMAP SSS in ocean models [Martin et al., 2019; Tranchant et al., 2019] has been shown to improve the accuracy of simulated SSS by 7 to 12%, depending on the models and regions, but these authors also point out that improving the spatial resolution down to the order of 10 km would represent an even more significant breakthrough [Martin et al., 2020].
In the Gulf of Guinea, Alory et al. [2021] have shown, by combining model simulations and satellite measurements, that changes in geostrophic currents and vertical stratification are largely controlled by salinity, which weakens the coastal upwelling by about 50% near the mouth of the Niger River. This phenomenon has important consequences for fisheries resources, as upwellings generate nutrient inputs at the surface [Dossa et al., 2021]. At low resolution, ocean model SSS and SMOS SSS are in good agreement. Nevertheless, the spatial resolution of SMOS does not allow a detailed description of the structure of the near-shore SSS, where very strong gradients are present, and ocean model simulations remain subject to uncertainties related to river discharge and to the parameterization of coastal processes such as mixing and internal tides.
In the northwestern tropical Atlantic, during the EUREC4A in situ campaign in February 2020, SMOS measurements allowed the identification of a plume of desalinated water from the Amazon and showed that a significant part of this water was transported offshore, with a volume equivalent to the discharge of the Amazon River in January [Reverdin et al., 2021]. There, small-scale features structure the spatial distribution of the air-sea CO2 fluxes. An improved spatial resolution would allow a detailed description of the processes responsible for the complex circulation (coastal currents influenced by bathymetry, filaments, eddies...) of coastal water masses, as shown by a comparison of ocean colour maps (kilometre resolution) and SSS (the spatial structures of Chl-a and SSS are very often qualitatively coherent in this area).
At high latitudes, SMOS can detect the spatial variability of salinity in the Arctic Ocean, in particular that related to river plumes [Olmedo et al., 2018; Supply et al., 2020; Tarasenko et al., 2021], but the measurements are strongly polluted by the proximity of ice, so the need for increased resolution is even stronger there. This need is driven by: the description of mesoscale phenomena (in the Arctic Ocean, the synoptic scales are naturally small, of the order of 1-10 km); better monitoring of the desalination related to ice melt, by allowing approaches to within ~10 km of the ice edge, against ~50 km with SMOS and SMAP satellite measurements; better monitoring of the desalination from river discharges, as much of the freshwater in the Arctic flows along the coast (these plumes are very poorly reproduced by models due, among other things, to the lack of knowledge of river flows); and better filtering of the small-scale ice that strongly contaminates the satellite SSS.
Alory, G., C. Y. Da-Allada, S. Djakouré, I. Dadou, J. Jouanno, and D. P. Loemba (2021), Coastal Upwelling Limitation by Onshore Geostrophic Flow in the Gulf of Guinea Around the Niger River Plume, Frontiers in Marine Science, 7(1116), doi:10.3389/fmars.2020.607216.
Delcroix, T., A. Chaigneau, D. Soviadan, J. Boutin, and C. Pegliasco (2019), Eddy-Induced Salinity Changes in the Tropical Pacific, Journal of Geophysical Research: Oceans, 124(1), 374-389, doi:10.1029/2018JC014394.
Dossa A.N., G. Alory, A.C. da Silva, A.M. Dahunsi and A. Bertrand, 2021. Global Analysis of Coastal Gradients of Sea Surface Salinity. Remote Sensing, 13, 2507, doi: 10.3390/rs13132507
Hasson, A., J. T. Farrar, J. Boutin, F. Bingham, and T. Lee (2019), Intraseasonal Variability of Surface Salinity in the Eastern Tropical Pacific Associated With Mesoscale Eddies, Journal of Geophysical Research: Oceans, 124(4), 2861-2875, doi:10.1029/2018JC014175.
Martin, M. J., R. R. King, J. While, and A. B. Aguiar (2019), Assimilating satellite sea-surface salinity data from SMOS, Aquarius and SMAP into a global ocean forecasting system, Quarterly Journal of the Royal Meteorological Society, 145(719), 705-726, doi:10.1002/qj.3461.
Martin, M. J., E. Remy, B. Tranchant, R. R. King, E. Greiner, and C. Donlon (2020), Observation impact statement on satellite sea surface salinity data from two operational global ocean forecasting systems, Journal of Operational Oceanography, 1-17, doi:10.1080/1755876X.2020.1771815.
Melnichenko, O., P. Hacker, and V. Müller (2021), Observations of Mesoscale Eddies in Satellite SSS and Inferred Eddy Salt Transport, Remote Sensing, 13(2), 315.
Olmedo, E., C. Gabarró, V. González-Gambau, J. Martínez, J. Ballabrera-Poy, A. Turiel, M. Portabella, S. Fournier, and T. Lee (2018), Seven Years of SMOS Sea Surface Salinity at High Latitudes: Variability in Arctic and Sub-Arctic Regions, Remote Sensing, 10(11), 1772.
Reverdin, G., et al. (2021), Formation and Evolution of a Freshwater Plume in the Northwestern Tropical Atlantic in February 2020, Journal of Geophysical Research: Oceans, 126(4), e2020JC016981, doi:10.1029/2020jc016981.
Supply, A., J. Boutin, J.-L. Vergely, N. Kolodziejczyk, G. Reverdin, N. Reul, and A. Tarasenko (2020), New insights into SMOS sea surface salinity retrievals in the Arctic Ocean, Remote Sensing of Environment, 249, 112027, doi:10.1016/j.rse.2020.112027.
Tarasenko, A., A. Supply, N. Kusse-Tiuz, V. Ivanov, M. Makhotin, J. Tournadre, B. Chapron, J. Boutin, N. Kolodziejczyk, and G. Reverdin (2021), Properties of surface water masses in the Laptev and the East Siberian seas in summer 2018 from in situ and satellite data, Ocean Sci, 17(1), 221-247, doi:10.5194/os-17-221-2021.
Tranchant, B., E. Remy, E. Greiner, and O. Legalloudec (2019), Data assimilation of Soil Moisture and Ocean Salinity (SMOS) observations into the Mercator Ocean operational system: focus on the El Niño 2015 event, Ocean Sci., 15(3), 543-563, doi:10.5194/os-15-543-2019.
The European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) mission provides continuous global maps of soil moisture and ocean surface salinity. Its payload MIRAS (Microwave Imaging Radiometer with Aperture Synthesis) is an L-band (1400-1427 MHz) interferometric radiometer which achieves unprecedented spatial resolution at this frequency. SMOS was successfully launched on 2 November 2009 under the ESA Earth Explorer programme and has been acquiring high-precision data since the end of the commissioning phase. More than ten years after launch, the SMOS team continues to improve the calibration and image reconstruction processes.
The SMOS team has recently published the 3rd Mission Reprocessing dataset, which includes the changes to calibration and image reconstruction made in the Level 1 Operational Processor (L1OP) version v724 during the past few years. The new L1 processor incorporates several improvements to the calibration and image reconstruction algorithms. The present paper analyzes in detail the quality and stability of the L1 reprocessed dataset by comparing the SMOS results with a model over a well-known, stable region of the Pacific Ocean. The model is based on salinity provided by the In Situ Analysis System (ISAS), converted to brightness temperature (BT) with the SMOS L2 forward model. Aggregated BT from the SMOS reprocessing campaign is then compared with the model, and biases are assessed. The results presented here confirm the improvement in data quality expected for the newer baseline. In particular, it is demonstrated that long-term stability has greatly improved with the newer baseline, along with other reference quantitative metrics such as short-term stability. Inspection of the results has confirmed the existence of some remaining biases and open points. These are described and analyzed, establishing the basis for their improvement in subsequent reprocessing campaigns.
The Changing-Atmosphere InfraRed Tomography explorer (CAIRT) is a candidate for ESA's Earth Explorer 11 (EE11) mission, to be launched in 2031 or 2032, and has been selected for a Phase 0 preliminary study together with three other candidates. The mission has been proposed to achieve a step change in our understanding of the coupling of atmospheric circulation, composition and regional climate by quantifying: (A) the middle-atmosphere circulation change, (B) the atmospheric gravity wave momentum flux and wave driving, (C) the change in stratospheric ozone due to transport and chemistry, (D) the impact of transient solar events and space weather on climate variability, (E) the upper troposphere and lower stratosphere (UTLS) aerosol composition and precursor gases, and (F) the UTLS variability and its impact on tropospheric composition and air quality. This can be achieved by atmospheric tomography through infrared limb imaging. The CAIRT concept proposes to perform tomography of the atmosphere from the troposphere to the lower thermosphere (about 5 to 115 km altitude) with a swath of 500 km and high spatial and spectral resolution, providing a three-dimensional picture of atmospheric structure at unprecedented scales. Flying in loose formation with the Second Generation Meteorological Operational Satellite (MetOp-SG) will enable combined retrievals with observations by the New Generation Infrared Atmospheric Sounding Interferometer (IASI-NG), as well as from the other nadir sounders, resulting in consistent atmospheric profile information from the surface to the lower thermosphere.
While a general introduction of CAIRT is given in session B2.01 (The Earth Explorer 11 Candidate Missions), this poster will introduce CAIRT with a focus on trace species in the stratosphere and mesosphere.
Observations in limb geometry from satellite platforms are very valuable for monitoring the stratospheric ozone layer on a global scale, as they provide information with high spatial and temporal coverage and good vertical resolution. At the University of Bremen, observations from two limb sounders, SCIAMACHY (2002-2012) and OMPS-LP (2012-present), were processed using the same radiative transfer model, the same spectroscopic databases and a similar retrieval algorithm.
These two data sets were merged to obtain a consistent time series of the global ozone distribution. Because of the short overlap of the two missions, measurements from the MLS instrument were used as a transfer function to provide a statistically significant bias estimate. Monthly latitude- and longitude-resolved time series of ozone profiles were calculated, exploiting the high spatial resolution of the data sets. We used this merged data set to study long-term ozone changes: a multi-linear regression model was applied over the period 2003-2020, finding significant positive ozone trends between 35 and 45 km at mid-latitudes, with ozone increasing by up to 2-4% per decade. Negative but statistically non-significant changes were found in the lower tropical stratosphere. We noticed vertically consistent patterns in the longitude-resolved trends, particularly at northern mid- and high latitudes above 30 km and in the tropical lower stratosphere.
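As an illustration of the regression step, a minimal multi-linear trend fit over monthly ozone anomalies might look as follows; the proxy set and the exact model formulation used in the study may differ, so this is a generic sketch rather than the authors' code.

```python
import numpy as np

def mlr_trend(anomalies, time_years, qbo30, qbo50, enso, solar):
    """Minimal multi-linear regression fit for a monthly ozone anomaly
    time series (in %), using proxies commonly employed in ozone trend
    studies (QBO at two levels, ENSO, solar flux). Returns the linear
    trend in % per decade. Illustrative only."""
    X = np.column_stack([
        np.ones_like(time_years),   # offset
        time_years,                 # linear trend term (years)
        qbo30, qbo50, enso, solar,  # standardised proxy time series
    ])
    coef, *_ = np.linalg.lstsq(X, anomalies, rcond=None)
    return coef[1] * 10.0           # %/yr -> %/decade
```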
We then performed simulations with the TOMCAT global 3-D chemistry transport model (CTM), forced by ERA5 reanalyses, for the same period. We compared the trend results from the merged data set with the model simulations to check the consistency of the detected zonal and longitudinally resolved patterns. We focused in particular on the trend structure identified at northern high latitudes, where larger positive values are found over the Atlantic sector, whereas close-to-zero changes are detected over the Siberian/Pacific sector. Seasonally resolved trends provided valuable insight into this zonal asymmetry, with the largest variability with longitude found in spring and wintertime and good consistency between the observations and the CTM. Since limb-scattering observations cannot sound polar night conditions, the TOMCAT simulations proved to be a necessary tool to better understand wintertime trends. By comparing the ozone changes with trends in the temperature and meridional wind fields from ERA5, we investigated the driving mechanisms of these asymmetries and found them to be mainly dynamically driven. Dedicated TOMCAT runs, with satellite-coincident sampling or with repeated meteorology, were relevant for this study.
This work aims to understand the distribution of NO2 and NO (collectively called NOx) in the Upper Troposphere - Lower Stratosphere (UTLS), with a focus on the Asian monsoon region. Observations of NO2 from the Optical Spectrograph and InfraRed Imager System (OSIRIS), the Atmospheric Chemistry Experiment (ACE), and the Stratospheric Aerosol and Gas Experiment (SAGE) III on the International Space Station (ISS) are considered, along with NO observations from ACE. The PRATMO photochemical box model is used to calculate NOx based on OSIRIS and SAGE III/ISS NO2 and O3 observations. The satellite data are compared to NOx from the Whole Atmosphere Community Climate Model (WACCM). We find a low NO2 anomaly from 100 to 60 hPa over the Asian continent in the summer months, while the NO and NOx anomalies are elevated over the same region and time frame. There is very good agreement between WACCM and the instrument data. Sensitivity tests with PRATMO show that the elevated NOx is caused by the colder temperatures and lower O3 levels within the monsoon.
Chlorine dioxide (OClO) is a by-product of the ozone-depleting halogen chemistry in the stratosphere. Although it is rapidly photolysed at low solar zenith angles (SZAs), it plays an important role as an indicator of chlorine activation in polar regions during polar winter and spring at twilight conditions, because its formation depends nearly linearly on chlorine oxide (ClO).
We compare slant column densities (SCDs) of OClO obtained from measurements of the TROPOspheric Monitoring Instrument (TROPOMI) with meteorological data and with polar stratospheric cloud (PSC) observations by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on CALIPSO, for both the Antarctic and Arctic regions.
The observed OClO SCDs are generally well (anti-)correlated with the meteorological conditions in the polar winter stratosphere: e.g., the chlorine activation signal appears as a sharp gradient in the time series of the OClO SCDs once the temperature drops well below the nitric acid trihydrate (NAT) existence temperature T(NAT). Enhanced OClO values can also be observed on the lee sides of mountains at the beginning of the winters, indicating a possible effect of mountain lee waves on chlorine activation. The OClO SCDs also coincide well with CALIOP measurements in which PSCs are detected.
Very high OClO levels are observed for the Northern Hemispheric winter 2019/2020, with an extraordinarily long-lasting and stable polar vortex, even approaching the values found for Southern Hemispheric winters. In the extraordinary Southern Hemispheric winter 2019, a minor sudden stratospheric warming was observed at the beginning of September. In this winter, OClO values similar to those of the previous (usual) winter were measured until that event, but OClO deactivation occurred 1-2 weeks earlier.
The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) is a limb-imaging Fourier-transform spectrometer (iFTS) providing mid-infrared spectra with high spectral resolution (0.0625 cm-1 in the spectral range 750-1400 cm-1). GLORIA, a demonstrator for the Changing-Atmosphere Infra-Red Tomography explorer (CAIRT, one of the candidates selected for Phase 0 of Earth Explorer 11), was deployed on the Russian M55 Geophysica and is still being deployed on HALO, the German high-altitude research aircraft. In order to extend the vertical range of GLORIA observations to the middle stratosphere, while still reaching down to the middle troposphere, the instrument was adapted for measurements from stratospheric balloon platforms. GLORIA-B performed its maiden flight from ESRANGE, northern Sweden, in August 2021 during the KLIMAT 2021 campaign in the framework of the EU Research Infrastructure HEMERA.
The objectives of the GLORIA-B observations for this campaign were primarily its technical qualification and the provision of a first imaging hyperspectral limb-emission dataset from 5 to 36 km altitude. Scientific objectives are, amongst others: the observation of the evolution of the upper tropospheric and stratospheric chlorine and nitrogen budget/family partitioning in a changing climate, in combination with the set of 20 MIPAS-B (Michelson Interferometer for Passive Atmospheric Sounding - Balloon) flights since the mid-1990s; the observation of the BrONO2 (bromine nitrate) evolution during sunset/sunrise, in synergy with BrO observations by the TotalBrO instrument (Univ. Heidelberg) on the same gondola; and the quantification of the pollution of the Arctic upper troposphere/lower stratosphere by forest fires.
In this contribution we will demonstrate the performance of GLORIA-B with regard to level-1 data (calibrated spectra) as well as level-2 data, consisting of retrieved altitude profiles of a variety of trace gases. These results will be characterized through uncertainty estimates and averaging kernels, as well as comparisons with externally available datasets.
The nascent tourism-based space race, accelerated by recent billionaire-funded programs, necessitates a detailed examination of the modern and future impact of space launches on the Earth's atmosphere. We develop and implement an air pollutant emissions inventory for rocket launches and re-entry ablation, for a contemporary scenario (emissions in 2019) and for proposed space tourism launch rates, in the chemical transport model GEOS-Chem coupled to a radiative transfer model. Ozone (O3) depletion due to rocket emissions is caused by chlorine from solid rocket fuel (66%) and re-entry ablation emissions of nitrogen oxides (34%), while the impact of other pollutants (H2O, alumina) is negligible. The impact of contemporary rocket emissions on global mean stratospheric O3 depletion is small (< 0.01% loss), in agreement with previous studies. The most severe O3 depletion is concentrated in the Arctic upper stratosphere, the same region of the atmosphere that is otherwise recovering most rapidly following the restriction of O3-depleting substances by the Montreal Protocol. Over the course of a decade of contemporary and space tourism emissions, O3 depletion due to rockets threatens to undermine 13% and 20%, respectively, of springtime Arctic upper stratospheric O3 recovery. Black carbon (soot) emissions from hydrocarbon-fuelled rockets result in a substantial average global warming of 12 mW m-2 after just 3 years of routine space tourism launches. Rockets' contribution to global soot radiative forcing (9%) is vastly disproportionate to their contribution to total soot emissions (< 0.001%), because rockets deposit soot directly into the stratosphere and mesosphere, resulting in a warming per unit emitted mass approximately 600 times greater than that of surface and aviation sources. These O3 depletion and radiative forcing results provide clear evidence of the need to regulate both the fuel types and the launch rates of a space-tourism industry poised for rapid growth.
The Swedish-led Odin satellite was launched in February 2001, and the OSIRIS instrument onboard Odin began operation shortly afterward. OSIRIS measures vertical radiance profiles of spectrally dispersed limb-scattered sunlight, and these measurements are used to retrieve distributions of stratospheric trace gas species, including ozone and nitrogen dioxide, as well as stratospheric aerosol extinction profiles. At the time this abstract was written, OSIRIS had been in operation for over twenty years and continues to make its measurements with no apparent decrease in quality.
Over the past twenty years the OSIRIS data have been used to investigate long-term change in the global distribution of the vertical ozone profile. Over the past fifteen years, the OSIRIS ozone data record, on its own and within merged data products, has been used in many international initiatives, including the SPARC Data Initiative, the SPARC-sponsored SI2N initiative and the currently active LOTUS initiative. The OSIRIS ozone data record is the longest-running, currently collected vertical ozone profile data set, and as such its contribution to these initiatives, and to the resulting work that feeds into the WMO ozone assessments, is significant.
This paper will highlight recent changes to the data record that correct for known pointing issues and known instrumental changes, such as temperature-dependent spectral response functions and wavelength registration drifts. The presentation will also discuss the newest merged ozone data record produced by the University of Saskatchewan team, the SAGE II/OSIRIS/SAGE III-ISS product. The most up-to-date trend analysis of this data record, using both MLR and DLM techniques, will be featured.
Remote sensing of atmospheric state variables typically relies on the inverse solution of the radiative transfer equation. An adequately characterized retrieval provides information on the uncertainties of the estimated state variables as well as on how any constraint or a priori assumption affects the estimate. The SPARC activity TUNER (Towards Unified Error Reporting) aims at providing guidelines for the useful characterization of remotely sounded temperature and composition data of the atmosphere. Reported characterization data should be intercomparable between different instruments, empirically validatable, grid-independent, usable without detailed knowledge of the instrument or retrieval technique, traceable, and still have reasonable data volume. The latter may force one to work with representative rather than individual characterization data. Many errors derive from approximations and simplifications used in real-world retrieval schemes, which are reviewed in this paper, along with related error estimation schemes. The main sources of uncertainty are measurement noise, calibration errors, simplifications and idealizations in the radiative transfer model and retrieval scheme, auxiliary data errors, and uncertainties in atmospheric or instrumental parameters. Some of these errors affect the result in a random way, while others chiefly cause a bias or are of mixed character. Beyond this, it is of utmost importance to know the influence of any constraint and prior information on the solution. While different instruments or retrieval schemes may require different error estimation schemes, we provide a list of recommendations which should help to unify retrieval error reporting.